Dataset columns (dtype and observed range/values):

  column           dtype            range / values
  ---------------  ---------------  ------------------------------------------
  forum_id         stringlengths    9 to 20
  forum_title      stringlengths    3 to 179
  forum_authors    sequencelengths  0 to 82
  forum_abstract   stringlengths    1 to 3.52k
  forum_keywords   sequencelengths  1 to 29
  forum_decision   stringclasses    22 values
  forum_pdf_url    stringlengths    39 to 50
  forum_url        stringlengths    41 to 52
  venue            stringclasses    46 values
  year             stringdate       2013-01-01 00:00:00 to 2025-01-01 00:00:00
  reviews          sequence
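Each row pairs one forum's metadata with its review thread, serialized under the `reviews` column. The snippet below is a minimal sketch of how such a dump could be loaded and the per-note payloads decoded; the file name `openreview_forums.jsonl` is a placeholder, and the field layout (parallel lists with a JSON-encoded `structured_content_str`) is assumed from the preview rows that follow.

```python
import json
from datasets import load_dataset

# Placeholder path: point this at the actual dataset repo or local JSONL dump.
ds = load_dataset("json", data_files="openreview_forums.jsonl", split="train")

example = ds[0]
print(example["forum_id"], "|", example["forum_title"])
print("venue:", example["venue"], "| decision:", example.get("forum_decision"))

# The `reviews` column holds the forum thread as parallel lists; each note's
# body appears to be serialized as a JSON string in `structured_content_str`.
reviews = example["reviews"]
for note_id, note_type, raw in zip(
    reviews["note_id"], reviews["note_type"], reviews["structured_content_str"]
):
    content = json.loads(raw)       # dict with keys like summary, rating, comment
    rating = content.get("rating")  # present only on official reviews
    print(f"{note_id} ({note_type}): rating={rating}")
```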
BsQTw0uPDX
Hierarchical Preference Optimization: Learning to achieve goals via feasible subgoals prediction
[ "Utsav Singh", "Souradip Chakraborty", "Wesley A. Suttle", "Brian M. Sadler", "Anit Kumar Sahu", "Mubarak Shah", "Vinay P. Namboodiri", "Amrit Singh Bedi" ]
This work introduces Hierarchical Preference Optimization (HPO), a novel approach to hierarchical reinforcement learning (HRL) that addresses non-stationarity and infeasible subgoal generation issues when solving complex robotic control tasks. HPO leverages maximum entropy reinforcement learning combined with token-level Direct Preference Optimization (DPO), eliminating the need for pre-trained reference policies that are typically unavailable in challenging robotic scenarios. Mathematically, we formulate HRL as a bi-level optimization problem and transform it into a primitive-regularized DPO formulation, ensuring feasible subgoal generation and avoiding degenerate solutions. Extensive experiments on challenging robotic navigation and manipulation tasks demonstrate HPO’s impressive performance, where HPO shows an improvement of up to 35% over the baselines. Furthermore, ablation studies validate our design choices, and quantitative analyses confirm HPO’s ability to mitigate non-stationarity and infeasible subgoal generation issues in HRL.
[ "hierarchical reinforcement learning", "preference learning" ]
https://openreview.net/pdf?id=BsQTw0uPDX
https://openreview.net/forum?id=BsQTw0uPDX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "p45Em4vzNT", "lvhaTGTnwg", "Vw3dKmEQ4V", "SsmnUOj1Y0", "RM9MFHMSpc", "O0e43RscdD", "DK2ZvTdycm", "0cjtbKbhU5" ], "note_type": [ "official_review", "comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1730751708535, 1731973744526, 1731475255516, 1730414871403, 1731475135792, 1730280820212, 1730372937108, 1731953704407 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8804/Reviewer_ELYQ" ], [ "ICLR.cc/2025/Conference/Submission8804/Authors" ], [ "ICLR.cc/2025/Conference/Submission8804/Authors" ], [ "ICLR.cc/2025/Conference/Submission8804/Reviewer_MsT6" ], [ "ICLR.cc/2025/Conference/Submission8804/Authors" ], [ "ICLR.cc/2025/Conference/Submission8804/Reviewer_DgAQ" ], [ "ICLR.cc/2025/Conference/Submission8804/Reviewer_ZAGe" ], [ "ICLR.cc/2025/Conference/Submission8804/Reviewer_MsT6" ] ], "structured_content_str": [ "{\"summary\": \"This paper is about (goal-conditioned) hierarchical reinforcement learning. The authors describe two key challenges in hierarchical reinforcement learning: training instability due to non-stationary of off-policy learning for the higher-level policy and generation of infeasible sub-goals by the higher-level policy. It proposes a hierarchical approach in which the higher-level policy is optimized with a token-level direct preference optimization method and the lower-level policy is optimized with reinforcement learning. The goal of this approach is to make the learning of the higher-level policy independent from the lower-level policy (i.e. its current sub-optimal form) to avoid issues arising from non-stationarity. To this end, the paper re-formulates the hierarchical reinforcement learning problem as a bi-level optimization problem which is solved by first posing an equivalent constrained optimization problem. The proposed method is evaluated in a set of experiments and compared to a set of baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides good background on reinforcement learning from human feedback and direct preference optimization. The paper also clearly describes the limitations it aims to address. The authors introduce a bi-level formulation of the hierarchical reinforcement learning problem to provide formalized arguments for the issues that they want to address. The overall issue that is raised in this paper, i.e. the complications arising for the interplay between the high-level and low-level policies is highly relevant for hierarchical reinforcement learning and satisfying solutions for this problem are in demand.\", \"weaknesses\": \"In parts, this paper is hard to follow. For example, the part where the notation and the sub-goals are introduced is confusing as to the nature and purpose of the sub-goals. More clarity as to the definition of the hierarchical MDP would be good. Another reason is the level to which the paper is self-contained. For example, in line 206, the authors use an equation for the optimal policy with reference to a tutorial, but it is unclear what the equation means and why it is used.\", \"questions\": \"In Fig. 3, the authors are presenting a form of evaluation for their claim regarding non-stationarity. This evaluation is indirect and is relying on the task (i.e. what distances mean). Can the authors present a task independent evaluation that supports their claim? E.g. 
sometime closer to the formalization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response regarding ethic concerns [part 2]\", \"comment\": \"**Remarks**: We deeply regret any confusion caused by our presentation and sincerely apologize if it gave the wrong impression. We humbly request the reviewer to reconsider raising an ethics flag, as our intention has never been to misrepresent contributions. We hope our clarifications have highlighted the distinctions. We are happy to add more experimental comparisons between both approaches.\\n\\nWe greatly value your feedback and would be happy to provide further clarifications if necessary. Thank you for your time and consideration.\"}", "{\"summary\": \"The authors present HPO, a hierarchical RL method that directly optimizes environment reward and preferences coming from sparse sub-goal reaching reward by a lower level policies. The specific objective is derived from DPO, and helps mitigate non-stationarity common to most HRL methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Motivation:** Non-stationarity in HRL is a big issue and this paper presents a well-motivated solution for it.\\n\\n**Comparisons:** The # of and relevancy of baselines is solid, this is a convincing set of comparisons.\\n\\n**Experiments:** The experiments are performed on tasks well-suited for HRL, and the analysis on goal distance prediction against HIER and HAC demonstrates that HPO\\u2019s objective encourages sampling reachable goals for the lower-level policy.\\n\\n**Clarity:** THe paper is overall quite clear and the walkthrough of how to obtain the objective was both interesting and easy to read.\", \"weaknesses\": [\"**Clarity:** Overall clarity is good, but the reason *why* non-stationarity is solved should be explicated better, earlier in the paper. Non-stationarity occurs because a high level policy outputting a certain subgoal can result in a different reward later in training. The reason why this is solved is because the ***reward** for the high-level policy automatically adapts with the low level policies changin*g as it is based on the value function. The part I italicized isn\\u2019t that clearly presented in the paper.\", \"For example, when looking at Figure 1, it just looks like the Value function being given to DPO is the reason why non-stationarity is solved. The caption states \\u201cSince this preference-based learning approach does not depend on lower primitive, this mitigates non-stationarity. Note that since the current estimation of value function is used to regularize the higher policy, it does not cause non-stationarity.\\u201d\", \"Instead, this can be simplified to some form of the italicized statement above; the current statement does not directly explain why.\", \"Similar comment for the introduction and after giving the full objective in Eq.14.\", \"**Experiments:** Why not compare HAC and HIER on the same graphs in Figs 3/4? It\\u2019s a little strange to pick each one individually for a separate comparison when they can be compared on the same things.\", \"**Minor Issues:**\", \"A high level policy discount factor is missing from Equations 4, 9, 10 and so on. 
Maybe it\\u2019s not necessary as the authors are considering the one-step DPO objective, but perhaps that could be mentioned?\", \"Figure 2 text size and line widths are too small\"], \"questions\": \"From Eq. 6 to Eq. 7, the constraint that $V_{\\\\pi_L} > V_{\\\\pi_L^*}$ is dropped for $V_{\\\\pi_L} > \\\\delta$ due to the justification that for sparse-reward goal-reaching, $V_{\\\\pi_L^*} > 0$ must be true. But this no longer optimizes the same objective, right? We still don\\u2019t know the ground truth value that $V_{\\\\pi^*_L}$ should be; the writing seems to ignore this issue. A simple footnote or extra sentence of discussion stating this problem would make this part clearer.\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"Hi, I am also a reviewer for the following paper: https://openreview.net/forum?id=mJKhn7Ey4y&referrer=%5BReviewers%20Console%5D(%2Fgroup%3Fid%3DICLR.cc%2F2025%2FConference%2FReviewers%23assigned-submissions)\\n\\nAfter reading both, I believe that the papers are so similar that they should not be two separate submissions. For example, the challenges they aim to solve in hierarchical RL are identical:\\n\\n\\\"The first (C1) is non-stationarity caused by evolving lower-level policies, which destabilizes the higher-level reward function (Chane-Sane et al., 2021). The second (C2) is the high-level policy\\u2019s tendency to generate subgoals that are infeasible for the lower-level policy to achieve.\\\"\\n\\nand\\n\\n\\\"Limitation L1: non-stationarity due to evolving lower-level primitive policy, and Limitation L2: infeasible subgoal generation by higher-level policy(Chane-Sane et al., 2021). When the higher and lower level policies are trained concurrently in HRL, due to continuously changing and sub-optimal lower level policy, the higher level reward function and transition model become non-stationary.\\\"\\n\\nfrom the first paragraph of the introduction for both.\\n\\nFurthermore, their proposed solution is almost the same:\\n\\nWe propose a novel Hierarchical Preference Optimization (HPO) method that leverages primitiveregularized Direct Preference Optimization (DPO) to solve complex RL tasks using human preference data (Section 4). Our approach is principled; we derive it by reformulating the HRL problem as a bi-level optimization problem. To the best of our knowledge, this is the first work to utilize the bi-level optimization framework to develop a principled solution for HRL.\\n\\nand\\n\\n\\\"The key idea underlying DIPPER is twofold: we introduce a DPO-based approach to directly learn higher-level policies from preferences, replacing the two-tier RLHF component in the scheme described above with a simpler, more efficient single-tier approach; we replace the reference policy inherent in DPO-based approaches, which is typically unavailable in complex robotics tasks, with a primitive-enabled reference policy derived from a novel bi-level optimization formulation of the HRL problem.\\\"\\n\\nThe resulting \\\"novel\\\" solution for the procedure proposed is in Eq 13 in this paper and Eq 15 in the other, and are exactly identical.\\n\\nFinally, the experiment figures are almost identical, with near-identical performance between the two methods introduced in the two papers comparing against an identical set of baselines on an identical set of environments. 
This by itself isn't strange, but the near-identical performance points to how these methods are essentially the same. I believe it's the same authors too, as they cite the same paper (Singh et al 2024) as the main prior work they build upon, with the same art style for all figures.\", \"the_main_differences_between_the_two_papers\": \"{0, 1} (this paper) reward for the low level policy vs {-1, 0} (the other paper) reward\\nThe use of human preferences vs substituting them with environment-generated preferences.\\nI think this should be one paper, ablating the single choice of preferences vs environment-generated preferences. I don't believe they are sufficiently different to create two papers for.\"}", "{\"title\": \"Response regarding ethic concerns [part 1]\", \"comment\": \"**General Response:** We sincerely thank the reviewer for taking the time to provide detailed feedback and for raising these concerns. We sincerely apologize for any misunderstanding caused by our writing, and we greatly appreciate the opportunity to clarify the differences between the two papers and address the reviewer\\u2019s comments in detail.\\n\\n> Comment 1: After reading both, I believe that the papers are so similar that they should not be two separate submissions. For example, the challenges they aim to solve in hierarchical RL are identical.\\n\\n**Response to Comment 1:** We acknowledge that both DIPPER and HPO papers share a similar motivation, as both aim to address challenges in hierarchical reinforcement learning (HRL) using ideas from preference optimization. However, we would like to highlight the key differences between the two. While both tackle non-stationarity and infeasible subgoal generation in HRL, the solution approach, derivations, and practical implementations are distinct. Below, we provide a detailed breakdown to address these concerns further.\\n\\n> **Comment 2:** The resulting \\\"novel\\\" solution for the procedure proposed is in Eq 13 in HPO paper and Eq 15 in DIPPER other, and are exactly identical.\\n\\n\\n**Response to Comment 2:** We understand the concern about the similarity in the mathematical structure of the two equations. However, the underlying derivation, assumptions, and implementation of these equations are different. Below, we provide a side-by-side comparison of the two equations and highlight the key differences:\\n\\nEquation (15) from the DIPPER is given by: \\n\\n\\\\begin{align}\\n\\\\mathcal{L}\\\\^d = - \\\\mathbb{E}\\\\_{(\\\\tau^1, \\\\tau^2) \\\\sim \\\\mathcal{D}} \\n\\\\bigg[& \\n\\\\log \\\\sigma \\n\\\\bigg( \\n\\\\sum\\\\_{t=0}^{T-1} \\n\\\\big(\\\\alpha \\\\log \\\\pi\\\\_U \\\\big( g^1\\\\_t \\\\mid s^1\\\\_t\\\\big) - \\\\alpha \\\\log \\\\pi\\\\_U \\\\big( g^2\\\\_t \\\\mid s^2\\\\_t\\\\big) + \\\\lambda \\\\underbrace{{(V\\\\_{L}^{k}(s^1\\\\_t, g^1\\\\_t) - V\\\\_{L}^k(s^2\\\\_t, g^2\\\\_t) )}}\\\\_{A:=}\\n\\\\big) \\n\\\\bigg) \\n\\\\bigg]. 
\\\\tag{15}\\n\\\\end{align}\\n\\nEquation (13) from the HPO paper is given by: \\n\\n\\\\begin{align}\\n\\\\mathcal{L}(\\\\pi^H\\\\_{\\\\star}, \\\\mathcal{D}) = - \\\\mathbb{E}\\\\_{(\\\\tau\\\\^1, \\\\tau\\\\^2, y) \\\\sim \\\\mathcal{D}} \\n\\\\bigg[& \\n\\\\log \\\\sigma \\n\\\\bigg( \\n\\\\sum\\\\_{t=0}^{T-1} \\n\\\\big(\\\\beta \\\\log \\\\pi\\\\^H\\\\_{\\\\star} \\\\big( g^1\\\\_t \\\\mid s^1\\\\_t, g^{\\\\star} \\\\big) - \\\\beta \\\\log \\\\pi^H\\\\_{\\\\star} \\\\big( g^2\\\\_t \\\\mid s^2\\\\_t, g^{\\\\star} \\\\big) + \\\\lambda \\\\underbrace{(V\\\\_{\\\\pi\\\\_L}(s^1\\\\_t, g^1\\\\_t) -V\\\\_{\\\\pi\\\\_L}(s^2\\\\_t, g^2\\\\_t))}\\\\_{B:=} \\n\\\\big) \\n\\\\bigg) \\n\\\\bigg]. \\\\tag{13}\\n\\\\end{align}\\n\\n\\n\\n**Key Differences between Eq. (15) and Eq. (13):** To highlight the difference between Eq. (13) and Eq. (15), let us consider the terms A in Eq. (15) and B in Eq. (13). We remark that the value function used in A is $V_{L}^k$ is a k step approximation of the optimal value function (which is derived using huristics in DIPPER without rigorous mathematical justifications, but works in practice, and also requires higher value of $k$). This leads to a double-loop algorithm proposed in DIPPER. One loop is to obtain the value of $V_{L}^k$ (k iterations), and then another loop is to update the policy after calculating the gradient of Eq. (15). \\n\\nIn contrast, let us consider the term B in Eq. (13) from HPO; we note that it just has the value function evaluations $V_{\\\\pi_L}$ without any additional inner loop to calculate the optimal value function, which is required in Eq. (15). Therefore, the algorithm proposed in HPO is a single loop algorithm to solve the challenges of HRL. \\n\\nWe concede that both approaches are trying to solve the challenges posed by HRL, but the solution approaches are different, which leads to different algorithms (two loop algorithm to solve Eq. (15) and only single loop algorithm to solve Eq. (13)). We agree with the reviewer that the contributions can be incremental, but we humbly request the reviewer not to raise an ethics flag because that was never the intentions and we extremely apologies if our writing has lead to the wrong impression. \\n\\n\\n**Additional Fundamental Difference between DIPPER and HPO.** \\n\\n(i) DIPPER requires access to a reference policy $\\\\pi_{ref}$ as mentioned in the objective in [Eq. (5), DIPPER]. Also the derivation and the loss function derived for DIPPER in Eq. (15) holds only for the specific design choice of the reference policy defined in [Eq. (9), DIPPER], which depends upon the optimal value function, which later requires a k-step approximation of the optimal value function.\\n\\n(ii) On the other hand, the algorithmic development in HPO is independent of any reference policy and does not require any such assumptions. The derivations in HPO are motivated by the developments in this paper (https://arxiv.org/pdf/2404.12358).\"}", "{\"summary\": \"The paper proposed a Hierarchical Preference Optimization (HPO) algorithm for hierarchical reinforcement learning. The algorithm aims to generate feasible subgoals and mitigate the non-stationary in HRL. 
HPO leveraged the low-level value functions to condition higher-level policy for subgoal generation and utilized the direct preference optimization (DPO) to optimize the higher-level policy.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The idea to introduce low-level value to regularize high-level policy optimization, and leveraging DPO to optimize a traditional RL problem is novel.\", \"weaknesses\": \"1. The proposed HPO algorithm is built on the goal-conditioned HRL concept. However, in the problem formulation, the definition of the high-level reward deviates from the standard goal-conditioned HRL framework, making it not a robust problem definition. Additionally, some derivations need further clarification or analysis. See details in Questions.\\n\\n2. The HPO is an HRL approach, but the paper doesn't compare with SOTA HRL works. I encourage the author to involve at least one recently representative HRL algorithm as baseline to further demonstrate HPO's advantages (reference [1][2][3]).\\n\\n[1] G\\u00fcrtler, Nico, Dieter B\\u00fcchler, and Georg Martius. \\\"Hierarchical reinforcement learning with timed subgoals.\\\" Advances in Neural Information Processing Systems. (2021).\\n\\n[2] Kim, Junsu, Younggyo Seo, and Jinwoo Shin. \\\"Landmark-guided subgoal generation in hierarchical reinforcement learning.\\\" Advances in neural information processing systems. (2021).\\n\\n[3] Zhang T, Guo S, Tan T, Hu X, Chen F. Generating adjacency-constrained subgoals in hierarchical reinforcement learning. Advances in Neural Information Processing Systems. (2020).\", \"questions\": \"**Question 1: Problem Formulation**.\\n\\nThe problem formulation for goal-conditioned HRL is not entirely accurate. Specifically, in the paragraph starting from Line 157: \\\"the lower-level policy is driven by a sparse reward signal, ...., indicating that the subgoal is reached.\\\" This is correct, as the low-level policy aims to achieve the subgoal set by the high-level policy. However, the high-level reward function is defined as $r^H = \\\\sum_{sub-trajectory}{r^L}$, where $r^L$ is the low-level reward. This doesn't seem correct to me, as the high-level aims to generate sub-goals guide the low-level to **achieve the final task objective**, i.e., the high-level reward is usually defined based on the environmental reward signal from the problem MDP. (Check the goal-conditioned HRL framework definition in reference [4]). With the definition given in the paper, the high-level reward appears to be evaluating \\\"how many steps in total of the low-level policy is staying near my generated subgoal.\\\" In this problem formulation, the original environmental reward signal is completely omitted, so how can HPO ensure that it is optimizing the original task rewards?\\n\\nThis definition also leads to an extreme case where the high-level policy simply generates the current state as the next subgoal, making the low-level policy do nothing and still \\\"achieve\\\" the subgoal. In this scenario, both the low-level policy and high-level policy would receive the highest reward, fully satisfying their optimization objectives. However, this would cause the agent's overall policy to just be idle. (My main concern is the problem formulation doesn't involve the MDP reward function).\\n\\nGiven this, I'm unclear how HPO is supposed to work.\\n\\n[4] Kulkarni, Tejas D., et al. 
\\\"Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation.\\\" Advances in neural information processing systems (2016).\\n\\n**Question 2**. Following up on Question 1, around Line 485, it is mentioned that \\\"HPO consistently generates low average distance values, which implies that HPO mitigates non-stationarity.\\\" The subgoals generated by HPO are often \\\"close\\\" to the current state. Could this be due to the aforementioned definition of the high-level reward function, which evaluates whether the low-level policy has achieved the subgoal? If so, would generating only near subgoals prevent the agent from progressing toward the overall task objective?\\n\\n**Question 3**. At around Line 347, could you further prove why the advantage equals the entropy of the policy ($A(s_t,g^*,g_t) = \\\\beta log(\\\\pi^H (g_t | s_t, g^*))$)? and how is the $\\\\beta$ defined? The advantage directly equates to the entropy of the policy is not intuitive to me. Ziebart's paper studies a special case based on some assumptions, it may not be generally applicable to all RL problems.\\n\\nI would like to increase the score if these concerns are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper formulate HRL as a bi-level optimization problem and transform it into a primitive-regularized DPO formulation. The proposed method HPO incoporates token-level DPO into Max-Ent RL for mitigating non-stationary issue and infeasible subgoal generation issue.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes a primitive-regularized preference optimization approach for HRL, which is a novel try.\\n2. The derived DPO formulation has theoretical groundings.\", \"weaknesses\": \"HPO is sensitive to the two introduced hyperparameters, $\\\\lambda$ and $\\\\beta$, according to Figure 5 and Figure 6 in the Appendix. Further, it is not clear the values of $\\\\lambda$ and $\\\\beta$ used in each task of HPO in Figure 2-4.\", \"questions\": \"1. Based on the experimental settings detailed in the Appendix, the pick-and-place task appears simple enough for single-level RL methods to solve, as seen in environments like panda-gym [1], meta-world [2], and td-mpc [3]. Therefore, it is unclear what makes your experimental setup unique, given that none of the baselines aside from HPO achieve a satisfactory success rate.\\n\\n [1] Gallou\\u00e9dec, Quentin, et al. \\\"panda-gym: Open-source goal-conditioned environments for robotic learning.\\\" arXiv preprint arXiv:2106.13687 (2021).\\n\\n [2] Yu, Tianhe, et al. \\\"Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning.\\\" Conference on Robot Learning, PMLR, 2020.\\n\\n [3] Hansen, Nicklas, Xiaolong Wang, and Hao Su. \\\"Temporal difference learning for model predictive control.\\\" arXiv preprint arXiv:2203.04955 (2022).\\n\\n2. Could you clarify why HPO\\u2019s performance is relatively low on Maze navigation tasks?\\n\\n3. To better illustrate HPO's ability to address non-stationarity and infeasible subgoal generation (Figure 3 and Figure 4), a comparison with the HAC baseline would be better, as HAC also addresses non-stationarity and is a recent work. I think this comparison should not pose too much burden on the authors, as HAC is already included as a baseline for success rate comparison in Figure 2.\\n\\n4. 
There is a typo in line 761: \\\"in Figure 6.\\\"\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"Thank the reviewer Reviewer MsT6 for pointing out Submission5077. I read both papers today and recognized that they indeed exhibit a high level of similarity, which raises concerns about the dual submission.\\n\\nKey areas of overlap include the core idea (Eq.13 and Eq.15), and the problem solved. The diagrams and experimental settings outlined in both papers are strikingly similar. The graphics illustrate parallel structures for each algorithm\\u2019s architecture, with comparable layout and labeling, reinforcing the visual similarity. The written content of the related works, problem formulation, and technical approach sections shows marked resemblance in terminology and phrasing. Additionally, the cited references in the related works of each paper appear nearly in the same order.\\n\\nOverall, the problem focus, methodology, and even application scenarios\\u2014complex robotic tasks like maze navigation and pick-and-place\\u2014are nearly identical. Thus I suggest they could be better presented as a single, comprehensive approach rather than separate works.\"}", "{\"comment\": \"> Comparing EQ 15 and Eq 13, difference comes in $k$ step Value function\\n\\nThis $k$ step value function is, for all practical purposes, a design choice. Ironically, Algorithm 1 of this paper, HPO, line 8, literally uses $V_\\\\pi^k$ which is only in the DIPPER paper; this signifies essentially a direct copy-paste of the latex algorithm block from DIPPER to HPO's algorithm. I'm not convinced that $k>1$ is even relevant as the DIPPER paper does not list $k$'s value in the hyperparameter list. \\n\\n> Needing $\\\\pi_{\\\\text{ref}}$\\n\\nThis reference policy is absorbed into the objective, and again, results in very little difference between the two algorithms. \\n\\nThus I am unconvinced and will be keeping my ethics flag. If the ethics review sees nothing wrong, I will be reviewing both papers as standalone contributions. But, at least to me, submitting two nearly identical papers with the same claimed contributions and nearly identical algorithms/objectives that have nearly identical experimental results seems to be ethically flawed. \\n\\nI would suggest combining into one paper and resubmitting next time.\"}" ] }
BrqFB8Nl7e
Continual Learning After Model Deployment
[ "Derda Kaymak", "Gyuhak Kim", "Tomoya Kaichi", "Tatsuya Konishi", "Bing Liu" ]
This paper studies continual learning after model deployment. A real-world application environment is often an open world filled with novel or out-of-distribution (OOD) objects that have not been seen before. We can call continual learning in such an environment *open-world continual learning* (OWCL). OWCL incrementally performs two main tasks: (1) detecting OOD objects, and (2) continually learning the OOD or new objects on the fly. Although OOD detection and continual learning have been extensively studied separately, their combination for OWCL has barely been attempted. This is perhaps because in addition to the existing challenges of OOD detection and continual learning such as *catastrophic forgetting* (CF), OWCL also faces the challenge of data scarcity. As novel objects appear sporadically, when an object from a new/novel class is detected, it is difficult to learn it from one or a few samples to give good accuracy. This paper proposes a novel method called OpenLD to deal with these problems based on *linear discriminant analysis* (LDA) and a pre-trained model. This method enables OOD detection and incremental learning of the detected samples on the fly with no CF. Experimental evaluation demonstrates the effectiveness of OpenLD.
[ "Open-World", "Continual Learning" ]
https://openreview.net/pdf?id=BrqFB8Nl7e
https://openreview.net/forum?id=BrqFB8Nl7e
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jCQTRVnvLg", "h0jCYjA3SC", "fhBZ3eSTAS", "Y1YX9GSlVY", "Rgx00JZSXS" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730280025799, 1730410827633, 1729078050863, 1730196309283, 1731629378609 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7678/Reviewer_ipVQ" ], [ "ICLR.cc/2025/Conference/Submission7678/Reviewer_XK34" ], [ "ICLR.cc/2025/Conference/Submission7678/Reviewer_NFvs" ], [ "ICLR.cc/2025/Conference/Submission7678/Reviewer_vex5" ], [ "ICLR.cc/2025/Conference/Submission7678/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel setting called Open-World Continual Learning (OWCL), which addresses the real-world scenario where a model must continue learning after deployment to handle new, unseen objects (out-of-distribution, OOD). The proposed method, OpenLD, is based on Linear Discriminant Analysis (LDA) combined with a pre-trained model to enable efficient OOD detection and incremental learning without catastrophic forgetting. Experimental results on benchmark datasets (CIFAR-10, CIFAR-100, TinyImageNet) demonstrate that OpenLD outperforms existing methods in both OOD detection and continual learning after model deployment.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tI think this is a very meaningful topic.It introduces Open-World Continual Learning (OWCL), which is significant for AI models operating in real-world environments. OWCL allows the model to continuously adapt after deployment, enhancing its ability to autonomously acquire new knowledge.\\n2.\\tThe OpenLD effectively combines Linear Discriminant Analysis (LDA) with a pre-trained model to handle OOD detection and incremental learning. This approach avoids catastrophic forgetting by using a shared covariance matrix and updating class means incrementally.\\n3.\\tThe experimental results on standard benchmark datasets (CIFAR-10, CIFAR-100, TinyImageNet) demonstrate that OpenLD performs better in terms of both accuracy and robustness compared to existing methods. This validates its effectiveness in an open-world continual learning scenario.\", \"weaknesses\": \"1.\\tOpenLD relies too much on pre-trained models, which makes it unable to learn new features from new data in existing categories after deployment. Will this limit the recognition of known categories?\\n2.\\tThe OpenLD method uses a shared covariance matrix to handle all categories, which can become problematic as the number of categories increases significantly. A shared covariance matrix can result in reduced accuracy or increased computational burden when managing a large number of categories.\\n3.\\tI think your article is lacking in the methodological explanation, such as why Marhalanobis distance is used, what are the advantages of Marhalanobis distance, and whether there are theoretical or experimental advantages over other distances.\\n4.\\tI think you can include as many comparison methods as possible. 
Although this is a very novel question, similar methods can be compared, and your table needs to be beautified.\", \"questions\": \"Have you considered combining OpenLD with other state-of-the-art OOD detection approaches, such as those using neural network uncertainty or ensemble methods, to improve robustness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces OpenLD, a method for open-world continual learning (OWCL), where models deployed in real-world environments encounter novel, out-of-distribution (OOD) objects. OWCL combines OOD detection with continual learning to address challenges like catastrophic forgetting (CF) and data scarcity when new objects appear sporadically. OpenLD leverages linear discriminant analysis (LDA) and a pre-trained model to enable efficient OOD detection and incremental learning without CF.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Good sets of experiments across various backbones\\n2. Paper is easy to understand and flows well\", \"weaknesses\": \"1. Sounds like continual learning with pre-trained models and fine-tuning continually - which is the SoTA when it comes to transformer based CL methods.\\n2. There are existing generalized continual learning [1,2] frameworks that the authors overlook.\\n3. Authors complain most continual learning methods are replay based which is incorrect and must not be emphasized. \\n4. Small datasets are used for experiments, not representative of \\\"open-world\\\" as the authors emphasize often. Must use large datasets such as iNaturalist\\n4. Why aren't existing methods compared in the proposed setup? This is a major weakness.\\n\\n\\n\\n[1] Generalized Class Incremental Learning - Fei Mi; Lingjing Kong; Tao Lin; Kaicheng Yu; Boi Faltings \\n[2] Online Class-Incremental Learning For Real-World Food Image Classification - Siddeshwar Raghavan, Jiangpeng He, Fengqing Zhu\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Overall, this paper constructs a new continual learning setting named open-world continual learning. After that, it proposes a novel method called OpenLD under this setting.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The problem that this paper aims to handle is important, as continual learning does should be expected to happen in an open-set scenario.\\n\\nThe paper achieves good experimental results.\", \"weaknesses\": \"(See the questions section below)\", \"questions\": \"Overall, I believe that this paper is currently not ready to be accepted. Below are my concerns.\\n\\n1. I am kind of confused over the difference between the proposed setting and the simple combination of OOD detection and few-shot class incremental learning. Considering this, I am curious that, why simply combining an existing OOD method and an existing few-shot class incremental learning method cannot handle the proposed problem.\\n\\n2. Meanwhile, I am also a bit confused over the difference between OOD detection and continual OOD detection. 
This is because, while the authors seem to try to highlight that OOD detection can face many challenges in their proposed setting, they seem to finally just use existing OOD methods to perform OOD detection in their setting. Thus, does this mean that it is just the most typical OOD detection that is performed in the proposed framework?\\n\\n3. I am confused over the practicity of the proposed setting. Specifically, continual learning is also known as lifelong learning. Yet, in the proposed setting, if I am not wrong, it seems that a person is always required to be involved to annotate every detected OOD data. This is quite strange and non-realistic to me.\\n\\n4. Meanwhile, the paper seems to heavily base their method on an assumption that all classes share the same covariance. I believe that this can be a non-realistic assumption. Specifically, it is very natural that some classes hold a large intra-variance than other classes. The constraint that all classes must have the same covariance is thus a very strict assumption from my perspective.\\n\\nIn summary, in light of the above, I believe that this submission is not ready for being published in its current form.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes an open-world continual learning setting where the model can detect OOD classes and learn new classes in a class-incremental learning setup without fine-tuning the pre-trained model. The paper also proposes a realistic setup for class-incremental learning where the incremental data contains a mix of in-distribution and out-of-distribution classes. The proposed method uses distance metrics to detect OOD classes and uses LDA for class-incremental learning. The results are shown on three datasets including CIFAR-10, CIFAR-100 and TinyImageNet.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The author identifies important and realistic challenges in continual learning. The proposed problem setting of open-world continual learning is relevant and realistic compared to previously studied class-incremental learning settings where new class data arrives in pure chunks of new classes. Combining OOD detection with continual learning in a realistic setup is a step in the correct direction.\\n\\n2. The proposed approach is quite neat where both OOD class detection and incremental learning can be performed using similar distance-based approaches. The results show that the proposed openLD method is able to handle the upcoming new OOD classes without losing performance obtained using the LDA classifier.\\n\\n3. Related work is well-written and provides a good summary of relevant work on continual learning and OOD detection.\", \"weaknesses\": \"The problem setup is quite relevant for the field, but the application of the method in this work has some limitations.\\n1. Firstly, the paper title is a bit deceiving since model deployment usually refers to a model without model weights access, which is assumed to be available in this work. \\n2. The pre-trained is only used for feature extraction and not for fine-tuning using the ID train set before deployment. Since there is a big gap in performance using the LDA classifier and fine-tuned model, it would make sense to start with the strongest possible model on ID classes. 
Additionally, it is not realistic to ignore or throw away the new upcoming ID APP data \\u201cafter deployment\\u201d. \\n3. The method keeps the pre-trained model \\u2018frozen\\u2019 all the time and only performs continual learning on the obtained features. This setting is highly limiting for the performance of the model and does not reflect a realistic continual learning setting when model weights are accessible. \\n4. Missing comparison with similar baselines like nearest neighbor approach or nearest centroid approach or prototypical network. The benefits of using LDA are not motivated. \\n5. In Table 1, the difference between the OpenLD and Joint Fine-tuning upper bound grows as the model is scaled to a dataset with a larger number of classes, showing the approach is not scalable and relies heavily on the extracted features from the frozen model. \\n6. The presentation of the paper can be improved. The captions are not self-complete and the text contains copied sentences. For example, the Figure 1 caption, does not explain the figure. Table 1 caption does not say which performance is shown in the table. The same text describing the OWCL setting is repeated from the introduction to Section 2.1 reducing the quality of the paper. \\n\\nMinor comments\\n1. Line 454 claims that OpenLD consistently outperforms the methods without using C^E. However, this is not true for CIFAR-10 shown in Figure 2. \\n2. In Line 471, can the authors explain why VIT-S/16 DINO and VIT-B/16 SAM are expected to show poorer results? \\n3. In Line 472, although VIT-B/16 DINO is as big as VIT-B/16 SAM, why is it expected to perform better?\", \"questions\": \"1. Is the pre-trained model fine-tuned with the samples from the ID train set before deployment? If not, please reason about it. This is important to know because there is a significant gap between fine-tuned joint-training performance and Joint LDA performance.\\n2. Since the classes of CIFAR and miniImageNet datasets are removed from the pre-training dataset, how is it ensured that features extracted from the pre-trained frozen model generalize for new classes? \\n3. Why did authors use LDA and not other distance-based methods? Please include comparisons with other off-the-shelf classification methods like nearest centroid-based classification or kNN. \\n4. The paper requires improved presentation and stronger reasoning behind different design choices with ablations and comparisons. The proposed model should be compared with the strongest fine-tuning-based baseline setup to make claims about the removal of catastrophic forgetting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
Br42izY8eU
MAD-Sherlock: Multi-Agent Debates for Out-of-Context Misinformation Detection
[ "Kumud Lakara", "Juil Sock", "Christian Rupprecht", "Philip Torr", "John Collomosse", "Christian Schroeder de Witt" ]
One of the most challenging forms of misinformation involves the out-of-context (OOC) use of images paired with misleading text, creating false narratives. Existing AI-driven detection systems lack explainability and require expensive finetuning. We address these issues with MAD-Sherlock: a Multi-Agent Debate system for OOC Misinformation Detection. MAD-Sherlock introduces a novel multi-agent debate framework where multimodal agents collaborate to assess contextual consistency and request external information to enhance cross-context reasoning and decision-making. Our framework enables explainable detection with state-of-the-art accuracy even without domain-specific fine-tuning. Extensive ablation studies confirm that external retrieval significantly improves detection accuracy, and user studies demonstrate that MAD-Sherlock boosts performance for both experts and non-experts. These results position MAD-Sherlock as a powerful tool for autonomous and citizen intelligence applications.
[ "misinformation detection", "out-of-context image use", "LLMs", "multimodal models", "multi-agent debates", "safety" ]
Reject
https://openreview.net/pdf?id=Br42izY8eU
https://openreview.net/forum?id=Br42izY8eU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwurn6HZNP", "xqYsyxAZtU", "wQf8bwr2IL", "uAGi4oYc3r", "tIWUBv3CBo", "qaaU1gcLJv", "mSrY6XExcD", "hukPJ8Uk3s", "g4FjhnAY11", "ctIS9ezukD", "acmzzW15lP", "aCVxeNWbaq", "WYfn9GjriW", "VCVh1b19W7", "TuLwgedN4M", "TjIDWFima6", "T2gYesIOYs", "PKOKlWWf2m", "IEqREuLqJo", "CW7jkEou1Z", "Bn2YhlqYlq", "ALjZkekdGl", "3zQ1P7xXgL", "2sS4t8pax8" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733204496397, 1733153917904, 1730682241479, 1733229297423, 1732805719694, 1732806688033, 1730200510533, 1732806820295, 1737523876580, 1732803800642, 1732805037902, 1732804533900, 1733079649774, 1733135973758, 1733069766773, 1732807294412, 1730060951356, 1732802803727, 1734640900029, 1732805240933, 1732804500080, 1732804710502, 1730366603693, 1732804979253 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_peoS" ], [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_AktV" ], [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_xyn6" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_UiNZ" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_xyn6" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_AktV" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Area_Chair_7e71" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ], [ "ICLR.cc/2025/Conference/Submission7941/Reviewer_peoS" ], [ "ICLR.cc/2025/Conference/Submission7941/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you very much for your response! Your reply addressed most of my concerns, and I will increase my score. However, the response to \\\"Weakness-1\\\" did not convince me. I believe your explanation is more about why the debate approach is used to address misinformation rather than solving OOC, so the motivation here is not very clear. Also, regarding \\\"Weakness-4,\\\" which mentions ablation experiments, I did not see sufficient results from the ablation experiments. Therefore, I will increase my score by 2 points.\\n\\nLastly, I really appreciate the author's effort and time!\"}", "{\"comment\": \"Thank you for your response. I am happy to increase the score if you include those plans in the updated submission. However, as they remain just a plan, I will maintain my current score. 
Additionally, for the cost, I believe a comparison is needed rather than focusing solely on your method.\"}", "{\"summary\": \"The paper describes a method to detect a particular kind of misinformation, where text is paired with an image misleadingly out of context, using debating LLMs. The LLMs are equipped with a reverse image web search retrieval system. The paper shows this system performs well compared to many baselines and alternatives from the literature, as well as in a user study assessing how much the explanations the LLMs generate help humans.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Important problem and the general approach of retrieval-augmented LLMs, with something to improve their reasoning (here, debate), makes a lot of sense.\\n\\nComparison with many alternative methods from literature.\\n\\nGood experiments on different debate setups, both as an ablation here and potentially informative for methods in other domains too.\", \"weaknesses\": \"A baseline with the LLM and retrieval system used but without debate - similar to actor-skeptic but without the skeptic - seems missing. I feel like this is important to understand how much debate is actually helping, since it isn't guaranteed that it would perform worse than e.g. some less effective forms of debate that might confuse things more, or compared to other methods from the literature which might be using models weaker than GPT-4o.\\n\\nCost and time efficiency are not reported. This also connects to the previous point, and seems a key consideration when comparing multiple LLMs engaging in multi-turn debates, which could be significantly more costly than e.g. a single LLM setup. A high cost could be an important limitation, and regardless, important information for readers considering if they could apply the work.\\n\\nAlthough - aside from the baseline point mentioned above - the comparisons with existing methods are extensive, they are all performed on a single dataset. The margin compared to the next-best performing approach (Sniffer with fine-tuning) is only about 1.7%, and there are no error bars reported. So, it's not very clear how definitive the performance conclusions are. \\n\\nOverall, the combination of the three preceding two points forms my main concern: the framework looks promising, but some information is missing for a reader to make a full, confident assessment. Below I note two minor issues I had with the writing:\\n\\nDiscussion of Lin et al (line 167): it's clear the current work is quite different. It's less clear to me, though, why this work is highlighted in general, given that it is so different, including entirely different domain. Maybe this could be contextualized a bit more broadly in terms of approaches to classification by debating LLMs, or some other connecting insight or argument beyond \\\"here's another work that used debating LLMs\\\".\\n\\nSection 3.1: I was a bit unclear when first reading this on what is background information on possible ways a debate could be structured, vs. what you actually test yourselves. Maybe the wording could be a bit more explicit that you test all of these.\", \"questions\": \"Why summarize using Llama 13B as opposed to a more recent Llama? It seems like Llama 3 8b is both smaller and has significantly better performance?\\n\\nThe user study asks participants to not search the web themselves. 
I can see that being applicable for laypeople, who might not want to spend time checking stuff or know what should be checked. I'm less sure for journalists, are there cases where they too wouldn't be using web search?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Concluding Remarks\", \"comment\": \"We sincerely thank the reviewers for their thoughtful feedback, which has greatly helped improve the clarity and rigor of our work. In summary, we have clarified that:\\n\\n1. The debate setup is fundamental to the effectiveness of our method by adding a new baseline to our updated submission: GPT-4o with external context but without the debate framework. This baseline isolates the effect of the debate setup, allowing us to directly demonstrate its critical role in MAD-Sherlock\\u2019s performance. Results from our updated experiments show that the inclusion of multi-agent debates improves both detection accuracy (from 86% to 90%) and explanation quality. \\n\\n2. The purpose of the user study is not to convey that insights generated by MAD-Sherlock are better than those from journalists. Rather, our intention with the study is to show that insights from MAD-Sherlock are able to add significant value to the misinformation detection workflow in a fully automated way. By restricting internet access, we aimed to isolate the effectiveness of MAD-Sherlock's insights without external influences, ensuring that the results reflect the system's intrinsic utility rather than participants' independent research skills. Therefore, the current setup of the study is completely fair.\\n\\n3. The external retrieval module can be further refined by using a stronger language model however we do not have evidence to believe that our choice of summarization model had a negative impact on overall performance. We also include information related to time and cost efficiency in order to allow for a more comprehensive evaluation of the system\\u2019s applicability to varied use cases.\\n\\nWe note however, that despite our detailed rebuttal, Reviewer UiNZ did not engage with our responses. We believe we have fully addressed their concerns and respectfully request that the other reviewers and the Area Chair consider this when making their final decision.\\n\\nWith best regards,\\n\\nThe Authors\"}", "{\"title\": \"Rebuttals to Weaknesses\", \"comment\": \"We thank the reviewer for their time and efforts and address their concerns below.\\n\\n## Weaknesses\\n\\n**Weakness-1**: _\\\"Lack of data cleaning for external information\\\"_\\n\\n**Weakness-1.1**: _\\\"The paper seems to lack verification of the authenticity and quality of the extracted external information. Without thorough data cleaning, if low-quality data or even fake news is retrieved, it could negatively impact the judgment results.\\\"_\\n\\nWe thank the reviewer for bringing up a crucial point related to the quality of the retrieved external information. While we currently only opt for a qualitative analysis of the retrieved information due to the large scale of the NewsCLIPpings dataset, we would like to include some form of quantitative analysis of the retrieved information as well. \\n\\nWe mitigate the risk of retrieving and relying solely on fake news by aggregating information from multiple independent webpages to construct the external context. 
This approach ensures a more diverse and balanced set of sources, significantly reducing the likelihood that the retrieved context is dominated by misinformation and thereby minimizing its potential impact on the results.\\n\\n**Weakness-1.2**: _\\\"During the pre-training process, commercial LLMs use carefully cleaned data. If a conflict arises between the parameter knowledge of the LLM itself and the external knowledge, how should it be resolved? This is not uncommon; when an event occurs, it is often accompanied by numerous rumors, even conflicting ones. Blindly trusting the external knowledge retrieved online could lead to undesirable outcomes.\\\"_\\n\\nThis is also addressed with the previous point.\\n\\n**Weakness-1.3**: _\\\"Only the Bing Visual Search API was used for information retrieval. Is it proven to be reliable and effective enough?\\\"_\\n\\nWe explored using the Google Visual Search API but found that it did not support reverse image-based search, which was a critical requirement for our use case. Given the limited availability of visual search APIs, we turned to the Bing API, which offers robust access to a substantial pool of internet resources which is crucial to making informed decisions about the authenticity of the image and text input. We are open to suggestions by the Reviewer as to what additional data sources could be integrated into our approach.\\n\\n**Weakness-2**: _\\\"Reliability and effectiveness of LLM summarization, and potential side effects\\nIn Section 3.3.2, you mentioned using LLMs, such as Llama-13B, to summarize information, focusing only on the most important parts of the text. I am curious about how it determines the most important parts and whether it might miss important details. Could the performance of Llama-13B itself become a bottleneck in the workflow?\\nInformation summarized and rewritten by the LLM inevitably alters the original language pattern, and when such processed information is provided to the agents, could it make the already potentially unreliable external information even harder to detect?\\\"_\\n\\nThe model determines the most important parts of the text only based on the prompt that is provided to it. The model is specifically prompted to summarize the given text based on the most important parts of the input. We also prompt the model to only base its output on the input text and not introduce any new information into the generated summary. We acknowledge the reviewer\\u2019s concern that the performance of the Llama-13B model could possibly become a bottleneck and we are happy to include ablation studies with a better summarization model with the CRC. However, random manual qualitative checks of the generated summaries do not indicate that summarization lead to a general loss of information. Leveraging more advanced and refined models like Llama3, we believe, would only further improve system performance. We also include this as a part of our future work section.That being said, we agree with the reviewer and acknowledge that there is a definite tradeoff between the computational efficiency of the summarization model and the quality of the summaries and in turn our system performance which should be considered based on the criticality of the use-case where MAD-Sherlock is being used. 
However with the current setup it should be straightforward to replace the llama-13B model with a larger/more powerful or smaller/less powerful model.\\n\\nWe acknowledge that leveraging an LLM for generating summaries introduces a potential risk of incorporating unreliable or false elements, which could compromise the reliability of the external information. However, this risk is a general limitation of LLMs rather than a specific issue with our approach. Furthermore, we have not found any evidence to suggest that this has been an issue in practice. We believe that using more aligned and safer models for this task in the future could further mitigate this risk.\"}", "{\"title\": \"Rebuttals to Weaknesses (continued) and Final Words\", \"comment\": \"**Weakness-3**: _\\\"Limited dataset: Only the NewsCLIPpings dataset was used, which may lack representativeness. This dataset is from 2021, a time when LLMs were not as prevalent as they are now, and AIGC content was limited. I question whether it is representative of the current and future online news landscape and the ability of this work to detect LLM-generated misinformation.\\\"_\\n\\nWe acknowledge this limitation and appreciate the reviewer\\u2019s observation. However, the NewsCLIPpings dataset remains one of the largest and most widely used datasets for fake news detection, serving as a community-accepted benchmark. For this reason, we believe it is essential to demonstrate our results on this dataset to ensure a fair and meaningful comparison with existing methods.\\n\\nThat said, we fully agree with the reviewer that the dataset is outdated, particularly given the evolving prevalence of LLM-generated content and already include the need for a continual more up-to-date dataset as a part of future work. Additionally, in the external retrieval component of our approach, we observed that some of the webpages linked to the dataset samples are no longer accessible due to the dataset's age. While the number of unavailable webpages is currently not significant enough to impact our results, it highlights the importance of transitioning to more recent and relevant datasets.\\n\\n**Weakness-4**: _\\\"Questionable fairness of the user study: As mentioned in Appendix A.4.1, in the user study, participants were not allowed to access the internet and could not retrieve external information (e.g., Bing Visual Search API) like MAD-Sherlock. They could only rely on their own experience and common ensense, which is unfair. At the very least, a control group should be added, allowing participants to access the same external information as MAD-Sherlock.\\\"_\\n\\nWe agree that our current findings can definitely be refined by using a more refined user study. We are in the process of conducting the study and would like to include the results in the camera ready version of the paper. However, on the fairness of our study we would like to clarify that our study is motivated by our concern about how informative and trustworthy MAD-Sherlock\\u2019s insights are for humans. We wanted to understand if the insights confuse human participants or further enhance their line of reasoning. In this set-up, we believe internet access could be considered a confounding factor in the study as it introduces variability in participants' ability to search, interpret, and evaluate online content. 
By restricting internet access, we aimed to isolate the effectiveness of MAD-Sherlock's insights without external influences, ensuring that the results reflect the system's intrinsic utility rather than participants' independent research skills. In summary, we would like to clarify that our empirical results do not indicate that MAD-Sherlock alone can outperform human experts with access to the internet and without time constraints. Rather, our results indicate that MAD-Sherlock can help journalists make better decisions under time constraints, and that it can significantly uplift unskilled humans, e.g. in a citizen intelligence context.\\n\\n## Final Words\\nWe would like to thank the reviewer for their time once again. We hope that our answers help clarify their concerns and the reviewer might consider increasing their score.\"}", "{\"summary\": \"MAD-Sherlock is a multi-agent debate system designed to detect out-of-context misinformation by analyzing inconsistencies between images and accompanying text. Unlike traditional AI models, it enables multiple multimodal agents to independently assess and debate the context of information, using external retrieval to enhance accuracy and provide clear, explainable insights.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The MAD-Sherlock framework introduces a multi-agent debate approach to detect out-of-context misinformation, combining asynchronous debates with external information retrieval to enhance the model's contextual understanding and interpretability. This approach shows significant innovation compared to single-agent methods.\\n2. The model provides human-readable explanations during the decision-making process, which is a major improvement over current AI-driven misinformation detection systems.\\n3. This work attempts to leverage internet searches to extract external information and use it to enhance the performance of misinformation detection and proves its effectiveness.\\n5. Interesting findings:\\n 1. The comparison of various debate methods reveals that asynchronous debate is the most effective, providing valuable insights for designing multi-agent debate frameworks.\\n 2. There is a significant performance improvement when models believe they are debating against a human rather than another AI agent.\\n 3. The method also allows agents the freedom to change their opinions mid-debate. In such settings, agents demonstrate an enhanced ability to critically evaluate arguments and identify subtle inconsistencies.\", \"weaknesses\": \"1: Lack of data cleaning for external information\\n\\nApart from what was mentioned in Appendix A.1:\\n\\n> First, while our model excels at detecting out-of-context image-text pairs, its reliance on external retrieval can lead to reduced accuracy when relevant context is unavailable or difficult to retrieve.\\n\\n(1) The paper seems to lack verification of the authenticity and quality of the extracted external information. Without thorough data cleaning, if low-quality data or even fake news is retrieved, it could negatively impact the judgment results.\\n\\n(2) During the pre-training process, commercial LLMs use carefully cleaned data. If a conflict arises between the parameter knowledge of the LLM itself and the external knowledge, how should it be resolved? This is not uncommon; when an event occurs, it is often accompanied by numerous rumors, even conflicting ones. 
Blindly trusting the external knowledge retrieved online could lead to undesirable outcomes.\\n\\n(3) Only the Bing Visual Search API was used for information retrieval. Is it proven to be reliable and effective enough?\", \"2\": \"Reliability and effectiveness of LLM summarization, and potential side effects\\n\\nIn Section 3.3.2, you mentioned using LLMs, such as Llama-13B, to summarize information, focusing only on the most important parts of the text. I am curious about how it determines the most important parts and whether it might miss important details. Could the performance of Llama-13B itself become a bottleneck in the workflow? \\n\\nInformation summarized and rewritten by the LLM inevitably alters the original language pattern, and when such processed information is provided to the agents, could it make the already potentially unreliable external information even harder to detect?\", \"3\": \"Limited dataset\\n\\nOnly the NewsCLIPpings dataset was used, which may lack representativeness. This dataset is from 2021, a time when LLMs were not as prevalent as they are now, and AIGC content was limited. I question whether it is representative of the current and future online news landscape and the ability of this work to detect LLM-generated misinformation.\", \"4\": \"Questionable fairness of the user study\\n\\nAs mentioned in Appendix A.4.1, in the user study, participants were not allowed to access the internet and could not retrieve external information (e.g., Bing Visual Search API) like MAD-Sherlock. They could only rely on their own experience and common ensense, which is unfair. At the very least, a control group should be added, allowing participants to access the same external information as MAD-Sherlock.\", \"questions\": \"see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttals to Weaknesses\", \"comment\": \"We thank the reviewer for taking the time to thoroughly understand and review our work. We are happy to address their concerns below.\\n\\n## Weaknesses\\n\\n**Weakness-1**: _\\\"Multi-agent collaboration and debate strategies are popular methods for improving results, and incorporating external information is also common in multi-agent setups like AutoGPT. The experiments here don\\u2019t include comparisons with these popular frameworks, especially those that also use external data and agent collaboration. Adding such comparisons would highlight where MAD-Sherlock stands out.\\\"_\\n\\nWe appreciate the reviewer highlighting this point. While AutoGPT and similar frameworks are powerful tools that incorporate external data and agent collaboration, they typically rely on a significantly higher number of external API calls and focus more on retrieval tasks. In contrast, MAD-Sherlock's focus is primarily on reasoning and misinformation detection, which necessitates a different approach.\\n\\nThat said, we recognize the value of benchmarking MAD-Sherlock against these popular frameworks to understand its relative strengths and limitations better. We will explore incorporating such comparisons in the camera-ready version of the paper to further contextualize MAD-Sherlock's performance and highlight its distinct advantages.\\n\\n**Weakness-2**: _\\\"While external retrieval improves the system\\u2019s accuracy, it could backfire if relevant information isn\\u2019t available or if the search results are inconsistent or irrelevant. 
The paper doesn\\u2019t delve much into this issue; it would be helpful to discuss how the system performs in cases where external information is incomplete or unavailable to gauge robustness.\\\"_\\n\\nThe reviewer raises an important point regarding the reliance on external retrieval. In cases where relevant external information is incomplete or unavailable, our system is designed to proceed using the available inputs without external augmentation. During the initial phases of our work, we conducted preliminary experiments to establish that the inclusion of external information significantly enhances model performance. However, these experiments were not included in the current submission.\\n\\nWe agree that a more thorough investigation into how the system performs under scenarios of incomplete or unavailable external information would provide valuable insights into its robustness. We plan to include these experiments in the camera-ready version, along with a detailed discussion of their implications. We thank the reviewer for bringing this to our attention.\\n\\n**Weakness-3**: _\\\"The multi-agent debate structure adds a lot of computational cost, which isn\\u2019t fully detailed in the paper. It would be useful to see ablation studies comparing debate length and performance to understand the trade-offs in runtime and accuracy. This would make it easier to evaluate MAD-Sherlock\\u2019s feasibility for practical applications.\\\"_\\n\\nWe acknowledge that the multi-agent debate structure introduces additional computational cost. To address this, we will include detailed cost and latency analyses in our revised submission. These will quantify the trade-offs between debate length, runtime, and performance, helping to evaluate the feasibility of MAD-Sherlock for various practical applications.\\n\\nIt is also worth noting that the framework itself is modular, allowing the underlying models to be replaced with smaller, more efficient ones for scenarios where computational resources are constrained. While there is a trade-off between performance and computational cost, we believe the significant performance improvements observed in our experiments justify the additional cost. However, this trade-off may vary depending on the use case. We have included information related to the time and cost efficiency of our method in our updated submission.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Summary of Reviews, Improvements and Clarifications\", \"comment\": \"We sincerely thank all reviewers for their time, effort, and insightful comments on our paper. Taking all feedback into account, we respond to each review individually.\\n\\n## Summary of reviews\\n- Reviewer xyn6 appreciates our exploration of an \\u201cimportant problem\\u201d and believe our proposed approach \\u201cmakes a lot of sense\\u201d. They also acknowledge the extensive experiments and ablations we include and believe they can be \\u201cpotentially informative for methods in other domains too\\u201d.\\n- Reviewer peoS finds our work \\u201cvery interesting\\u201d and acknowledges that our experiment results validate the effectiveness of our proposed method. 
They describe the paper itself as \\u201cwritten in great detail\\u201d.\\n- Reviewer UiNZ finds our approach to exhibit \\u201csignificant innovation\\u201d that shows \\u201cmajor improvement\\u201d over current methods, underscoring the high novelty of our work.\\n- Reviewer AktV believes that our work offers \\u201cvaluable insights\\u201d for using multi-agent systems for related tasks.\\n\\n## Improvements and Clarifications\\n### Improvements\\nBased on the insightful and valuable feedback received from all the reviewers, we have made specific revisions to the paper. These include:\\n- Adding a new baseline\\u2014**GPT-4o with external context but without the debate framework**\\u2014to highlight the importance of our multi-agent debate setup. This comparison shows that the debate framework significantly improves performance, further reinforcing the value of our methodology.\\n- Further refining and expanding our discussion on the work related to our proposed method\\n- Deepening the discussion to establish why we pick \\u201cdebates\\u201d for our particular problem\\n- Including information related to time and cost efficiency\\n- Expanding our future work section to include efforts to further improve the external information retrieval system\\n\\n### Clarification on the User Study\\nFirstly, we would like to clarify that the purpose of the user study is not to convey that insights generated by MAD-Sherlock are better than those from journalists. Rather, our intention with the study is to show that insights from MAD-Sherlock are able to add significant value to the misinformation detection workflow in a fully automated way. This can be important in settings where: \\n1. Fully automated detection is necessary due to unavailability of human experts\\n2. Models can assist human experts in making better decisions faster, and can help partially stopgap a lack of human expert availability by uplifting unskilled humans\\n\\nIn our current set-up, we believe internet access could be considered a confounding factor in the study as it introduces variability in participants' ability to search, interpret, and evaluate online content. By restricting internet access, we aimed to isolate the effectiveness of MAD-Sherlock's insights without external influences, ensuring that the results reflect the system's intrinsic utility rather than participants' independent research skills. That being said, in order to allow for a more comprehensive study analysis we have added an additional group to the study which would have internet access. We are currently in the process of conducting the study and would like to include the results in the camera ready version of the paper.\\n\\n## Conclusion\\nWe are grateful to all the reviewers for having taken the time to carefully understand and review our work. We appreciate the opportunity to refine our work and look forward to further discussion and feedback.\"}", "{\"title\": \"Rebuttals to Weaknesses (continued) and References\", \"comment\": \"**Weakness-5**: _the paper lacks an explanation and analysis to clarify why performance increases when the agent believes it is conversing with a human instead of another AI agent_\\n\\nWe acknowledge the reviewer\\u2019s observation and agree that this phenomenon requires further investigation. While we do not have a definitive explanation for why this occurs, we propose a few potential hypotheses which could potentially inform future work in this direction:\\n\\n1. 
Training data: a substantial portion of the data that large models are trained on is human-generated content, which may implicitly condition the model to respond differently or more robustly to a potential human compared to another agent. \n2. Agent reward heuristics during inference: the agent could internally optimize for human-centric conversations/interactions, which could explain better performance when it believes a human is part of the exchange. \n3. Training process: the training process for many large language models also involves reward formulation based on human preferences and feedback, which could also lead to the development of an implicit bias in the model towards interactions involving humans. \n4. Commercial LLMs are additionally safety-finetuned. This safety-finetuning may prompt the model to behave differently in the context of human users or contexts.\n\nWhile these are speculative explanations, the observation is consistent across all our experiments. We believe this can provide a valuable insight for future work related to designing interaction configurations for models to improve the performance of multi-agent systems. \n\n## References\n[1] Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R. Bowman, Tim Rockt\u00e4schel, and Ethan Perez. Debating with more persuasive LLMs leads to more truthful answers, 2024.\n\n[2] Haotian Wang, Xiyuan Du, Weijiang Yu, Qianglong Chen, Kun Zhu, Zheng Chu, Lian Yan and Yi Guan. Learning to break: knowledge-enhanced reasoning in multi-agent debate system, 2023.\n\n[3] Chen Y, Li D, Zhang P, et al. Cross-modal ambiguity learning for multimodal fake news detection[C]//Proceedings of the ACM web conference 2022. 2022: 2897-2905.\n\n[4] Qian S, Wang J, Hu J, et al. Hierarchical multi-modal contextual attention network for fake news detection[C]//Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. 2021: 153-162.\"}", "{\"title\": \"Rebuttals to Questions and Final Words\", \"comment\": \"## Questions\n**Q1**: _Why summarize using Llama 13B as opposed to a more recent Llama? It seems like Llama 3 8b is both smaller and has significantly better performance?_\n\nWe appreciate the reviewer\u2019s suggestion to use a more recent model, such as Llama3-8B. We are happy to include those ablations using a more recent and better-performing model in the CRC. That being said, we don\u2019t have evidence to believe that summarization led to a loss of information on average. We also include this as a part of our future work section.\n\n**Q2**: _The user study asks participants to not search the web themselves. I can see that being applicable for laypeople, who might not want to spend time checking stuff or know what should be checked. I'm less sure for journalists, are there cases where they too wouldn't be using web search?_\n\nThe primary objective of MAD-Sherlock is to offer a solution that minimizes the effort required from the end user, allowing them to verify image-caption pairs without needing to perform additional tasks such as web searches. This approach is particularly relevant for laypeople in a citizen intelligence setting, but we believe it is also applicable to journalists in certain scenarios.\n\nAs we learned from our domain project partners, journalists often face tight deadlines and high workloads, where the ability to quickly assess the credibility of content is essential. 
By removing the need for manual web searches, MAD-Sherlock significantly reduces the time and cognitive effort required for verification. For example, in our user study, participants took less than 13 minutes on average to complete the evaluation of 10 image-caption pairs using AI-generated insights. In contrast, performing this task manually, including web searches, would have taken over 30 minutes on average. This demonstrates the potential time-saving benefits of our system, even for professionals who might have the skills and resources to perform manual verification. However, we would like to be clear that our results do not indicate that MAD-Sherlock can outperform trained human experts with full access to the internet and without time constraints.\n\nWhile we recognize that journalists may still choose to perform independent searches in some cases, MAD-Sherlock is designed to complement their workflows by providing immediate, actionable insights, enabling them to focus their efforts on more nuanced investigative tasks. \n\n## Final Words\nWe once again thank the reviewer for their valuable insights and feedback that helped us improve the quality of our work. We hope that our answers help further clarify the reviewer\u2019s concerns and that the reviewer would consider increasing their score.\"}", "{\"comment\": \"Regarding time and cost efficiency, thank you for adding the information. I would suggest, however, providing the same information for some next-best methods, such as Sniffer and GPT-4o#. The information is helpful on its own, and does not seem too exorbitant, but would be even more helpful if one could easily compare with alternatives.\nI don't think I agree with the argument that avoiding fine-tuning reduces cost a lot, unconditionally. It reduces a potentially significant one-time cost, but if running the system on millions of examples, the inference cost may be much more relevant than the one-time fine-tuning cost. As far as I know this doesn't affect anything you've reported in the paper, but would be careful about the argument in general.\nTable 5 of https://arxiv.org/pdf/2409.00009 suggests more powerful summarizers can have a small but possibly non-trivial effect. The setting is different, but related.\n\nOverall, aside from some tiny points mentioned above, the new / in progress results address most of my concerns. The main remaining one is the single dataset. While this one may be the standard one, that doesn't really solve the potential issues, such as some bias or spurious correlation in the dataset that aligns better with the proposed approach than other ones. Of course, equally plausible that it could go the other way and this method would be even better on a different dataset. But still a significant element of uncertainty that affects the strength of the results.\nAre there other viable datasets that could be used for a reduced comparison? E.g., rather than all the baselines, just compare MAD-Sherlock with say GPT-4o# and Sniffer on another dataset? 
Or even comparing with GPT-4o# alone could still significantly confirm the robustness of the results.\\n\\nI'll raise my score a point now, as most of my criticisms have been strongly addressed, and consider another point after reading the other reviews.\"}", "{\"title\": \"Did we address your concerns?\", \"comment\": \"Dear Reviewer peoS,\\n\\nAs the rebuttal period is coming to a close, we would like to ask whether we have successfully addressed your concerns, or whether there is anything else that you would like to see addressed.\\n\\nIn particular, please note the additional empirical baseline clarifying the utility of the debate approach, and our responses to your other concerns about relevant baselines.\\n\\nMany thanks\\n\\nThe Authors\"}", "{\"title\": \"Polite Invitation to Engage with the Rebuttal\", \"comment\": \"Dear Reviewers,\\n\\nAs the rebuttal period is nearing its conclusion (with the window for comments closing tomorrow, December 2nd), we would like to kindly encourage all reviewers to consider engaging with our rebuttal.\\n\\nWe would particularly like to draw your attention to the new baseline results we have presented, which further strengthen our central claim that multi-agent debate possesses intrinsic advantages. We believe these additional results address some of the key concerns raised during the initial review process and provide valuable insights for evaluating our work.\\n\\nWe greatly appreciate your time and effort in reviewing our submission and look forward to any further feedback you may have.\\n\\nWith best regards,\\n\\nThe Authors\"}", "{\"title\": \"Rebuttals to Questions\", \"comment\": \"## Questions\\n\\n**Q1**: _Why did you only compare with pretrained multimodal baseline methods? Is it because this approach is more commonly used for this task? Additionally, why didn\\u2019t you include comparisons with systems that use multi-agent collaboration with external information retrieval?_\\n\\nWe appreciate the reviewer\\u2019s question. Our selection of baseline methods was guided by the specific requirements of our problem statement, which focuses on determining whether an image-text pair constitutes misinformation. For this task, multimodal reasoning is essential, as it directly incorporates both image and text inputs rather than relying solely on text-based descriptions of images.\\n\\nWe initially experimented with text only models where input was a textual description of the image and the corresponding text but found the results to be suboptimal. These experiments established that a multimodal approach is critical for effectively addressing this task. \\n\\nWe recognize the value of including multi-agent systems that incorporate external information retrieval, as suggested. This will be explored in future work and has been mentioned as a potential direction for enhancing our comparative framework in our updated submission.\"}", "{\"summary\": \"This paper presents MAD-Sherlock, a multi-agent debate system designed for out-of-context misinformation detection. MAD-Sherlock leverages a multi-agent framework where each agent independently analyzes image-text pairs and engages in multiple rounds of debate to assess contextual consistency. The system incorporates external information retrieval to enhance the agents' reasoning capabilities, achieving high detection accuracy and explainability without the need for task-specific fine-tuning. 
A unique aspect of MAD-Sherlock is its systematic construction and comparison of various multi-agent debate strategies, offering a comprehensive exploration of debate structures within a multi-agent framework for OOC detection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper applies external information retrieval and multi-agent collaboration to the task of out-of-context misinformation detection, which is a relatively novel approach. The authors effectively combine these elements in this context, and the experimental results demonstrate the effectiveness of this method.\\n\\n2. Unlike previous work, MAD-Sherlock specifically constructs and compares different multi-agent debate strategies, providing a systematic analysis of various debate methods in out-of-context detection. This exploration is quite interesting, and it also offers valuable insights for applying multi-agent frameworks in similar tasks.\\n\\n3. The authors built a complete pipeline that combines image and text processing, including external data collection and cleaning, which is a substantial effort. Handling multimodal data and integrating external information adds complexity to the system.\", \"weaknesses\": \"1. Multi-agent collaboration and debate strategies are popular methods for improving results, and incorporating external information is also common in multi-agent setups like AutoGPT. The experiments here don\\u2019t include comparisons with these popular frameworks, especially those that also use external data and agent collaboration. Adding such comparisons would highlight where MAD-Sherlock stands out.\\n\\n2. While external retrieval improves the system\\u2019s accuracy, it could backfire if relevant information isn\\u2019t available or if the search results are inconsistent or irrelevant. The paper doesn\\u2019t delve much into this issue; it would be helpful to discuss how the system performs in cases where external information is incomplete or unavailable to gauge robustness.\\n\\n3. The multi-agent debate structure adds a lot of computational cost, which isn\\u2019t fully detailed in the paper. It would be useful to see ablation studies comparing debate length and performance to understand the trade-offs in runtime and accuracy. This would make it easier to evaluate MAD-Sherlock\\u2019s feasibility for practical applications.\", \"questions\": \"Why did you only compare with pretrained multimodal baseline methods? Is it because this approach is more commonly used for this task? Additionally, why didn\\u2019t you include comparisons with systems that use multi-agent collaboration with external information retrieval?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Overview of Contributions and Improvements\", \"comment\": \"We thank the reviewers for their valuable feedback and insights provided on our submission. We are very happy to see a positive reception of our work and to be granted the opportunity to answer any questions that the reviewers might have.\\n\\nWe understand that there might have been some misunderstandings while interpreting our work, therefore, we would like to reiterate our core contributions and describe our experimental setup including the user study.\\n\\n## Key Contributions:\\n\\n- We propose using multi-agent debates with external retrieval for the task of out-of-context (OOC) misinformation detection. 
Our proposed system, MAD-Sherlock, not only accurately detects instances of misinformation but also provides coherent explanations for the same. \\n- We present a novel LLM-based post training approach for scalable OOC misinformation detection that simultaneously improves contextual reasoning, provides in-built explainability and achieves state-of-the-art detection accuracy even without task-specific finetuning.\\n- Our system involves the use of an advanced external retrieval module which uses reverse image based search and LLM-based summarization to provide agents with external real-time context related to the input.\\n- We provide an extensive set of experiments comparing MAD-Sherlock to other related methods and baselines in order to establish the superiority of our method. We show that compared to single-agent chain-of-thought approaches, the use of multiple agents allows for a clean separation of agent contexts, decentralisation of action spaces and the opportunity for parallel computation.\\n- We provide a comprehensive user study to evaluate the effectiveness of our system in detecting and explaining misinformation and show that insights from our system are able to increase overall accuracy for the task of detecting misinformation.\\n\\n## New Baseline Analysis:\\nIn response to reviewer feedback, we have added a new baseline to our updated submission: **GPT-4o with external context but without the debate framework**. This baseline isolates the effect of the debate setup, allowing us to directly demonstrate its critical role in MAD-Sherlock\\u2019s performance. Results from our updated experiments show that the inclusion of multi-agent debates improves both detection accuracy (from 86% to 90%) and explanation quality. This strengthens our claim that the debate setup is not only beneficial but fundamental to the effectiveness of our method.\\n\\nWe are confident that this addition further substantiates our methodology and its contributions, and we thank the reviewers for inspiring us to conduct this crucial analysis.\"}", "{\"metareview\": [\"**Summary:**\", \"The authors introduce a multiagent multimodal misinformation detection system that focuses on out-of-context (OOC) image usage to produce false narratives. Two or more independent LLM agents debate over several rounds to ideally reach a consensus about whether an image-text pair is misinformation. The results indicate that the system can effectively identify OOC multimodal misinformation and assist human fact-checkers with varying expertise and levels of experience.\", \"**Strengths:**\", \"This is a simple concept that intuitively should lead to improvement, given prior positive results using multiagent communication to improve LLM reasoning for tasks like mathematical problem-solving. While not particularly technically innovative, this is a relatively novel use case.\", \"The LLM explanations of OOC image uses improve transparency of model decision-making and also provide a potential tool for human fact-checkers to identify problematic cases (as concretely shown by Tables 3/4 where their system improves human accuracy).\", \"The authors seem to have been careful and rigorous in their implementation. 
Certain details like the debate setup comparison in Table 1 and having agents explicitly point out each other's inconsistencies or ambiguities could inform future multiagent debate research.\", \"The comparison of laypeople, journalist and academic misinformation detection performance with and without the system in Table 4 is very compelling.\", \"**Weaknesses**\", \"The system is evaluated on only one (albeit large-scale) dataset.\", \"Critical ablations and baseline comparisons are missing.\", \"Overall, this is a very interesting paper but not entirely convincing in its present state. It would benefit from polishing and more comprehensive comparisons before publication.\"], \"additional_comments_on_reviewer_discussion\": \"The reviews were mixed, and all borderline. The primary concerns appear to be (1) lack of ablations to confirm the effectiveness of the debate and narrow margins in the performance improvement, (2) the inefficiency of multiagent debate, and (3) comparison with a single OOC image-text benchmark (NewsClippings). While this is a well-known and widely used benchmark within the field, it would significantly strengthen the paper's results if the authors can confirm improvement on other evaluation sets. By the author's own admission, their experience with the benchmark \\\"highlights the importance of transitioning to more recent and relevant datasets.\\\" For (1), the authors performed a single-agent comparison on a subset of the data, but were not able to perform the full analysis yet. Since this baseline is important to confirm the validity of the paper's findings, I believe these results need to be included and reviewed before publication.\"}", "{\"title\": \"Rebuttals to Questions and Final Words\", \"comment\": \"## Questions\\n\\n**Q1**: _Why introduce the debate framework to address the OOC problem? It would be helpful if the authors could clarify the insight behind this choice._\\n\\nAddressed under rebuttal to weaknesses.\\n\\n**Q2**: _For the different debate strategies, how do they vary in addressing the OOC problem? Apart from the results shown in Table 1, is there additional analysis or explanation provided here?_\\n\\nThe debating strategies were part of a preliminary set of experiments to determine which one would be best suited for extensive experimentation going forward. The main objective behind trying different debating strategies was not to directly detect OOC but to see which interaction configuration enabled the most substantial discussions and allowed for better explainability. We agree that this should be further clarified in section 3.1 to avoid possible confusion.\\n\\n**Q3**: _How can it be ensured that the introduction of external information retrieval will not lead to label leakage issues?_\\n\\nAddressed under rebuttal to weaknesses.\\n\\n## Final Words\\nOnce again, we thank the reviewer for their valuable insights and feedback. We hope we have sufficiently addressed their concerns and the reviewer would consider increasing their score.\"}", "{\"title\": \"Rebuttals to Weaknesses\", \"comment\": \"We thank the reviewer for their feedback and address their concerns below.\\n\\n## Weaknesses\\n\\n**Weakness-1**: _\\\"A baseline with the LLM and retrieval system used but without debate - similar to actor-skeptic but without the skeptic - seems missing. I feel like this is important to understand how much debate is actually helping, since it isn't guaranteed that it would perform worse than e.g. 
some less effective forms of debate that might confuse things more, or compared to other methods from the literature which might be using models weaker than GPT-4o.\\\"_\\n\\nWe agree that including a single-agent baseline enhances comparison and performance analysis. Accordingly, we have included preliminary but statistically significant results for the requested single-agent baseline on a randomly sampled subset (10% of the full dataset) in our updated submission. Results on the entire dataset will be included in the camera-ready version.\\nOur findings confirm that the debate setup significantly outperforms the single-agent setup, providing strong evidence for our hypothesis that multi-agent debate offers inherent advantages by leveraging separate context windows (as suggested in prior work, e.g. https://arxiv.org/abs/2305.14325). Furthermore, our qualitative analysis highlights that the debate setup improves explainability, as distinct context windows allow for better role-specific separation. This critical insight has been added to the updated submission.\\n\\n**Weakness-2**: _\\\"Cost and time efficiency are not reported. This also connects to the previous point, and seems a key consideration when comparing multiple LLMs engaging in multi-turn debates, which could be significantly more costly than e.g. a single LLM setup. A high cost could be an important limitation, and regardless, important information for readers considering if they could apply the work.\\\"_\\n\\nWe appreciate the reviewer\\u2019s suggestion to include details on cost and time efficiency, which we have added to the updated submission. Our approach avoids finetuning, making it significantly more cost-effective compared to prior methods reliant on extensive finetuning. While we report results with a more powerful model, our system is model-agnostic and can readily use any open-source alternative for greater cost and time efficiency. Given the notable performance gains, we believe the overall cost of our method is well-justified.\\n\\n**Weakness-3**: _\\\"Although - aside from the baseline point mentioned above - the comparisons with existing methods are extensive, they are all performed on a single dataset. The margin compared to the next-best performing approach (Sniffer with fine-tuning) is only about 1.7%, and there are no error bars reported. So, it's not very clear how definitive the performance conclusions are.\\\"_\\n\\nWith regard to concern around reporting results only on a single dataset, we report all results on the NewsCLIPpings dataset which is the community accepted benchmarking dataset for the task of out of context misinformation detection. We do this in order to compare to existing baseline methods. We would also like to emphasize that our proposed method does not require any finetuning compared to Sniffer and we not only significantly outperform the unfinetuned version of the model but also the finetuned version therefore achieving state of the art performance across all baselines and related methods.\\n\\n**Weakness-4**: _\\\"Discussion of Lin et al (line 167): it's clear the current work is quite different. It's less clear to me, though, why this work is highlighted in general, given that it is so different, including entirely different domain. 
Maybe this could be contextualized a bit more broadly in terms of approaches to classification by debating LLMs, or some other connecting insight or argument beyond \\\"here's another work that used debating LLMs\\\".\\\"_\\n\\nWe also agree that this work does not directly relate to MAD-Sherlock. The mentioned work, approaches the problem of harmful meme detection using a multi-agent setup. We initially included it to provide a more comprehensive overview of related works and this work is one of the few to use debating multi-modal models. We understand it might not directly relate to our work here since the problem of misinformation detection in the news domain is significantly different from that of detecting harmful or offensive memes. We have moved this work to the appendix in our updated submission.\\n\\n**Weakness-5**: _\\\"Section 3.1: I was a bit unclear when first reading this on what is background information on possible ways a debate could be structured, vs. what you actually test yourselves. Maybe the wording could be a bit more explicit that you test all of these.\\\"_\\n\\nWe would like to thank the reviewer for bringing this to our notice. We agree that making the fact that we test all debate setups in order to select the best one more explicit would make the section more clear. We have now incorporated this into our updated submission.\"}", "{\"title\": \"Rebuttals to Weaknesses\", \"comment\": \"We thank the reviewer for taking the time to carefully consider our paper. We are happy to address their concerns and add corresponding improvements to our paper.\\n\\n## Weaknesses\\n**Weakness-1**: _\\\"The paper primarily addresses the Out-of-Context (OOC) issue of fake online content. However, it does not provide a detailed explanation or analysis of why the debate approach was introduced and how it effectively addresses the OOC problem.\\\"_\\n\\nFirstly, we opt for a multiagent setup to allow for clean separation of agent contexts and decentralisation of action spaces. In addition to this, the problem of OOC misinformation detection requires looking at the input from multiple perspectives which is something a single model is less equipped to do. We therefore opt for a multi-agent setup. With regard to the debate setup in particular, in our work we extend the insights from [1], which shows that robustness and interpretability of Large Language Models (LLMs) is improved when multiple LLMs are placed in a debate environment. Our motivation to apply multi-agent debate to OOC misinformation detection is also partially motivated by the successful application of such settings to reasoning tasks, which we believe are at least somewhat related to OOC detection tasks [2]. We structure the conversation between different agents as a debate to allow for difference of opinion and cleanly separates role contexts, which we find enables models to uncover different elements of misinformation. \\n\\nDuring preliminary experiments we also explored non-debate configurations and observed agents often converged rapidly to the same answers regardless of correctness. This limits the agents\\u2019 ability to look at the input from different aspects. In contrast to this, we observe that putting models in a debating setup with the ability to change line-of-reasoning and debate stance, enabled more informed and substantive discussions around the potential elements of misinformation in the input, which in turn helped improve performance and explainability. 
We can include experimental evidence on a smaller subset of the data to further support these claims.\\n\\nIt is also important to address how \\u201cdebates\\u201d are defined in our work, which we should include in the paper as well. Agents are not provided predefined stances and are allowed to choose independent positions based on the input. Agents are further allowed to change their position mid-\\u201ddebate\\u201d therefore engaging in a debate or reaching consensus. While we refer to this form of interaction between the agents as a \\u201cdebate\\u201d, it diverges from the conventional setup of one.\\n\\n**Weakness-2**: _\\\"The introduction of external information retrieval can lead to label leakage issues.\\\"_\\n\\nWe appreciate the concern around label leakage issues that can arise from the external information retrieval. However, we would like to clarify that our methodology does not involve finetuning or training the models on any sort of retrieved information. The external retrieval module operates independently of model parameters which remain unchanged during our experiments. The primary focus of our work is to propose a novel approach towards explainable misinformation detection without any domain-specific finetuning. Therefore we believe that the concern around label leakage is not applicable to our setup. Our external information retrieval module is designed to provide agents with additional context related to the input and is independent of the ground truth labels. \\n\\nIf the concern is around the test data itself, we can confirm that a thorough qualitative analysis was performed to ensure that the retrieved information does not contain the label itself and is only limited to news articles related to the image.\\n\\nShould there remain concerns related to specific scenarios, we are open to conducting ablation studies to explicitly establish and demonstrate the absence of label leakage issues. Could the Reviewer kindly clarify if our understanding of their concerns is correct?\"}", "{\"summary\": \"MAD-Sherlock, a Multi-Agent Debate system for detecting out-of-context (OOC) misinformation, addresses issues of existing AI detection systems. It introduces a novel framework where multimodal agents collaborate to assess context consistency and request external info. Enables explainable detection with high accuracy without domain-specific fine-tuning. The experimental results confirm external retrieval improves accuracy, and user studies show it boosts performance for both experts and non-experts, making it a powerful tool for intelligence applications.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a meaningful problem in multimodal fake content detection: OOC (Out-of-Context).\", \"The method proposed in the paper is very interesting and achieved state-of-the-art (SOTA) results, validating its effectiveness.\", \"The paper is written in great detail.\"], \"weaknesses\": [\"The paper primarily addresses the Out-of-Context (OOC) issue of fake online content. However, it does not provide a detailed explanation or analysis of why the debate approach was introduced and how it effectively addresses the OOC problem.\", \"The introduction of external information retrieval can lead to label leakage issues.\", \"The selection of baselines in the paper is not enough; it should include some multimodal fake online content detection methods[1,2] as baselines for comparison. 
For example, models like GPT-4o were not designed specifically for fake online content detection, so the comparison methods in the paper lack convincing power.\", \"This paper lacks sufficient ablation experiments to demonstrate the effectiveness of each component of MAD-Sherlock and its contribution to the overall performance.\", \"In the section 4.3.1,\", \">We also observe a significant performance increase when the agent believes it is conversing with a human instead of another AI agent\", \"the paper lacks an explanation and analysis to clarify why this phenomenon occurs.\", \"So, I think this paper needs further work.\"], \"references\": \"[1].Chen Y, Li D, Zhang P, et al. Cross-modal ambiguity learning for multimodal fake news detection[C]//Proceedings of the ACM web conference 2022. 2022: 2897-2905.\\n\\n[2]. Qian S, Wang J, Hu J, et al. Hierarchical multi-modal contextual attention network for fake news detection[C]//Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. 2021: 153-162.\", \"questions\": \"1. Why introduce the debate framework to address the OOC problem? It would be helpful if the authors could clarify the insight behind this choice.\\n\\n2. For the different debate strategies, how do they vary in addressing the OOC problem? Apart from the results shown in Table 1, is there additional analysis or explanation provided here?\\n\\n3. How can it be ensured that the introduction of external information retrieval will not lead to label leakage issues?\\n\\nAdditionally, it would be helpful if the authors could address the issues mentioned in the \\\"weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttals to Weaknesses (continued)\", \"comment\": \"**Weakness-3**: _\\\"The selection of baselines in the paper is not enough; it should include some multimodal fake online content detection methods[1,2] as baselines for comparison. For example, models like GPT-4o were not designed specifically for fake online content detection, so the comparison methods in the paper lack convincing power.\\\"_\\n\\nThe baselines selected for our work represent a comprehensive range of state-of-the-art methods in the field, encompassing diverse approaches from simple MLP fine-tuning models to advanced multimodal reasoning frameworks. These baselines were carefully chosen to provide a balanced and rigorous evaluation of our method\\u2019s performance against both foundational and cutting-edge techniques.\\nWhile the works suggested by the reviewer are interesting, we do not find them directly comparable to our method. Specifically, the suggested methods target broader multimodal tasks or are optimized for use cases distinct from the detection of fake online content. Including such baselines, while informative, would not provide an equitable comparison given the methodological and task-specific differences.\\nWhile both MAD-Sherlock and COOLANT [3] address multimodal fake news detection, their objectives, methodologies, and scopes differ fundamentally, making direct comparisons unsuitable. MAD-Sherlock focuses on misinformation detection through multi-agent debate strategies that simulate human reasoning, emphasizing explainability and the integration of external contextual information. 
In contrast, COOLANT optimizes feature alignment and aggregation using cross-modal contrastive learning within a dual-encoder framework, prioritizing classification accuracy rather than reasoning or contextual adaptability. Furthermore, MAD-Sherlock evaluates on the NewsCLIPpings dataset with an emphasis on reasoning under complex misinformation scenarios, while COOLANT is tailored to social media datasets like Twitter and Weibo, focusing on alignment-based classification tasks. These distinctions underline that the two approaches address different aspects of multimodal fake news detection, making direct comparisons impractical. In addition to this, COOLANT also suffers from the same short-comings as the other, possibly more relevant, methods we compare against: 1. It lacks the essential component of reproducibility, 2. It requires extensive finetuning, while MAD-Sherlock does not require any fine-tuning.\\n\\nThe second suggested work (HMCAN) [4] employs a methodology from 2021, using ResNet for image feature extraction and BERT for textual feature extraction, which are then fused through a multi-modal contextual attention network. While this approach was notable at the time, our paper already includes comparisons with more advanced and contemporary methods that better align with the current state of the field. Additionally, HMCAN is limited to providing binary classification scores and does not address explainability, which is a core focus of our proposed system.\\n\\nAlso both of the above works, might be more inclined towards Chinese content and fake news detection, which is not supported by our system. We currently are unable to use datasets that have been created using content from websites in languages other than English since our external retrieval module only supports English, as well as the lack of multilingual datasets. As an extension of the project we would like to add multilingual capabilities to the system and have included it as one of the future works. We believe that preliminary multilingual capabilities can be added through in-context instructions, although support for minority languages may be harder to attain - which is, of course, a reflection of systemic issues, and not our specific research project.\\n\\nIf the reviewer is still concerned about our selection of baseline methods, we would be open to adding more related methods for comparison in our camera ready submission as long as: \\nthose methods have not been shown to be outperformed by our baselines, and,\\nthe methods are amenable to explainability.\\n\\n**Weakness-4**: _\\\"This paper lacks sufficient ablation experiments to demonstrate the effectiveness of each component of MAD-Sherlock and its contribution to the overall performance.\\\"_\\n\\nWe agree that this would be a valuable addition to the paper and would be including this in our camera ready submission. We thank the reviewer for bringing this to our attention.\"}" ] }
BqtoARyz7Y
RGB-Event ISP: The Dataset and Benchmark
[ "Yunfan LU", "Yanlin Qian", "Ziyang Rao", "Junren Xiao", "Liming Chen", "Hui Xiong" ]
Event-guided imaging has received significant attention due to its potential to revolutionize instant imaging systems. However, prior methods primarily focus on enhancing RGB images in a post-processing manner, neglecting the challenges the image signal processor (ISP) faces in dealing with event sensors and the benefits events provide for reforming the ISP process. To address this, we conduct the first research on event-guided ISP. First, we present a new event-RAW paired dataset, collected with a novel but still confidential sensor that records pixel-level aligned events and RAW images. This dataset includes 3373 RAW images with $2248\times 3264$ resolution and their corresponding events, spanning 24 scenes with 3 exposure modes and 3 lenses. Second, we propose a conventional ISP pipeline to generate good RGB frames as reference. This conventional ISP pipeline performs basic ISP operations, e.g., demosaicing, white balancing, denoising and color space transforming, with a ColorChecker as reference. Third, we classify the existing learnable ISP methods into 3 classes, and select multiple methods to train and evaluate on our new dataset. Lastly, since there is no prior work for reference, we propose a simple event-guided ISP method and test it on our dataset. We further put forward key technical challenges and future directions in RGB-Event ISP. In summary, to the best of our knowledge, this is the very first research focusing on event-guided ISP, and we hope it will inspire the community.
[ "event camera", "image signal processor", "color correction", "denoising" ]
Accept (Poster)
https://openreview.net/pdf?id=BqtoARyz7Y
https://openreview.net/forum?id=BqtoARyz7Y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wmxv1nMC6Z", "vhmwlfTPP3", "vMCYQkrLPx", "sGEvORdgk4", "qyLRgP8TlW", "mJSz0vViSz", "jbHDmtFxTZ", "iUlMJTjHWp", "fNtyWQxEcz", "crez02FtPv", "c2wS3ZZ5Pt", "biuhgzzR0P", "Z1tYsNw7er", "WF5xKhLpvI", "VmnkHl5Vux", "Un0qE6hv5L", "PbyaOqSqON", "O0DZD1pXjS", "Nr7TkHfkYJ", "Nk5S5m0kv0", "IUl5Q9McYq", "Et0j9EOW4x", "EIL0mxpl6N", "CmoF7UE4cJ", "9JyW4U6BUU", "7taL8oH5wR", "6H6oWioI69", "4oztEWbQ9g", "4GVAv9LSX1" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732670286998, 1732510669619, 1732677277910, 1732019430212, 1737523640207, 1732675829705, 1732498746754, 1734668368600, 1732019194644, 1732018877182, 1732029489008, 1733027138321, 1732020736987, 1732020607531, 1732677557389, 1730208744681, 1730718300057, 1732498518671, 1732499742049, 1729047061296, 1732675519792, 1732018844010, 1730697797895, 1731464790631, 1732506785088, 1732498077320, 1732674204039, 1732020943581, 1732498298905 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_AvAS" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_fVak" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Area_Chair_MiXd" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_fVak" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_qDHH" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_vrds" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_vrds" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_AvAS" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_q8r6" ], [ "ICLR.cc/2025/Conference/Submission4440/Reviewer_qDHH" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ], [ "ICLR.cc/2025/Conference/Submission4440/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your thoughtful response. The small size of the dataset is one of the reasons why this paper may not stand out as much, especially since it's focused on dataset and benchmark development. However, considering this is my first attempt at such work, I\\u2019ve decided to maintain the original positive score. 
Wishing you the best of luck!\"}", "{\"title\": \"Ensuring Accessibility and Transparency: A Commitment to Open Data for Event-Based Vision Research\", \"comment\": \"Thank you for your timely response.\\n\\n**We are delighted to see your positive recognition that the dataset is a major contribution of this paper.** Your acknowledgment is very encouraging to us.\\n\\nWe assure you that **all datasets, along with the training and testing code, will be made publicly available to ensure clarity and accessibility for everyone**. \\n\\nDue to ICLR\\u2019s anonymous review policy, we are unable to share the dataset link with you directly at this stage. However, we will privately share the data with the Area Chair to ensure accessibility during the review process.\\n\\nFurthermore, we commit to making the dataset fully public immediately after the paper is accepted (upon lifting anonymity). **Aligned with ICLR\\u2019s OpenReview policies, we believe our commitment will also be closely observed by the event-based vision community.** \\n\\nWe hope this clarifies your concerns and we look forward to further discussions.\"}", "{\"title\": \"Grateful for Your Encouraging Support\", \"comment\": \"Thank you for your thoughtful and constructive feedback. We are very pleased to hear that our clarifications have addressed your concerns and helped improve your understanding of the paper. We are truly grateful for your decision to increase your score, as it greatly encourages and motivates us.\\nYour valuable suggestions have played an important role in enhancing the quality of our research, and we sincerely appreciate your support. Once again, thank you for your time, effort, and helpful feedback.\"}", "{\"comment\": \"Dear Reviewer fVak:\\n\\nThank you for your thoughtful feedback and valuable suggestions. We greatly appreciate the time and effort you have taken to review our work. Below, we provide detailed responses to your concerns and describe how we have addressed them in the revised manuscript.\\n\\n### Q.1 Concern about Dataset Size\\nThank you for pointing this out.\\nThis concern has been discussed in detail in the section \\\"Summary and Answers of Official Reviews (2/2).\\\" Please refer to this section for our comprehensive response.\\n\\n\\n### Q.2 Elaboration on the Role of Event Data in Each ISP Stage\\nThank you for raising this important point. To clarify, in the Section CONTROLLABLE ISP stage, we used a ColorChecker-based ISP pipeline to generate reference images as ground truth. In this stage, tasks such as demosaicing and color correction are performed using traditional computational methods. The evaluarion results are demonstrated in Figure 5 of the main paper. This process does not involve the use of event data.\\n\\nInspired by your suggestion, we have included additional analysis in the supplementary materials to explore how event characteristics could assist in improving RAW-level ISP tasks when integrated into deep learning frameworks. We believe this will provide valuable insights for future research directions and further demonstrate the potential of event-guided ISP.\\n\\n### Q.3 Refine Writing\\nThank you for highlighting the need for improved clarity in writing. We have thoroughly reviewed the manuscript to correct spelling errors and ensure precise language throughout. Your feedback has helped us significantly improve the readability and robustness of the paper.\\n\\n### Q.4 Hyperparameters of Comparative Algorithms\\nThank you for the suggestion. 
We have conducted additional experiments with CameraNet using optimized hyperparameters. These results, included in the revised manuscript, provide a more comprehensive and fair comparison of baseline methods.\\n\\n### Q.5 Alignment of EVS and APS\\nWe appreciate your question regarding the alignment between EVS (Event-based Vision Sensor) and APS (Active Pixel Sensor) data. In the appendix (Section A, Pages 16\\u201319), we provide a detailed explanation of the temporal alignment process. Specifically, the EVS and APS sensors are equipped with unified timestamps, enabling precise alignment at the pixel level. This ensures that event data is synchronized with the rolling shutter frames of the APS output, allowing for accurate integration in hybrid sensor tasks.\\n\\nOnce again, we thank you for your constructive feedback. Your comments have not only helped us address key concerns but also inspired us to refine our work and present a more robust contribution to the field. We hope our responses and revisions address your concerns satisfactorily.\\n\\nBest\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Acknowledging Clarifications and Adjusting Assessment\", \"comment\": \"Thank you for addressing my questions. I feel that I have gained a deeper understanding of the paper, so I have increased my score and confidence accordingly.\"}", "{\"title\": \"Looking forward to further discussions with reviewer qDHH.\", \"comment\": \"Thank you for your time and attention. In the revision paper, we have elaborated on the dataset scale and provided additional examples to showcase its diversity. Additionally, we have committed to releasing the dataset and benchmark code upon acceptance of the paper. We hope you have had the opportunity to review our revisions and look forward to further engaging discussions with you.\"}", "{\"metareview\": \"The paper introduces an event-RAW paired dataset and an event-guided ISP pipeline, addressing a clear gap in the ISP domain. The reviewers acknowledged the significance of the dataset and the approach but raised concerns about the dataset's scale, reproducibility, and the method's limited performance gains. The authors' revisions addressed these concerns effectively by expanding dataset details, clarifying event integration, and providing additional evaluations. While some limitations remain, the work lays a good foundation for future research in event-guided ISP. The AC recommends acceptance for its valuable contribution to the community and potential to inspire further advancements.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about dataset scale, reproducibility, clarity of event integration, and performance of the proposed method. The authors addressed these by expanding dataset descriptions, improving clarity, adding visual examples, and explaining alignment processes. Reviewer q8r6 gave the most critical rating. During the rebuttal period, the AC determined that the authors adequately addressed Reviewer q8r6's concerns. Despite multiple reminders, Reviewer q8r6 provided no further feedback. A few reviewers raised concerns about the dataset's size and/or availability. The AC concluded that the dataset is still significantly larger than existing benchmarks and is confident that the authors will release it as promised. 
The rebuttal effectively clarified key concerns, leading to a final decision favoring acceptance for the paper's foundational contribution.\"}", "{\"comment\": \"Dear Reviewer, vrds:\\n\\nThank you for your valuable suggestions.\\nWe have carefully revised the paper to address your concerns to the fullest extent possible. Specifically, we have added Section A and Section B in the supplementary materials (pages 16\\u201320) to provide detailed explanations about the imaging process, underlying principles, and dataset scale. These additions aim to clarify and enhance the understanding of the key aspects you highlighted.\\n\\nBelow are specific answers to your questions.\\n\\n### Q.1 Concern about Dataset Size for ISP tasks:\\nThank you for your suggestion and concern. We have answered the question about the scale of the dataset in Summary of Official Reviews (2/2). We hope that this will resolve your doubts.\\n\\n### Q.2 Open-Sourcing the Dataset\\nWe promise that the dataset, benchmark code and pre-train model will be open sourced upon the acceptance of this paper. This will allow the broader community to access, evaluate, and build upon our work.\\nA clear statement regarding dataset release has been added to the paper.\\n\\n### Q.3 Concern about Reproducibility\\nThank you for your concern regarding the reproducibility of our experiments with the sensor. We appreciate this opportunity to provide further clarification.\\nWe would like to emphasize that the sensor is expected to be commercialized and made publicly available in the near future.\\n\\nOur research serves as an early-stage exploration of its potential applications, aiming to uncover the benefits and challenges of integrating such technology into ISP tasks. We hope that this work can inspire further advancements in this field and help guide the community's adoption and innovation around this promising technology.\\n\\n### Q.4 & Q.5 More Background of Dataset & More Explain about Events\\nThank you for your suggestion. We agree that clarifying the dataset\\u2019s background and unique characteristics will greatly benefit readers.\\nIn the appendix, we have added section A to explain the imaging principles, features, and unique advantages of this sensor. Additionally, we have included new examples to highlight its performance in challenging scenarios, such as high-speed motion and low-light conditions, demonstrating the benefits of this hybrid sensor.\\n\\nBest\"}", "{\"title\": \"Summary and Answers of Official Reviews (2/2)\", \"comment\": \"Thank you to all the reviewers for their hard work and careful review. Here we answer the questions of common concern.\\n\\n## Concern about Dataset Size for ISP tasks:\\n\\nWe acknowledge the reviewer\\u2019s concern regarding the limited scale of our dataset. Below, we provide a detailed explanation to justify the sufficiency and significance of our dataset for ISP tasks:\\n\\n### 1. ISP Tasks Focus on Pixel-Level Data\\u00b7\\nUnlike traditional perception tasks, such as face detection, which often require datasets with tens of thousands of examples, ISP tasks focus on processing pixel-level data.\\nIn ISP, every pixel with different neighbors can be considered an example, making the requirements of dataset size fundamentally different.\\nTherefore, our dataset is not only sufficient in terms of quantity (3,373) but also provides high-resolution (2248 \\u00d7 3264) samples that are effective for training and testing ISP models.\\n\\n### 2. 
Size Comparison with Related Dataset\\nOur work is inspired by the MIPI Demosaic Workshop [a], which represents the most closely related study in this field. Compared to the MIPI dataset, our dataset is significantly larger, containing over 3,373 real images. This is four times the size of the MIPI dataset, which includes only 800 images. Moreover, our dataset offers higher-resolution images, with dimensions of 2248 \\u00d7 3264 pixels (approximately 7.3 million pixels per image), while the MIPI dataset has resolutions around 2K, such as 2040 \\u00d7 1356 pixels.\\n\\nAlthough the MIPI dataset is only about one-fourth the size of ours, it is still sufficient to train large networks, such as transformers [b]. Our dataset also supports more comprehensive training and testing.\\n\\nMore importantly, the MIPI dataset is entirely composed of simulated data [a], whereas our dataset is based on real-world data, providing a more realistic representation for ISP tasks. Additionally, our dataset includes authentic event streams, enabling research into event-guided ISP, which was not possible with the MIPI dataset.\\n\\nFurthermore, the representative works [c,d,e] in ISP is summarized in Table B. These datasets all contain no more than 200 training samples. However, they feature high-resolution images, which are sufficient to support effective model training and evaluation.\\n\\nIn summary, in ISP tasks, the size of the dataset is not only related to the number of images but also to their resolution. Our dataset is sufficient in both aspects.\\n\\nTable B, Size Comparison of Related Datasets\\n| Dataset | Resolution | Scalse | Real-World | Events | Tasks| Publication |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| Ours | 2248 \\u00d7 3264 | 3373 | Yes | Yes | hybrid sensor ISP | |\\n| MIPI[a,b] | 2K, e.g. 2040 \\u00d7 1356 | 800 | No | No | hybrid sensor ISP | CVPR 2024 |\\n| ISPW[c] | 1368 \\u00d7 1824, 4480\\u00d76720 | 197 | Yes | No | ISP | ECCV 2022 |\\n| NR2R[d] | 3464\\u00d75202 | 150 | Yes | No | ISP | CVPR 2022 |\\n| DeepISP [e] | 3024\\u00d74032 | 110 | Yes | No | ISP | IEEE TIP 2018 |\\n\\n\\n\\n### 3. Diversity in Scenes and Conditions\\nOur dataset includes a wide variety of lenses, scenes, and lighting conditions. This diversity supports comprehensive testing across different scenarios, ensuring the dataset\\u2019s relevance and utility for a broad range of ISP tasks.\\n\\nTo further substantiate the sufficiency of our dataset, we have included a new section in the revised appendix of our paper.\\nThis table highlights the scale, diversity, and resolution of our dataset compared to existing ISP-related benchmarks.\\n\\n### 4. Long-term Maintenance\\nOur dataset provides a comprehensive resource for ISP research. Its real-world nature distinguishes it from existing datasets and makes it particularly well-suited for event-guided ISP tasks. Moreover, we are committed to the long-term maintenance of this dataset and plan to expand it in the future to accommodate larger and more complex tasks.\\n\\n\\n## Reference\\n\\n- [a] Wu, Yaqi, et al. \\\"MIPI 2024 Challenge on Demosaic for Hybridevs Camera: Methods and Results.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n- [b] Xu, Senyan, et al. \\\"DemosaicFormer: Coarse-to-Fine Demosaicing Network for HybridEVS Camera.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n- [c] Shekhar Tripathi, Ardhendu, et al. 
\\\"Transform your smartphone into a dslr camera: Learning the isp in the wild.\\\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\\n- [d] Li, Zhihao, Si Yi, and Zhan Ma. \\\"Rendering nighttime image via cascaded color and brightness compensation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n- [e] Schwartz, Eli, Raja Giryes, and Alex M. Bronstein. \\\"Deepisp: Toward learning an end-to-end image processing pipeline.\\\" IEEE Transactions on Image Processing 28.2 (2018): 912-923.\"}", "{\"title\": \"Summary of Revision Paper\", \"comment\": \"Dear Reviewers,\\n\\nWe thank all reviewers' thorough and careful evaluations. Here, we provide a summary of the revisions made to our paper.\\n\\n**Please note that due to file size limitations, we have compressed the PDF document, which may result in blurry images. However, the original, clear PDF can be downloaded from the supplementary materials.**\\n\\nBelow, we outline the key revisions made to the paper.\\n\\n**Section A**\\n\\nWe have added a new section titled \\\"HYBRID SENSOR IMAGING PROCESS, PRINCIPLES, AND POTENTIAL.\\\"\\n- (1) Firstly, we included the imaging principles to help readers better understand the sensor's details.\\n- (2) Next, we analyzed the advantages of event data through theoretical analysis and practical demonstrations. For example, in Figure 9, we present cases of fast motion and low-light scenes.\\n- (3) Moreover, we examined the time alignment in imaging, which addresses the issue of synchronizing the two types of data.\\n- (4) Finally, we explored the potential of our approach in various ISP tasks, providing insights for future research.\\n\\n**Section B**\\n\\nWe have added more dataset examples and a scale analysis.\\n- (1) To address the reviewers' concerns, we included additional examples from our dataset.\\n- (2) Furthermore, we analyzed the size of our dataset and compared it with studies most similar to ours. Compared to the latest research, our dataset is four times larger.\\n- (3) We also compared our dataset with classic ISP methods, demonstrating that its scale is sufficient to support training.\\n- (4) **Diversity Testing with GPT-4o**: We conducted a comprehensive analysis of dataset diversity using ChatGPT-4. This includes statistical evaluations of captured scenes, objects, lighting conditions, and weather variations, offering a clearer understanding of the dataset's richness.\\n\\n**Section E**\\n\\nWe added a discussion on how hyperparameters affect model training results.\\n\\n**Section F**\\n\\nWe included a discussion of the results of various methods on the entire dataset.\\n\\nBest\\n\\nICLR-4440 Authors\"}", "{\"title\": \"Looking Forward to Further Discussions with Reviewer q8r6\", \"comment\": \"Dear Reviewer q8r6,\\n\\nThank you again for your valuable comments. We have carefully addressed your feedback and updates into the revision paper.\\n**We sincerely hope to know if there are any unresolved concerns and look forward to engaging in further discussions with you.**\\n\\nSpecifically, we have made the following revisions to address your points:\\n\\n### 1. Sec. B [Q.1, Q.2]:\\n - Added additional visual examples (Fig. 11 and Fig. 13) showcasing various dataset scenarios.\\n - Included qualitative analyses (Fig. 12) summarizing key elements, lighting conditions, and their distributions.\\n### 2. Sec. 
A [Q.3]:\\n - Expanded discussion on the imaging principles of events and frames, emphasizing the advantages of event data for ISP tasks.\\n - Clarified the evaluation of the controllable ISP framework, which relies on ColorChecker-based assessments (Fig. 5). While intermediate processes like demosaicing and white balancing lack ground truth for quantitative analysis, we provided an overall pipeline performance evaluation.\\n### 3. Fig. 6 and Fig. 7 [Q.4]:\\n - Revised these figures as per your suggestion to reduce redundancy and improve clarity.\\n### 4. Tab. 8 and Sec. F [Q.5]:\\n - Conducted a comprehensive quantitative analysis of overall dataset performance.\\n### 5. Sec. 5.1, 5.3, D, and F [Q.6]:\\n - Discussed the benefits and challenges of event-data fusion, including its advantages and limitations under artificial lighting.\\n### 6. Fig. 6, Fig. 7, and Fig. 9 [Q.7]:\\n - Improved layout and added visual examples to illustrate the unique strengths of hybrid vision sensors.\\n### 7. Fig. 6, Fig. 7, Fig. 19 and Fig. 20 [Q.8]:\\n - Showcased the performance of the same method under varying scenes, providing insights into dataset diversity.\\n### 8. Sec. 5 and E [Q.9]:\\n - Analyzed task-specific performance across indoor and outdoor scenes, with a focus on lighting conditions and event-data gains.\\n### 9. Sec. 5.1, 5.2, and F [Q.10]:\\n - Provided a simple baseline fusion method, discussing its potential gains and associated challenges for future event-based ISP research.\\n\\n**We sincerely appreciate your time and effort in evaluating our work and look forward to engaging in further discussions to clarify any outstanding concerns.**\\n\\nBest regards,\\nICLR-4440 Authors\"}", "{\"comment\": \"Dear Reviewer qDHH,\\n\\nThank you for your thoughtful review and for highlighting the key contributions of our work. We greatly appreciate your recognition of the following strengths.\\n(1) Dataset: The first of its kind with aligned RAW and event data from a hybrid vision sensor, providing new opportunities for ISP algorithm development.\\n(2) Benchmark Task: Evaluating the dataset with various ISP methods and deriving valuable insights for event-guided ISP research.\\nBelow, we address each of your concerns in detail.\\n\\n### Q.1 Dataset Scale and Diversity\\nThank you for raising this important point. While our dataset consists of 3,373 high-resolution, real-world images, making it the larger datasets for hybrid sensor ISP tasks now.\\nFor a more detailed response, please refer to \\\"Summary and Answers of Official Reviews (2/2).\\\"\\n\\nAdditionally, we plan to maintain and expand this dataset as a dynamic, growing resource for the research community. We aim to enhance its diversity while ensuring its focus on high-quality, real-world data.\\n\\n### Q.2 Reproducibility and Dataset Usage\\nWe greatly appreciate your suggestion regarding early release. However, to comply with double-blind review policies, we are unable to release the dataset during the review process. We are fully committed to making the dataset, benchmark code, and results publicly available immediately upon acceptance of the paper. This timeline ensures fairness and accessibility for the research community.\\n\\n### Q.3 Event-Guided ISP Baseline Evaluation\\nThank you for this constructive suggestion. 
As the first dataset specifically designed for event-guided ISP tasks, our primary goal was to establish a baseline by demonstrating the utility of a simple event fusion approach.\\n**Prior to this work, no research has explored using events to guide the RAW ISP process.**\\n\\nWe recognize the importance of further developing and evaluating more event-guided architectures. To address this, we have included discussions in the revised manuscript to emphasize future directions, including advanced event-guided ISP methods and architectures.\\n\\n### Q.4 Dataset Scale and Diversity:\\nThank you for this question. As mentioned, our dataset is already larger than other similar datasets and sufficient for training and testing ISP models effectively.\\n\\nWe are also committed to expanding its scale and diversity over time by incorporating additional data, scenes, and lighting conditions. This dynamic approach will address current limitations and better serve the research community.\\n\\n### Q.5 Early Data Release:\\nThank you for raising this question. To ensure accessibility and reproducibility, we will release the dataset, benchmark codes, and experimental results immediately upon acceptance of the paper. This timeline ensures compliance with double-blind review policies while supporting the broader research community.\\n\\n### Q.6 Event-Guided ISP Baseline Evaluation:\\nAs this is the first dataset designed for event-guided ISP tasks, we demonstrated the potential of a simple baseline approach in this work.\\n\\n\\nOnce again, we sincerely thank you for your constructive feedback. Your comments have been invaluable in helping us improve the paper and refine its contributions. We hope our responses address your concerns and demonstrate the robustness of our work.\\n\\nSincerely,\"}", "{\"comment\": \"Dear Reviewer AvAS,\\n\\nThank you for taking the time to review our work and for providing thoughtful and constructive feedback. We are grateful for your recognition of the following strengths in our paper:\\n(1) Background: Clearly explained and accessible to readers.\\n(2) Relevance: RGB-event ISP is an interesting and meaningful topic for future cameras.\\n(3) Writing: Well-structured and easy to understand.\\nWe have carefully reviewed your comments and suggestions. Below, we provide detailed responses to your specific concerns.\\n\\n### Q.1 Dataset Scale and Simulated Data Generation\\nThank you for raising this concern. Compared to the most recent similar datasets, our dataset is four times larger in scale. More importantly, our dataset is based on real-world data and includes event outputs, which are critical for hybrid sensor ISP tasks. For a more detailed response, please refer to \\\"Summary and Answers of Official Reviews (2/2).\\\"\\n\\nAdditionally, we appreciate your suggestion about expanding the dataset. We are committed to the long-term maintenance and future expansion of this dataset to include more scenes and conditions. We also believe the current scale is already sufficient to support training and testing models effectively.\\n\\n\\n### Q.2 Trustworthiness of Ground Truth Generated by MATLAB\\nWe understand your concerns regarding the use of MATLAB-based tools for generating ground truth. To clarify:\\n\\n- The MATLAB-based controllable ISP framework is a widely recognized tool in academic research for generating high-quality ground truth images. 
This framework allows us to systematically control key ISP stages, such as demosaicing [a], denoise [b] and color correction [c].\\nIn traditional ISP tasks, this approach is a standard and reliable method in the academic community [d].\\n\\n- To ensure transparency, we have included detailed descriptions of the framework in the manuscript and have conducted quantitative evaluations to validate the accuracy and consistency of the generated ground truth images. These are illustrated in Figure 5 of the main paper and Figure 13 of the Appendix.\\n\\nAdditionally, in practical scenarios, consumer cameras imaging often lacks precise color references.\\nBy including a ColorChecker in every scene and using it to generate accurate color correction matrices, our approach ensures greater color accuracy than methods without such reference data.\\n\\n### Q.3 Integration of Events into Traditional ISP\\nThank you for this valuable suggestion. We have expanded the discussion of the advantages of integrating events into traditional ISP tasks in the revised manuscript. Specifically:\", \"practical_examples\": \"In Section A, Figure 10 of the supplementary materials, we present examples in low-light and fast-motion scenarios. These demonstrate how events provide high temporal resolution and wide dynamic range, highlighting their advantages over RGB-based approaches.\", \"theoretical_analysis\": \"Based on your suggestion, we have added a new section in the supplementary materials analyzing why event data is effective for ISP tasks, including demosaicing and color correction. This analysis provides a theoretical foundation for integrating events into ISP.\\n\\nOnce again, thank you for your constructive feedback. Your comments have significantly improved the clarity and impact of our work. We hope our responses adequately address your concerns and demonstrate the robustness of our contributions.\\n\\nBest\\n\\n## Reference:\\n- [a] Malvar, H.S., L. He, and R. Cutler, High quality linear interpolation for demosaicing of Bayer-patterned color images. ICASPP, Volume 34, Issue 11, pp. 2274-2282, May 2004.\\n- [b] Metzler, Christopher A., Arian Maleki, and Richard G. Baraniuk. \\\"BM3D-AMP: A new image recovery algorithm based on BM3D denoising.\\\" 2015 IEEE international conference on image processing (ICIP). IEEE, 2015.\\n- [c] Westland, Stephen, Caterina Ripamonti, and Vien Cheung. Computational colour science using MATLAB. John Wiley & Sons, 2012.\\n- [d] Sumner, Rob. \\\"Processing raw images in matlab.\\\" Department of Electrical Engineering, University of California Sata Cruz 2 (2014).\"}", "{\"title\": \"Grateful for Your Feedback and Support\", \"comment\": \"Thank you for your kind and encouraging response! We are delighted to hear that our revisions have addressed your concerns and clarified the points you raised. Your decision to increase your score is deeply motivating for us and inspires us to continue striving for excellence in RGB Event ISP.\\n\\nWe deeply appreciate the time and effort you have dedicated to reviewing our paper and providing constructive feedback. Your insights have contributed to improving the quality of this research. Once again, thank you for your time, effort, and helpful feedback!\"}", "{\"summary\": \"This paper introduces the RGB-Event ISP dataset and benchmark, a novel paired dataset that combines RAW and event data from hybrid vision sensors to support research in event-guided image signal processing (ISP). 
The authors design a customizable ISP pipeline, allowing for benchmarking ISP methods and exploring the benefits of integrating event data in RAW-level ISP tasks. The experiments demonstrate the effectiveness of several methods in both outdoor and indoor scenes, and the dataset offers unique insights into ISP improvements through event data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The dataset fills an existing gap by providing paired RAW and event data, which is the first of its kind specifically designed for ISP tasks.\\n2. The proposed ISP pipeline is flexible, covering key ISP stages like black level adjustment, demosaicing, and color correction, making it versatile for testing various ISP algorithms.\\n3. The experimental evaluation is thorough, comparing several state-of-the-art ISP methods and highlighting the advantages of using event data, particularly in dynamic outdoor scenes.\\n4. The paper also addresses its limitations and suggests potential solutions. Specifically, it provides practical insights into the challenges of integrating event data with ISP, including issues like artifacts under artificial indoor lighting.\", \"weaknesses\": \"1. The dataset size is relatively small (3373 images), which might limit the generalizability of models trained on it. Additional data could enhance the applicability of the findings.\\n2. The theoretical justification for how event data contributes to each ISP stage could be more developed. For instance, further details on how event data specifically enhances tasks like white balancing and noise reduction would make the contribution clearer.\\n3. The writing lacks clarity, making the paper difficult to understand for readers not deeply familiar with the field. Enhancing the language clarity and providing more structured explanations would significantly improve the paper's readability and accessibility.\", \"questions\": \"1. In Section 3.1, it would be helpful to elaborate on the role of event data in each ISP stage (e.g., how it improves demosaicing or color correction compared to using RAW data alone).\\n2. There are some spelling errors in the paper. For instance, both \\\"convential ISP pipeline\\\" and \\\"conventional ISP pipeline\\\" appear, where \\\"convential\\\" is incorrect; it should be \\\"conventional ISP pipeline.\\\" Additionally, in Figure 5c, the y-axis should be labeled as \\\"probability density,\\\" but the word \\\"density\\\" seems to have been omitted. \\n3. My concern is that some comparative algorithms have very low PSNR scores. From the varying performance of PyNET and UNet under different hyperparameters, I believe it would also be beneficial to test CameraNet with different hyperparameters. This would allow us to select a high-performance configuration as a baseline, ensuring a fairer and more comprehensive comparison.\\n4. I have a question regarding the alignment between APS (Active Pixel Sensor) and EVS (Event-based Vision Sensor) data. As mentioned in the paper, there is a significant difference in their frame rates. 
How is the event data aligned with the RAW data given this disparity in FPS?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel dataset tailored for event-guided image signal processing (ISP), featuring 3,373 high-resolution RAW images paired with corresponding event data across various scenes, exposure modes, and lenses. This dataset aims to advance RGB-Event ISP research by enabling the development of methods that directly incorporate event information within the ISP pipeline. Besides, they introduce an event-guided ISP neural network as a baseline, fusing events with RAW data to optimize ISP tasks. Then, various ISP methods are evaluated on this dataset, establishing a foundation for future RGB-Event ISP advancements. Note that this work generated high-quality RGB images as ground truth by using a ColorChecker.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tDataset: This is the first dataset with aligned (in both time and spatial space) RAW and events from a new HVS equipment, which may provide some avenues for developing new ISP algorithms.\\n2.\\tBenchmark Task: This work evaluates the proposed dataset by testing various ISP baseline methods along with an event-guided ISP approach. Some conclusions and insights can be drawn from these experiments.\", \"weaknesses\": \"1.\\tScale of the dataset: From my understanding, a robust dataset should have both scale and diversity. Although the authors mention that the dataset is relatively small due to the low stability of the HVS sensor, I am not completely convinced by it, and I still believe it would be beneficial to expand the dataset further to increase its richness and variety. Additionally, the authors could include suggestions in the future work section on strategies for scaling up the dataset.\\n2.\\tReproducibility and Usage of the dataset: Given that this is a dataset/benchmark paper, I strongly encourage the authors to release the data as soon as possible, even during the review stage. Early access is crucial for the community to begin evaluating and utilizing the dataset.\\n3.\\tBenchmark Task: This dataset is tailored for RAW-Event ISP tasks. So, I wonder if we should focus more on designing various Event-guided ISP baselines. In the current setup, most baselines do not utilize the event data, raising concerns about whether these limited event-guided baselines effectively demonstrate the dataset\\u2019s potential. I believe increasing the variety of event-driven baselines could better validate the usefulness of this dataset and highlight its unique contributions to event-guided ISP research.\", \"questions\": \"My questions are highly overlapped with the weakness as follows:\\n1.\\tDataset Scale and Diversity: Could you provide more insights into the limitations that prevented a larger dataset collection, and provide more insights about how to scale up the data?\\n2.\\tEarly Data Release: Is there a timeline or plan for public release of the dataset, especially considering its importance for community validation and use? \\n3.\\tEvent-Guided ISP Baseline Evaluation: Given that the dataset is designed for RAW-Event ISP tasks, consider developing additional baseline methods that explicitly incorporate event data to better assess the dataset's strengths. 
For instance, could you explore a wider variety of event-guided architectures or fusion methods to better showcase the potential of the event data in ISP?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to further discussions with reviewer AvAS.\", \"comment\": \"Thank you for your appreciation of our work. In the revision paper, we have discussed the dataset scale in detail and provided additional examples. Additionally, we have demonstrated the advantages of this camera in motion and low-light scenarios. We hope the revision paper has addressed your concerns and captured your interest. We look forward to engaging with you further.\"}", "{\"title\": \"Looking forward to further discussions with reviewer q8r6.\", \"comment\": \"Thank you for your valuable suggestions. In the revision paper, we have incorporated additional visual examples to intuitively showcase the specific content of the dataset. We have also improved the formatting to make the paper's structure more logical and accessible. Furthermore, we have expanded discussions on dataset scale and background knowledge. We hope you have had the chance to review these updates and look forward to engaging in further discussions with you.\"}", "{\"summary\": \"The paper presents a new event-RAW paired dataset collected with a novel but private sensor that records pixel-level aligned events and RAW images. 3373 RAW images with paired events spanning 24 scenes are captured. A convential ISP pipeline is proposed to generate RGB references and learnable ISP methods are used to train and evaluate on the dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Event-guided ISP seems like a novel and interesting idea.\\n2. A novel sensor is designed to capture the datasets, although it's private and confidential.\\n3. The analysis and results looks good.\", \"weaknesses\": \"1. The proposed is rather small-scaled. There are 24 videos captured in total, with 80 to 140 frames for each video. Considering a FPS of 60, it's just 1-2 seconds. It would be better called an image dataset instead of a video dataset.\\n2. I assume the dataset would be open-sourced? Although the paper does not explicitly state that.\\n3. As the authors stated, the prototype is cumbersome with low stability and also private. This weakens the reproducibility of the work.\\n4. The background of the dataset and issues need further clarifications. It is not clear to me at the moment.\", \"questions\": \"Please explain what is 'event' (with examples) and why event camera is important in addition to RGB camera, maybe in the introduction/related work section. This is important for readers unfamiliar with this topic.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your hard work. My concerns are addressed and I increased my rating to 6.\"}", "{\"title\": \"Summary and Answers of Official Reviews (1/2)\", \"comment\": \"# Summary and Answers of Official Reviews (1/2)\\n\\nDear Reviewers,\\n\\nWe sincerely thank all reviewers for their thoughtful feedback and constructive comments. We deeply appreciate the recognition of our contributions and the valuable suggestions provided to improve our work. Below, we summarize the key strengths of our paper as highlighted by the reviewers in the Table A. 
We are grateful for the reviewers' recognition of the novelty, practical contributions, and thorough analysis presented in our work. These acknowledgments motivate us to further improve the quality and impact of our research.\", \"table_a\": \"Recognized Contributions.\\n| **Contribution** | **Reviewer** | **Official Review** |\\n| ---------------------------------------- | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- |\\n| **1. Novel Research Question** | vrds | \\\"Event-guided ISP seems like a **novel and interesting** idea.\\\" |\\n| | fVak | \\\"The paper presents a **new** event-RAW paired dataset...\\\" |\\n| | AvAS | \\\"The relevant background knowledge of this paper is **clearly explained**.\\\" |\\n| | | \\\"The topic of RGB-event ISP is a very **interesting and meaningful** topic for future cameras.\\\" |\\n| | q8r6 | \\\"This paper **addresses a gap** in the field of ISP and **facilitates further** research on event-guided ISP.\\\" |\\n| **2. New Real-world Dataset** | vrds | \\\"A **novel sensor** is designed to capture the datasets...\\\" |\\n| | fVak | \\\"This paper introduces the RGB-Event ISP dataset and benchmark, **a novel paired dataset** that combines RAW and event data.\\\" |\\n| | | \\\"The dataset **fills an existing gap** by providing paired RAW and event data, the **first** of its kind specifically designed for ISP tasks.\\\" |\\n| | | \\\"The proposed ISP pipeline is **flexible**, making it versatile for testing various ISP algorithms.\\\" |\\n| | AvAS | \\\"I **greatly appreciate** the effort to create a dataset for RGB-Event ISP, which **opens up opportunities** for event-assisted RGB ISP tasks.\\\" |\\n| | qDHH | \\\"This is the **first dataset** providing avenues for developing new ISP algorithms.\\\" |\\n| | | \\\"This paper presents a novel dataset **across various** scenes, exposure modes, and lenses.\\\" |\\n| **3. Conventional ISP Pipeline** | vrds | \\\"A **conventional ISP** pipeline is proposed to generate RGB references...\\\" |\\n| | fVak | \\\"The authors design a **customizable ISP** pipeline, allowing for benchmarking...\\\" |\\n| | AvAS | \\\"Using a controllable ISP pipeline developed by the authors, **high-quality RGB frames** are generated.\\\" |\\n| | qDHH | \\\"This work generated **high-quality RGB** images as ground truth by using a ColorChecker.\\\" |\\n| | q8r6 | \\\"The study presents a conventional ISP pipeline that generates **high-quality RGB** frames for reference.\\\" |\\n| | | \\\"A **conventional ISP** pipeline is proposed and learnable ISP methods are used.\\\" |\\n| **4. Thorough Analysis of Benchmarking** | vrds | \\\"The **analysis and results** look good.\\\" |\\n| | | \\\"The experiments **demonstrate the effectiveness** of several methods in both outdoor and indoor scenes.\\\" |\\n| | fVak | \\\"The experimental **evaluation is thorough**.\\\" |\\n| | | \\\"The paper also **addresses its limitations** and **suggests potential solutions**.\\\" |\\n| | qDHH | \\\"This work evaluates the proposed dataset by **testing various ISP** baseline methods along with an event-guided ISP approach.\\\" |\\n| | | \\\"Some **conclusions and insights** can be drawn from these experiments.\\\" |\\n| | q8r6 | \\\"The trainable ISP methods are evaluated on this event-RAW paired dataset.\\\" |\\n\\nWe sincerely appreciate the reviewers' valuable feedback and constructive suggestions, which have guided us in refining our work. 
In response to your insightful comments, we have carefully addressed each point and made substantial updates to the paper. These revisions have significantly strengthened the robustness and clarity of our research. Thank you again for your support and suggestions, which have greatly enhanced the quality of our paper.\\n\\nBest\\n\\nICLR-4440 Authors\"}", "{\"summary\": \"The authors introduce the first events-RAW paired dataset specifically designed for event-guided image signal processing (ISP) research. This dataset comprises 3,373 high-quality, high-resolution RAW images alongside corresponding pixel-level aligned events. Using a controllable ISP pipeline developed by the authors, high-quality RGB frames are generated. A thorough evaluation and analysis of existing learnable ISPs, as well as a straightforward event-guided ISP method, are performed on this dataset. From this analysis, the authors highlight several key points and challenges associated with event-guided ISP.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The relevant background knowledge of this paper is clearly explained.\\n\\n2. The topic of RGB-event ISP is very interesting topic and meanful for future camera.\\n\\n3. The writing is very well and easy to understand.\", \"weaknesses\": \"1. I greatly appreciate the effort to create a dataset for RGB-Event ISP, which opens up opportunities for event-assisted RGB ISP tasks. However, the dataset's scale is quite limited, with only 3,373 samples, which may not be sufficient to support data-driven learning methods. This raises concerns about the dataset's ability to serve as a professional, standardized, and challenging benchmark. If this is primarily a workload issue, could the authors consider generating some simulated datasets? I would appreciate an explanation regarding this.\\n\\n2. The authors use images generated from a controllable ISP framework based on MATLAB as ground truth. While they provide extensive explanations for this approach, it is difficult to trust software-generated images as ground truth, especially for a professional dataset. This practice differs significantly from the ground truth methods used by existing sensor or smartphone manufacturers for their ISPs.\\n\\n3. The integration of events into traditional ISP theoretically brings certain advantages, and the authors should elaborate on these benefits. Additionally, to demonstrate the advantages of event cameras, the authors should showcase scenes, particularly under high-speed motion or extreme lighting conditions, that highlight the potential benefits of using event-based approaches.\", \"questions\": \"Please see the weaknesses. I have assigned a preliminary score based on the initial manuscript and will consider adjustments depending on the authors' responses and feedback from other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a event-RAW paired dataset of pixel-level aligned events and RAW images, including 3373 images across 24 scenes, for ISP process reforming. The study presents a conventional ISP pipeline that generates high-quality RGB frames for reference, performing basic ISP operations like demosaicing and white balancing. Some existing learnable ISP methods are trained and evaluated on the dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The present paper introduces a event-RAW paired dataset that addresses a gap in the field of ISP and facilitates further research on event-guided ISP.\\n2. The trainable ISP methods are evaluated on this event-RAW paired dataset.\", \"weaknesses\": \"This paper, as a work primarily contributing a dataset, provides insufficient information about the dataset itself and focuses heavily on performance comparisons with existing methods, but the proposed method fails to outperform the current ones across all metrics. The overall logic and structure of the paper need improvement and refinement.\\n1. As a dataset-centric paper, it presents too few sample images, making it difficult for readers to intuitively grasp the specific content of the dataset and the differences in the scenes, camera shots, and exposure modes mentioned.\\n2. The introduction to the specific content of the dataset is insufficient. It is recommended to list statistical information about the dataset regarding the mentioned types and present them in tables.\\n3. The ISP presented in the paper is used to handle demosaicing, white balance, denoising, and color space transformations, but its capabilities in handling these tasks are not reflected in the performance analysis results.\\n4. Figures 6 and 7 have identical legends and can be combined into one figure, or one could be replaced with the visualization results of indoor data samples.\\n5. The experimental section only presents quantitative analysis results for outdoor and indoor data separately, failing to reflect the overall performance of the dataset.\\n6. The presented integrated improvement method does not enhance processing performance, and the paper does not provide sufficient explanation or analysis of this issue.\\n7. The experimental visual results are too few, and the layout is too compact, making it difficult for readers to intuitively appreciate the value of the dataset.\\n8. It is recommended to analyze the performance of different types of samples within the dataset using the same method in future work, rather than focusing primarily on comparing the performance of multiple ISP methods.\", \"questions\": \"1. The ISP method presented in this paper demonstrates varying effects across different tasks. It would be beneficial to analyze whether these effects are reflected in the proposed dataset.\\n2. The authors should explain and analyze the reasons why the proposed method fails to surpass existing methods in performance, as well as highlight its advantages.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your explanations. I believe the primary contribution of this paper lies in the dataset. Therefore, I think it is essential to release the dataset and accompanying code to ensure clarity and accessibility for everyone. As such, I am inclined to maintain my original rating for now.\"}", "{\"title\": \"Looking forward to further discussions with reviewer vrds.\", \"comment\": \"Thank you for your thoughtful review. In the revision paper, we have incorporated additional background explanations to address your concerns. We hope these revisions provide clarity and resolve the issues you raised. 
We look forward to engaging in further discussions with you.\"}", "{\"title\": \"Grateful Acknowledgment and Commitment to Advancing RGB-Event ISP\", \"comment\": \"Thank you for your kind and encouraging feedback!\\nWe deeply appreciate your understanding and recognition of our efforts in advancing this work. Your acknowledgment inspires us to continue contribute to the development of the RGB-Event ISP field. \\n\\nThank you again for maintaining your positive score, and we sincerely wish you all the best as well!\"}", "{\"comment\": \"Dear Reviewer q8r6,\\n\\nThank you for your insightful review and for highlighting the significant contributions of our work. We are especially grateful for your recognition of:\\n(1) Dataset Contribution.\\n(2)Thoroughly assessing trainable ISP methods on the dataset, showcasing its potential.\\nBelow, we address your concerns in detail, incorporating your valuable suggestions to improve the clarity and quality of our work.\\n\\n### Q.1. Insufficient Dataset Information\\nWe appreciate your suggestion. To address this, we have:\\n- (1) Added a detailed description of the dataset in Appendix Section B, including information on the types of scenes, exposure modes, and camera settings used.\\n- (2) Included more visual samples to better illustrate the dataset's diversity and highlight specific scenarios of interest.\\nWe hope these updates will enhance readers' understanding of the dataset's content and significance.\\n\\n\\n### Q.2. Statistical Dataset Information\\nThanks for your suggestions. We have included a statistical summary table in the supplementary materials. This table provides a clear breakdown of the dataset by scene type, exposure mode, and camera-specific settings, offering a more comprehensive understanding of its structure and diversity.\\n\\n\\n### Q.3.Performance Analysis of ISP Tasks\\nThanks for your suggestions.\\nWe would like to clarify that the controllable ISP framework used in this work relies on ColorChecker-based evaluations to validate the accuracy of the generated RGB images, as shown in Figure 5.\\nWhile intermediate processes such as demosaicing and white balancing are difficult to evaluate quantitatively due to the absence of ground truth, the final evaluation results provide an overall assessment of the pipeline's performance.\\n\\nFor the learning-based methods, we have included comprehensive evaluations using objective metrics (e.g., PSNR, SSIM, L1) as well as non-reference metrics (e.g., NIQE, PI), which collectively demonstrate the effectiveness of our dataset in supporting ISP research.\\n\\n\\n### Q.4. Figures 6 and 7\\nThank you for pointing this out. In the revised manuscript:\\nFigures 6 and 7 have been combined to reduce redundancy.\\nIndoor data visualization has been added to Figure 8, providing a more diverse set of examples.\\n\\n### Q.5. Comprehensive Dataset Performance Analysis\\nWe have updated the supplementary materials to include a combined evaluation of indoor and outdoor data. This comprehensive analysis provides a more holistic view of the dataset\\u2019s overall performance, addressing your concern.\\n\\n### Q.6. Integrated Improvement Method\\nThank you for highlighting this. The event-guided ISP baseline introduced in our work is intended as a simple starting point to demonstrate the potential of integrating event data into ISP tasks. While this approach has shown performance gains in outdoor scenarios, it faces challenges in indoor conditions due to factors such as artificial lighting flicker. 
These findings, along with their implications, have been discussed in greater detail in the revised manuscript and supplementary materials.\\n\\n### Q.7. Experimental Visual Results and Layout\\nThank you for highlighting this. We have significantly expanded the visual results in the supplementary materials. This includes:\\nAdding more examples, particularly for low-light and fast-motion scenarios, as shown in Figure 10.\\nAdjusting the layout in the main paper to ensure that visual results are presented more clearly and intuitively.\\n\\n### Q.8. Sample-Specific Analysis\\nWe appreciate this suggestion. In response, we have conducted additional analyses in the supplementary materials to evaluate the performance of different dataset subsets. This provides deeper insights into the dataset's characteristics and potential applications.\\n\\n### Q.9. Task-Specific Effects in the Dataset\\nThank you for raising this question. In the supplementary materials (Section E), we have added analyses discussing task-specific effects, such as the impact of lighting flicker in indoor scenes. These findings provide valuable insights into the dataset's strengths and limitations in supporting diverse ISP tasks.\\n\\n### Q.10. Comparison with Existing Methods\\nWe acknowledge that the proposed method is a baseline approach designed to demonstrate the utility of the dataset rather than achieve state-of-the-art performance. The method involves a simple integration of event data into UNet and serves as a foundation for future research. We have included additional discussions in the supplementary materials analyzing its limitations, such as challenges in modeling global context, and its potential for further improvement.\\n\\nOnce again, we thank you for your constructive feedback. Your comments have been instrumental in refining the quality and clarity of our work. We hope our responses address your concerns comprehensively and look forward to your further insights.\\n\\nSincerely,\"}", "{\"title\": \"Looking forward to further discussions with reviewer fVak.\", \"comment\": \"Thanks for your time and attention. In the revision paper, we have addressed the dataset scale, the data alignment methodology, and provided additional background information. We hope you have had the opportunity to review the revised version. We are eager to discuss any further questions or feedback you may have.\"}" ] }
BqbeJzN9Ie
BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis
[ "Lutao Jiang", "Lin Wang" ]
Text-to-3D synthesis has recently seen intriguing advances by combining text-to-image models with 3D representation methods, e.g., Gaussian Splatting (GS), via Score Distillation Sampling (SDS). However, a hurdle of existing methods is their low efficiency: per-prompt optimization is required for each single 3D object. A paradigm shift from per-prompt optimization to one-stage generation for any unseen text prompt is therefore imperative, yet it remains challenging. One obstacle is how to directly generate the set of millions of 3D Gaussians needed to represent a 3D object. This paper presents BrightDreamer, an end-to-end single-stage approach that achieves generalizable and fast (77 ms) text-to-3D generation. Our key idea is to formulate the generation process as estimating the 3D deformation from an anchor shape with predefined positions. For this, we first propose a Text-guided Shape Deformation (TSD) network to predict the deformed shape and its new positions, used as the centers (one attribute) of the 3D Gaussians. To estimate the other four attributes (i.e., scaling, rotation, opacity, and SH coefficients), we then design a novel Text-guided Triplane Generator (TTG) to generate a triplane representation for a 3D object. The center of each Gaussian enables us to transform the triplane feature into the four attributes. The generated 3D Gaussians can finally be rendered at 705 frames per second. Extensive experiments demonstrate the superiority of our method over existing methods. Also, BrightDreamer possesses a strong semantic understanding capability even for complex text prompts.
[ "Text-to-3D Generation" ]
https://openreview.net/pdf?id=BqbeJzN9Ie
https://openreview.net/forum?id=BqbeJzN9Ie
ICLR.cc/2025/Conference
2025
{ "note_id": [ "i6vCyrUkkD", "a3xX2piTIR", "B65bjE3sfw", "6iMef9mV2s", "4WifnXYtyG" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731486517102, 1730537126904, 1730195481294, 1730672576090, 1730624445484 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6100/Authors" ], [ "ICLR.cc/2025/Conference/Submission6100/Reviewer_DLaw" ], [ "ICLR.cc/2025/Conference/Submission6100/Reviewer_UAaa" ], [ "ICLR.cc/2025/Conference/Submission6100/Reviewer_cs7N" ], [ "ICLR.cc/2025/Conference/Submission6100/Reviewer_GZMt" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper presents a generic text-to-3D generation method. Instead of performing score distillation to optimize a single 3D representation one at a time, the proposed method aims for a single-stage solution, where a text prompt is used to generate a set of deformations from predefined anchored points as well as three feature planes. The generated deformed points are then used to query the feature planes to construct 3D Gaussians, where the positions of the Gaussians are at the deformed points themselves, and the remaining four attributes of the Gaussians (scale, rotation, opacity, spherical harmonic coefficients) are established from the feature planes. The results show that the proposed method can generate 3D objects from text prompts in just ~77 ms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper addresses an important problem in text-to-3D generation, where the current speed to generate a 3D object using optimization-based methods are relatively low, taking minutes on average to generate an object. By making the generation process entirely feed forward, the method directly samples a 3D object in milliseconds.\\n\\nThe paper is quite well written. I appreciate Table 1 which clearly demonstrates the differences of the method with previous work.\", \"weaknesses\": [\"1) The technical content suffers from two notable issues.\", \"The training seems to be done for each class of object, e.g., vehicle, animal, etc. For per-prompt optimization methods, the benefit is that no such class definition is required. It remains unclear whether why such categorization is required, or minimum how many samples per class is required for training. Additionally, some data must be prepared for training.\", \"While the inference speed is at milliseconds, there seems to have some fidelity-speed tradeoff. The generated 3D model quality is not as good as per-prompt optimization case.\", \"From the above points, my guess is that training a generative framework like the proposed method is actually a trade-off. It leads to some limitations in the generative process (limited to object categories) as well as some degradations in object quality. This makes it tricky to handle arbitrary prompts like in optimization case.\", \"2) The experiment results appear to be quite limited. Some results are not particularly convincing.\", \"The qualitative results appear to be more blurry than optimization-based text-to-3D Gaussian methods.\", \"The 3D models also do not have a lot of details, and the colors appear to be very saturated.\", \"The generalization of the model (to unseen prompts) are not well demonstrated. 
In terms of diversity and unseen objects it seems the generalization is worse than an image diffusion model as the current training scale is way smaller.\", \"Fig. 9: I do not see clear improvement between a, b, c.\", \"The training is quite expensive, with 30+ hours on 8 x 80GB GPUs.\", \"3) The paper writing can be further improved by addressing the following issues:\", \"Some newer works for per-prompt optimization methods can be cited. Currently some 2024 methods are missing.\", \"[A] Taming Mode Collapse in Score Distillation for Text-to-3D Generation, CVPR 2024\", \"[B] DiverseDream: Diverse Text-to-3D Synthesis with Augmented Text Embedding, ECCV 2024\", \"The discussion of network blocks in 3.2 is quite long. Some of the details are not directly relevant to 3D generation and can be moved to appendix (Spatial Transformer Block, ResConv, UpSample).\", \"Reduce the use of bold, italic text in the writing. Too much emphasis diluted the emphasis.\", \"Some paragraphs are particularly dense. Some spaces between paragraphs should be reserved.\"], \"questions\": \"1. Could the authors provide a comment on the quality-speed trade-off? How can this issue be potentially addressed?\\n\\n2. Could the authors comment on the diversity and the generalization of the model? \\n\\n3. Could the authors explain the choice of training data? How are the current training prompts selected? What happens if the prompts are just randomly sampled?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"All good.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel amortized Text-to-3D Gaussian generation method, enabling the creation of 3D Gaussian structures from text prompts in a single forward pass. The proposed approach incorporates two key innovations:\\n\\n- Anchor-based Deformation:\\nTraditional 3D Gaussian representations require numerous Gaussian points to model complex scenes, making direct generation challenging. To address this, the authors introduce a strategy where a small number of anchor points are fixed and subsequently deformed through a text-conditioned transformer. These deformed points act as the central points for generating the 3D Gaussian structures.\\n- Text-guided Triplane Generator (TTG):\\nThe paper further proposes the use of a Text-guided Triplane Generator (TTG) to create a triplane structure. The anchor points are then utilized to query Gaussian features from the generated triplane.\\n\\nTo train the models, the Score Distillation Sampling (SDS) method is applied to rendered images, following the practice used in previous amortized Text-to-3D models. This ensures effective training of the networks. The authors conducted extensive experiments, demonstrating the effectiveness of their approach compared to traditional train-from-scratch Text-to-3D Gaussian methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper consist of following strengths:\\n\\n- **Introduction of 3D Gaussians to Amortized Text-to-3D Generation:**\\n\\nThis paper is the first to apply 3D Gaussians to the amortized Text-to-3D generation task, demonstrating higher efficiency compared to traditional methods that require training from scratch.\\n\\n- **Reframing the 3D Gaussian Generation Problem:**\\n\\nThe paper transforms the challenge of generating 3D Gaussians into a deformation problem. 
This approach addresses the difficulty of generating complex objects that would typically require millions of Gaussian points, making direct generation infeasible.\\n\\n- **Proposal of a Triplane Generator for Spatial Features:**\\n\\nA novel triplane generator is introduced to produce spatial features, which can be decoded into Gaussian attributes. The method is further refined with several specific design improvements to enhance performance.\\n\\n- **Experimental Validation of Efficiency:**\\n\\nExtensive experiments validate the efficiency of the proposed approach compared to original Text-to-3D Gaussian generation methods.\", \"weaknesses\": \"Though this method achieves promising results through its proposed designs, the paper presents several weaknesses that need to be addressed:\\n\\n**1.Novelty:**\\n\\nWhile the paper is the first to reformulate 3D Gaussian generation as a deformation problem, this concept is not new in the field of explicit 3D generation. Similar deformation-based approaches have been explored in previous works, such as:\\n\\n- Wang, Nanyang, et al. \\\"Pixel2mesh: Generating 3d mesh models from single rgb images.\\\" Proceedings of the European conference on computer vision (ECCV). 2018.\\n- Wen, Chao, et al. \\\"Pixel2mesh++: Multi-view 3d mesh generation via deformation.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2019.\\n\\nSince 3D Gaussians represent a novel but explicit 3D format, the deformation approach is straightforward to apply, suggesting that this work is more of an incremental adaptation than a fundamentally novel idea.\\n\\nIt is accessable to bring an old idea to a novel field. However, another critical issue lies in the representation ability. While Gaussian splatting typically involves millions of points to accurately capture scene details, this paper proposes using a fixed, smaller number of anchor points. The lack of discussion on how many anchor points are required for convincing results raises concerns about the expressiveness of the generated models. If the number is reduced too much, the method may struggle to represent complex scenes effectively. The authors should address this limitation by providing a more thorough analysis and justification for their approach.\\n\\n**2.Network Structure:**\\n\\nThe network architecture closely resembles the one proposed by Zou et al. in:\\n\\n- Zou, Zi-Xin, et al. \\\"Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers.\\\" Proceedings of the IEEE/CVF CVPR. 2024.\\n\\nBoth networks feature a triplane decoder and a point-based querier. Although the authors claim to introduce improvements, the paper does not provide an ablation study to evaluate the impact of these modifications. A more in-depth discussion and comprehensive ablation studies are necessary to demonstrate the value of these improvements.\\n\\n**3.Experiments:**\\n\\nThe experimental results focus only on comparisons with text-to-3D Gaussian generation methods. However, since the paper's task aligns more closely with amortized Text-to-3D generation, it should also be compared against those methods to ensure a fair evaluation.\\n\\nAdditionally, as mentioned earlier, an ablation study is needed to assess the effectiveness of the network design choices. 
This would clarify the role of individual components and provide insights into which modifications contribute to the improved performance.\\n\\n**4.Writing:**\\n\\nThe paper introduces new terms, such as \\\"Triplane Generator Division\\\" and \\\"Coordinate Shortcut,\\\" in ablation study, but fails to explain them clearly. As a result, it becomes difficult to understand the exact meaning and purpose of these concepts. Providing detailed descriptions of these definitions would improve clarity and make the paper more accessible to readers.\", \"questions\": \"As noted in the Weaknesses section, I believe the authors should clarify the motivation behind their approach and the network structure similarty with triplane meet gaussians. Therefore, I am giving a rating of 3. If the authors address these points, I would be open to raising my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents an amortized text-to-3D Gaussian generator trained with SDS loss. The framework consists of two modules: TSD, responsible for center deformation, and TTG, which generates other Gaussian attributes. The model achieves fast 3D Gaussian generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe model achieves fast 3D Gaussian generation, requiring only 77 ms.\\n2.\\tThe paper finds that re-parameterizing the Gaussian\\u2019s scaling helps stabilize optimization.\\n3.\\tThe architecture and model design are reasonable.\", \"weaknesses\": \"1.\\tIn Table 1, the timing for Latte3D should be less than 1 second. Additionally, the method is not \\u20183D diffusion\\u2019 but a model trained with amortization. ATT3D is missing from Table 1.\\n2.\\tFor the baseline comparison in Figures 7 and 8, it would be better to include SDS with the MVdream prior, which is much more robust and produce higher-quality shapes.\\n3.\\tThe experimental results are not fully convincing. Specifically: 1) the baseline comparison does not include the improved version of SDS methods with MBdream; 2) in Figure 9, only one pair of results is presented to demonstrate the advantage of the complete design. For this ablation study, running an FID or CLIP score evaluation on a holdout set would better illustrate the complete design\\u2019s effectiveness compared to other settings.\", \"questions\": \"1.\\tHow many iterations are needed to train this model?\\n2.\\tWhat are the parameter counts for the TSD and TTG modules, respectively?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a real-time text-to-3D synthesis method called BrightDreamer, where a Text-guided Shape Deformation (TSD) network and a Text-guided Triplane Generator (TTG) are proposed to predict the attributes of a 3DGS representation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper proposes a real-time text-to-3D synthesis method based on 3DGS, which does not require 3D data for training.\\n2. The proposed Text-guided Shape Deformation (TSD) and Text-guided Triplane Generator (TTG) are novel and provide a certain level of technical contribution.\", \"weaknesses\": \"1. The quality of the generated content by the proposed method is not high enough. 
The generated texture looks over-smoothed and lacks realism, e.g., the rabbit in Figure 8. The generated geometry is incomplete, e.g., the car in Figure 9(a).\\n2. The comparison experiments are not fair enough. (a) The methods compared in Section 4.4 are insufficient. Some of the latest 3D diffusion methods should also be included in the comparison, e.g., Latte3D [1] and LGM [2]. (b) The author needs to ensure the quality of the generated content. For instance, although ProlificDreamer may have issues with multiple facets, the generated content in Figures 7 and 8 is clearly below its typical standard.\\n3. The summary of existing methods in Table 1 is not comprehensive. For example, there are GAN-based generators that also support text input [3-9].\\n\\n[1] LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis. [2] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. [3] Text2shape: Generating shapes from natural language by learning joint embeddings. [4] CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation [5] Shapecrafter: A recursive text-conditioned 3d shape generation model. [6] Towards implicit text-guided 3d shape generation. [7] Autosdf: Shape priors for 3d completion, reconstruction and generation. [8] CLIP-Sculptor: Zero-Shot Generation of High-Fidelity and Diverse Shapes from Natural Language [9] Hierarchical Text-Conditional Image Generation with CLIP Latents.\", \"questions\": \"See the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Bq3fEAGXUL
Realistic Evaluation of Model Merging for Compositional Generalization
[ "Derek Tam", "Yash Kant", "Brian Lester", "Igor Gilitschenski", "Colin Raffel" ]
Merging has become a widespread way to cheaply combine individual models into a single model that inherits their capabilities and attains better performance. This popularity has spurred rapid development of many new merging methods, which are typically validated in disparate experimental settings and frequently differ in the assumptions made about model architecture, data availability, and computational budget. In this work, we characterize the relative merits of different merging methods by evaluating them in a shared experimental setting and precisely identifying the practical requirements of each method. Specifically, our setting focuses on using merging for $\textit{compositional generalization}$ of capabilities in image classification, image generation, and natural language processing. Additionally, we measure the computational costs of different merging methods as well as how they perform when scaling the number of models being merged. Taken together, our results clarify the state of the field of model merging and provide a comprehensive and rigorous experimental setup to test new methods.
[ "model merging", "realistic evaluation", "compositional generalization" ]
Reject
https://openreview.net/pdf?id=Bq3fEAGXUL
https://openreview.net/forum?id=Bq3fEAGXUL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tHbXFeQIkD", "piugIQzvaM", "lFNvOzQS82", "dNbx7NGi8V", "URB5DWdFF7", "QpGLepnK5t", "O5dYBOJBgz", "NK2eInMJVP", "KPv4feBVUS", "8v5Fwf8AC0", "8LFR3kwDce", "7Mv1FBmgy1", "6khBpja65K", "4K1sSslfQX", "1hYE4SgMHw" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review" ], "note_created": [ 1729796415069, 1732053460947, 1732053337010, 1732053514498, 1732053116604, 1730435179386, 1732053489687, 1737523887247, 1732053367893, 1732225635028, 1732125866676, 1732053285563, 1733203958573, 1730440015559, 1734386456356 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8094/Reviewer_Djjc" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Submission8094/Reviewer_kcSZ" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Submission8094/Reviewer_kcSZ" ], [ "ICLR.cc/2025/Conference/Submission8094/Reviewer_Djjc" ], [ "ICLR.cc/2025/Conference/Submission8094/Authors" ], [ "ICLR.cc/2025/Conference/Submission8094/Reviewer_EPkg" ], [ "ICLR.cc/2025/Conference/Submission8094/Reviewer_EPkg" ], [ "ICLR.cc/2025/Conference/Submission8094/Area_Chair_Dxk2" ] ], "structured_content_str": [ "{\"summary\": \"In model merging, a pretrained model's weights are copied $K$ times, each copy is finetuned on a separate task, and the parameters of the $K$ constituent models are merged together. Model merging is popular among open-weight model enthusiasts, with some suggesting that the merged model's performance is comparable to a multitask model on the $K$ in-domain finetuning tasks, while also offering better out-of-domain performance on held-out tasks than the naively finetuned multitask model [1, 2]. As a result, many model merging methods have been proposed.\\n\\nIn this work, the authors point out a lack of systematic evaluation of such merging methods. They address this gap by evaluating 8 merging methods on 3 architectures / tasks, comparing in-domain and out-of-domain performance, computational costs, hyperparameter sensitivity, and robustness to increasing number of constituent models $K$.\", \"references\": \"[3] Koh, P. W., Sagawa, S., Marklund, H., Xie, S. M., Zhang, M., Balsubramani, A., ... & Liang, P. (2021, July). Wilds: A benchmark of in-the-wild distribution shifts. In International conference on machine learning (pp. 5637-5664). PMLR.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"- Timely problem to study: model merging is both a scientifically interesting and currently useful practice, and a thorough empirical evaluation is very helpful. In its best form, I see this paper as being positioned to make contributions by answering two main questions, both of which I believe could really be novel and substantive contributions.\\n\\n1) What is the relative ordering of merging methods? 
If a practitioner is interested in merging X architecture on Y task with Z budget, which method should they select?\n2) Under which conditions does merging outperform / underperform multitask learning / the pretrained model?\n\n- Holistic approach: this paper considers not just accuracy gains, but also compute / data access requirements. Figure 3, for example, is very insightful in pointing out that merging performance correlates with compute. Similarly, the scaling experiments in 4.5 (Figure 5) and the hyperparameter experiments in 4.4 (Figure 4) were well done.\", \"weaknesses\": [\"My main concerns revolve around the experimental design in Section 4.1. The best version of this work would help a user understand when to use model merging and which method to use, given some features about their task (e.g. architecture, modality, finetuning strategy, compute budget). This also seems to be the authors' ambition (lines 43-48). However, the experimental design makes it difficult to draw conclusions about these questions, and I'm concerned that some of the authors' conclusions have not accounted for confounders.\", \"There are three experimental settings in the evaluation: (image classification, DomainNet, CLIP, full finetuning), (image generation, DomainNet, StableDiffusion, low-rank finetuning), and (assorted NLP tasks, assorted languages, T5, full finetuning). In Figures 2-4, we draw quite different conclusions about the relative strength of merging methods / when merging outperforms multitask learning, depending on the setting.\", \"On lines 71, 310, 484, and 487, the authors alternate attributing these differences to the modality and task: e.g. in line 487, \\\"cross-lingual generalization\\\" behaves differently than \\\"cross-domain generalization\\\"; in line 71, natural language processing behaves differently than image classification.\", \"Unfortunately, I'm not convinced that either of these conclusions is the right one to draw, since the three experimental settings conflate model architecture, data modality / task, and finetuning strategy.\", \"As an example of why these setup issues matter, I find it difficult to interpret Figure 2 (right), where many methods underperform the multitask baseline for in-domain performance. Is the conclusion that T5 merges less well, that models merge less well on the language modality, or that the specific cross-lingual benchmark the authors set up leads to finetuned models which merge less well?\", \"The authors attribute the negative slope in in- and out-of-domain correlation to modality in line 70 and \\\"cross-domain\\\" vs. \\\"cross-lingual\\\" generalization in lines 306-319, but this seems to ignore a potential dependence on model architecture / pretrained parameters. Further, I'm not sure \\\"cross-domain\\\" and \\\"cross-lingual\\\" are quite the right characterizations here; surely there are some domain generalization settings where merging will also have weak generalization performance, and one could argue that cross-lingual generalization is simply another domain generalization problem. Can you better characterize when we expect merging to enable generalization vs. not?\", \"In Figure 2 (middle), many methods outperform the multitask model in both in-domain and out-of-domain performance: is this because image generation as a modality is more suited for merging, or because it is better to merge LoRAs instead of full parameters? 
Why does the multitask model have higher out-of-domain performance than the pretrained model here, unlike in the other two settings?\", \"I believe most of these issues are corrected by (1) testing more than one architecture per modality and (2) testing more than one dataset per modality. The authors might look to other domain generalization benchmarks to gather more tasks per modality, e.g. [3].\", \"On a separate note, the performances of merging methods likely have some dependence on the specific weights the constituent models arrive at after finetuning. Ideally, Figure 2 should include error bars accounting for randomness of the finetuning process: i.e., we should finetune multiple replicate models on each of the $K$ finetuning tasks, and then report merged model performance over randomly chosen replicates for each task. However, Appendix B seems to suggest that only one checkpoint was used per task, which raises some questions for me about whether results generalize across optimization randomness.\", \"To make a generalizable contribution as promised on lines 47-48 in the introduction, I would need to see the reasons behind the mixed results carefully dissected. While interesting, the current results seem specific to the settings evaluated, making it difficult to draw precise and well-justified insights. The remaining contributions (hyperparameter sensitivity, computational requirements, scaling) are useful but perhaps not substantive enough for a full ICLR submission.\"], \"questions\": \"I'm open to discussing the questions raised in the Weaknesses box and will increase my score if appropriate. Additionally, line 275 mentions analyzing how results depend on model size: I couldn't find this in the main text, but I'd be interested in this discussion.\\n\\nLastly, I had had two questions that are not in-scope for the submission, but I would love to see explored in a final camera-ready version: \\n\\n1. One missing citation (https://arxiv.org/abs/2203.05482) proposes a \\\"greedy soup,\\\" where models are merged only if they add to the in-domain performance. I'd be curious how this baseline performs in your setting --- maybe just a little bit better than \\\"Average\\\"?\\n\\n2. It would be interesting to know how merging performance scales not just with the number of models, but also with the \\\"degree\\\" to which constituent models differ. For example, if merging 5 models, does performance change if I finetune each model for 500 steps vs. 5000 steps?\", \"flagging_some_typos_to_fix_for_a_final_version\": [\"line 54: \\\"being a more challenging [setting]\\\"\", \"line 55: \\\"merging: [a]ssuming\\\"\", \"line 96: \\\"\\\\theta_i [for] i \\\\in\\\"\", \"line 193: \\\"In this work, [we] evaluate\\\"\", \"line 194: \\\"create [a] multitask model\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response Part 1\", \"comment\": \"Thanks for the detailed review, your questions have highlighted a lot of interesting future research directions and some places where additional ablations and explanations would be helpful.\\n\\nWe agree that a main goal of the paper is to help a user understand when they should reach for model merging as a solution for their problem, however, we disagree that the restricted scope of our benchmark is harmful to this goal. 
Rather, we think that our decisions, which were made in an attempt to meet the users where they are, make our study more useful. In this work, we aim to compare merging methods in realistic settings while the majority of your questions and concerns are about quantifying the differences when merging in different settings.\n\nFor example, your suggestion of using more than one architecture per modality is a good one from the perspective that having more data points helps draw more robust conclusions, and the differences in \u201cmergeability\u201d of different model types are an interesting question, but the models and architectures we selected are the ones that are currently being used in each setting. Transformers are the de facto standard for NLP tasks, diffusion models are the architecture of choice for image generation, and vision transformers are the state-of-the-art architecture for image classification. \\n\\nAdditionally, the specific models we used are ones used in previous works on model merging. Those results may address the issues you raise. For example, you ask, \u201cIs T5 just poor at merging or is it actually the difficulty of the task?\u201d Previous work, such as TIES [1], merges monolingual T5 models and achieves strong multitask performance. This supports our suggestion that the difficulty is in the setting itself as opposed to issues with the model itself. Similarly, you mention the same thing again when you say: \u201cThe authors attribute the negative slope in in- and out-of-domain correlation to modality in line 70 and \\\"cross-domain\\\" vs. \\\"cross-lingual\\\" generalization in lines 306-319, but this seems to ignore a potential dependence on model architecture / pretrained parameters.\u201d Previous work having success with merging T5 models suggests that the new \u201ccross-lingual\u201d differences are where the difficulty comes from. As a follow-on point, you note that \u201cI'm not sure \\\"cross-domain\\\" and \\\"cross-lingual\\\" are quite the right characterizations here; surely there are some domain generalization settings where merging will also have weak generalization performance\u201d. We agree that there could be cross-domain pairs that could be more challenging for merging. Our conjecture that cross-lingual seems harder on average doesn\u2019t preclude the existence of hard cross-domain transfers in vision settings, but we conjecture that cross-lingual generalization could be considered an especially challenging variant of \\\"cross-domain\\\" generalization (though whether a shared multilingual representation is possible and attainable is a longstanding debate that we can't hope to settle in our work). You also suggest that difficulties could be a quirk of the \u201cspecific cross-lingual benchmark the authors set up.\u201d We would argue that the size of this NLP benchmark provides credence to our claims: we use 5 different datasets representing 5 different tasks with an average of 4 languages per dataset. This is much larger than previous work such as [2] and [3], which use just 1 and 3 multi-lingual datasets, respectively. Overall, we agree with your point that it would be valuable to confirm that within-language cross-task generalization can be attained by merging and will add an ablation experiment of merging T5 models across domains, but within the same language, to address this. 
\\n\\nYour concerns about the size of the NLP dataset and your suggestion of \\u201ctesting more than one dataset per modality\\u201d dovetails nicely with the second main point of the paper\\u2014namely, that previous work focuses on held-in performance, but going forward merging methods should explicitly consider compositional generalization. Prior to our work, there has been a lack of benchmarks for testing compositional generalization abilities, and we hope that our work spurs the development of even more.\\n\\nThis second point of the paper is closely related to your question, \\u201cCan you better characterize when we expect merging to enable generalization vs. not?\\u201d It is still an open question and it currently isn\\u2019t clear when merged models will be effective at compositional generalization. That\\u2019s why we argue that merging methods should be explicitly evaluated on their compositional generalization going forward.\"}", "{\"title\": \"Author Response Part 2\", \"comment\": \"> However, there are no clear trends in method dominance that generalize across the 3 task settings. This significantly reduces the impact of the paper, since it reduces the generality of the findings: we still don\\u2019t know which method is best in terms of generalization to compositionally held-out tasks, as the results differ across 3 settings. It also makes me doubt whether the results would generalize to the same exact task settings with different datasets and backbone models.\\nOn this note, ultimately, I think the value of this work will depend on the quality of the code base and whether it serves as an easy-to-use public benchmark where others can easily plug in new merging methods for comparison. I of course cannot evaluate whether this is the case, and time will tell whether it becomes a useful benchmark in the field.\\n\\nWe agree that a dominant method across all 3 settings would have been impressive, but on the contrary we think the lack of a dominant method actually expands the impact of our paper because it gives credence to the point that merging methods must either be developed with a specific target application *or* be shown to dominate across settings (which no past merging method has been able to do). In addition, the insights in our paper could still be informative to practitioners, because if one is working in some particular application, one only really cares about what works best in that application. Again, we agree a dominant method would have been useful for practitioners working in applications we don't study (since such a method could plausibly be assumed to work best in unseen settings, too), but our paper nevertheless provides the actionable and sobering insight that re-evaluation of merging methods may need to be done when attacking a new application setting for merging.\\n\\n> Minor: several typos, missing words, grammar mistakes peppered throughout. For instance, the sentence in lines 193-195 is missing words (like a subject) and has singular/plural issues. Most of the text reads fine, but please proofread again to correct the language mistakes so that the text reads well everywhere.\\n\\nThanks for pointing out specific cases! We\\u2019ll make sure they are fixed in our revision.\\n\\n> In Figure 2, are the horizontal dotted lines the average performance of models fine-tuned on that \\u201cheld-out\\u201d tasks? 
In other words, is there no difference between the horizontal and vertical dotted lines other than whether the task is considered \\u201cheld-out\\u201d with respect to the merged models?\\n\\nCorrect, the horizontal dotted lines illustrate the average performance of the models fine-tuned on the held-out tasks and the vertical lines are the average performance of models fine-tuned on the held-in tasks. The dotted lines provide the unattainable upper-bound performance of using a single specialized model for each task. We have mentioned this in the caption but if there's a clearer way to present it, please let us know.\\n\\n> Why not include the multi-task trained model as one of the constituent models being merged?\\n\\nMuch work on merging does not assume simultaneous access to the constituent model's fine-tuning datasets and the multitask model therefore is an unattainable baseline. One motivation for this setting is the reuse of the large number of fine-tuned models that already exist in repositories like the Huggingface Hub, whose datasets are not necessarily available. Thus, we omit it from the mixture. Including it is an interesting idea for future work, but we consider it out of scope for our work.\\n\\n> Lines 301-303 state: \\u201cWe note that Fisher Merging tends to generalize than RegMean and MaTS despite all three of methods implicitly minimizing the same objective (Tam et al., 2023).\\u201d Looking at the plot, this only seems to be true for the NLP tasks, but not image classification or image generation. Am I misinterpreting something? If not, this statement should be amended.\\n\\nThanks for pointing this out, we\\u2019ll make this more specific in our revision.\"}", "{\"title\": \"Author Response Part 3\", \"comment\": \"Your question about how the randomness in the training of the constituent models makes a lot of sense and is in-line with secondary experiments exploring things like hyperparameter robustness. However, we expect to find few differences between the final merged models which only vary due to randomness in training for a few reasons. In our work, the trends in held-in performance\\u2014between merging methods and within each modality\\u2014are consistent with previous works. As we matched the settings of previous work but trained our own models, we expect these trends\\u2014if not the exact numbers\\u2014will remain despite optimization randomness. Additionally, in the standard merging setup, each constituent model is fine-tuned from a shared initialization (the pre-trained model). Given the stability of fine-tuning and the possibility of merging in its own right, we expect to observe a very small effect of the randomness of training on the final merged model. Due to the computational resources constraints, we may not be able to run this experiment, but we do think it fits in nicely with our work. So, we will try to see if it can be finished in time for a camera-ready version of the paper.\\n\\nThanks for the question about the \\u201cgreedy soup\\u201d method. Their finding that selecting which tasks to merge can result in increases in final performance complements our finding that blindly scaling the number of tasks included in the merge leads to reduced held-in performance. We have highlighted this connection in our revision. 
However, we would note that applying \\\"greedy soups\\\" for compositional generalization is infeasible due to the lack of access to generalization task data.\\n\\n[1] Yadav, Prateek, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. \\u201cTIES-Merging: Resolving Interference When Merging Models.\\u201d arXiv, October 27, 2023. https://doi.org/10.48550/arXiv.2306.01708.\\n\\n[2] Pfeiffer, Jonas, Ivan Vuli\\u0107, Iryna Gurevych, and Sebastian Ruder. \\u201cMAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer.\\u201d In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 7654\\u201373. Online: Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.emnlp-main.617.\\n\\n[3] Vu, Tu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, and Noah Constant. \\u201cOvercoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation,\\u201d May 25, 2022. https://doi.org/10.48550/arXiv.2205.12647.\\n\\n[4] Vu, Tu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer. \\u201cExploring and Predicting Transferability across NLP Tasks.\\u201d In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 7882\\u20137926. Online: Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.emnlp-main.635.\\n\\n[5] Vu, Tu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. \\u201cSPoT: Better Frozen Model Adaptation through Soft Prompt Transfer.\\u201d In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 5039\\u201359. Dublin, Ireland: Association for Computational Linguistics, 2022. https://aclanthology.org/2022.acl-long.346.\\n\\n[6] Yadav, Prateek, Colin Raffel, Mohammed Muqeeth, Lucas Caccia, Haokun Liu, Tianlong Chen, Mohit Bansal, Leshem Choshen, and Alessandro Sordoni. \\u201cA Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning.\\u201d arXiv, August 13, 2024. https://doi.org/10.48550/arXiv.2408.07057.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for the review. Your questions helped find places where we can lay out our intentions and takeaways more explicitly. We\\u2019ve answered your questions directly here and have updated our paper accordingly.\\n\\n> The messages this paper aims to deliver to the community are not super clear to me. Do we already believe model merging is the proper way toward a multitasking model and do the authors suggest it is a promising approach or not?\\n\\nOur findings confirm that model merging is a promising approach towards building a multitask model. We find that model merging performance can approach the performance of multitask models on held-in tasks in some settings. Additionally and importantly, we find that model merging can achieve better compositional generalization than multitask training. However, the absolute performance of compositional generalization is still quite poor compared to the best-possible performance. These trends indicate that model merging is a promising direction for developing performant multitask models but that more work is required to realize merging's full potential.\\n\\n> Title: I'm not super convinced that \\\"compositional generalization\\\" is the \\\"realistic\\\" goal for model merging. 
Many times, model merging might not be for the emerging capability of several different tasks, but just to improve on the same held-in tasks such as the original motivation of Model Soup etc.\\n\\nThanks for pointing out an unintended reading of our title. We don't mean to imply that compositional generalization is the *only* realistic application of merging. Instead, we aim to highlight that our evaluation setup provides a realistic evaluation of model merging for compositional generalization, which was underexplored in past work compared to single-task (ensemble, as in model soups) or multitask performance. We do agree that held-in task performance is an important goal for many users of model merging, and indeed we therefore included the held-in task performance of each merging method in all of our setups. However, given that merging shows promise for compositional generalization and can even outperform explicit multitask learning, we highlight compositional generalization as an important focus for evaluation. Please let us know if you think we could amend our title to make this more clear.\\n\\n> As for a conference-level study paper, I would expect it to reveal more surprising conclusions, insights, or theoretical motivations. Otherwise, it feels like this paper leans towards a workshop paper or a survey-style report.\\n\\nA primary contribution of our paper is to provide a much-needed rigorous evaluation of merging methods for the promising and important application of compositional generalization. Our findings include the surprising demonstration that held-in performance and compositional generalization can sometimes be inversely correlated, reinforcing the need to explicitly test compositional generalization abilities. Thus, when developing well-performant merging methods, the community should not just optimize for held-in performance, but for compositional generalization as well. In addition, we highlight major gaps and differences in merging methods that past papers have glossed over or omitted, such as practical requirements for applying merging and computational costs. To the best of our knowledge, all of these aforementioned insights are novel and we are optimistic that our paper will significantly shift the field towards more realistic and reliable evaluations.\"}", "{\"summary\": \"The paper benchmarked different \\u201cmodel merging\\u201d methods, which are effectively methods to aggregate the weights of many different models trained on many different downstream tasks. The comparisons included different task settings, data modalities, benchmark models, and evaluation criteria (held-out compositional task performance and compute requirements, for example). Overall I found that the paper had good scientific standards (good research question, sensible controls and evaluations, methodical analyses, etc.) but that the actual findings were of potentially limited practical use. No merging method was clearly superior to others, and results were very dependent on the task setting in a way where it is unclear whether they will generalize to author task settings, datasets, or model backbones. I note, however, that I am not familiar with the model merging literature and my assessments are to be taken with a grain of salt. I hope that the authors provide a good code base, so that others can build around their work as a standardized test-bench for novel merging methods down the road.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The focus on evaluating compositional generalization is an important contribution, as intuitively it seems like this would be the main real-world use-case for model merging. It is surprising that prior work on model merging does not focus on compositional task generalization (if I understood the authors correctly). The task constructions are also nicely controlled and systematic, both for vision and language tasks.\\n2. The baselines comparison methods (such as training on some supervised data for the target held-out task) are very sensible.\", \"weaknesses\": \"1. The paper would be stronger if it benchmarked different base models on each task in order to make sure the core results generalize (especially different architectures, such as Transformers and CNNs on the vision tasks or Transformers and SSMs on language tasks).\\n2. It feels like a baseline is missing. To evaluate generalization error on held-out tasks, one could also evaluate a single fine-tuned model (one of the models being merged) on the held out task. For instance, maybe a model fine-tuned on English Question-Answering would do better on, say, Arabic Question-Answering than some of the merged models (or better at least than the \\u201cpretrained\\u201d baseline). This would amount to a comparison between merging methods vs. fine-tuning a single model on a task that is closely related to the target held-out task. While this is not necessarily the most important baseline out there (I\\u2019m sure the top merging methods would surpass it), it would nevertheless be nice to know how this performs, if the authors have to bandwidth to add it. If the baseline always performs worse than the simple \\u201cpretrained\\u201d baseline, it could simply be left out of the results figures.\\n3. The main result is the method comparison in Section 4.1. However, there are no clear trends in method dominance that generalize across the 3 task settings. This significantly reduces the impact of the paper, since it reduces the generality of the findings: we still don\\u2019t know which method is best in terms of generalization to compositionally held-out tasks, as the results differ across 3 settings. It also makes me doubt whether the results would generalize to the same exact task settings with different datasets and backbone models.\\n 1. On this note, ultimately, I think the value of this work will depend on the quality of the code base and whether it serves as an easy-to-use public benchmark where others can easily plug in new merging methods for comparison. I of course cannot evaluate whether this is the case, and time will tell whether it becomes a useful benchmark in the field.\\n4. Minor: several typos, missing words, grammar mistakes peppered throughout. For instance, the sentence in lines 193-195 is missing words (like a subject) and has singular/plural issues. Most of the text reads fine, but please proofread again to correct the language mistakes so that the text reads well everywhere.\", \"questions\": \"1. In Figure 2, are the horizontal dotted lines the average performance of models fine-tuned on that \\u201cheld-out\\u201d tasks? In other words, is there no difference between the horizontal and vertical dotted lines other than whether the task is considered \\u201cheld-out\\u201d with respect to the merged models?\\n2. Why not include the multi-task trained model as one of the constituent models being merged?\\n3. 
Lines 301-303 state: \\u201cWe note that Fisher Merging tends to generalize than RegMean and MaTS despite all three of methods implicitly minimizing the same objective (Tam et al., 2023).\\u201d Looking at the plot, this only seems to be true for the NLP tasks, but not image classification or image generation. Am I misinterpreting something? If not, this statement should be amended.\\n4. Paragraph lines 306-319 talks about correlations (and anti-correlations) between a merging method\\u2019s held-in and held-out task performance. From the plots, it is difficult to tell if these correlations are strong and statistically significant (few methods are evaluated and the performances tend to cluster, with one or a few outlier methods generally driving the correlations. What are the numeric correlations and their statistical significance, and can this be included in the text for added rigour?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response Part 2\", \"comment\": \"Many of the questions you ask are interesting in themselves and ripe to be answered in future work, but are out-of-scope when trying to provide insights for a user who is deciding if merging is right for them. For example, you ask \\u201cis this because image generation as a modality is more suited for merging?\\u201d This is a great research question, but does not help answer whether an image generation practitioner should use model merging\\u2014if the task they care about is not image generation they cannot switch to doing image generation because merging may work better in that setting. Similarly, you ask if some of the cross modality differences are \\u201cbecause it is better to merge LoRAs instead of full parameters?\\u201d Again, a great question whose answer could start to change how the community adapts pre-trained models to new tasks, but is not something an end-user (who simply is given fine-tuned models, without having control of how they were fine-tuned, and aims to merge them). For example, the online communities built around adapting image generation models like Stable Diffusion have almost exclusively used LoRAs as the adaptation method of choice. Thus, users aiming to merge Stable Diffusion-based models are forced to merge LoRAs, and our choice of LoRA merging in the Stable Diffusion setting reflects this real-world constraint. Additionally, you ask \\u201c[w]hy does the multitask model have higher out-of-domain performance than the pretrained model here, unlike in the other two settings?\\u201d This is an interesting question about differences in models trained for different modalities but does not help someone answer questions like \\u201cshould I use SLERP or MaTS to merge my models when I care about compositional generalization?\\u201d.\\n\\nYour points about quantifying how the difference between tasks affects model merging is a great direction for future work, but difficult to include in this work without a massive increase in scope. There is rich prior work on using models, trained on specific tasks, to quantify the differences in tasks (task embeddings [4], SPoT [5], etc). Using that to guide the selection of models to include in the merge sounds like a great new algorithm to study but is out of scope for this work (though we would note that similar ideas have been explored for \\\"MoErging\\\"; see references in [6]). 
Similarly, your suggestion of investigating how the amount of training constituent models receive changes that final merging result is interesting\\u2014and one would most likely see effects given results like SPoT where the task embeddings learned early in training were very different from the ones later in training\\u2014but also out of scope.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author Response Part 3\", \"comment\": \"> Paragraph lines 306-319 talks about correlations (and anti-correlations) between a merging method\\u2019s held-in and held-out task performance. From the plots, it is difficult to tell if these correlations are strong and statistically significant (few methods are evaluated and the performances tend to cluster, with one or a few outlier methods generally driving the correlations. What are the numeric correlations and their statistical significance, and can this be included in the text for added rigour?\\n\\nThis is a good point. We\\u2019ve calculated the Pearson correlation coefficient for merging performance in each setting. For image classification, r=0.828, p=0.011; for image generation, r=0.972 p=5.266e^-5; and for NLP, r=-0.852, p=0.007. These numeric correlations confirm the insights we present in the paper. We have also added this to the paper.\\n\\n[1] Ilharco, Gabriel, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. \\u201cEditing Models with Task Arithmetic.\\u201d arXiv, March 31, 2023. https://doi.org/10.48550/arXiv.2212.04089.\\n\\n[2] Yadav, Prateek, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. \\u201cTIES-Merging: Resolving Interference When Merging Models.\\u201d arXiv, October 27, 2023. https://doi.org/10.48550/arXiv.2306.01708.\\n\\n[3] Tam, Derek, Mohit Bansal, and Colin Raffel. \\u201cMerging by Matching Models in Task Parameter Subspaces.\\u201d arXiv, April 13, 2024. https://doi.org/10.48550/arXiv.2312.04339.\\n\\n[4] Jang, Joel, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, and Minjoon Seo. \\u201cExploring the Benefits of Training Expert Language Models over Instruction Tuning.\\u201d arXiv, February 9, 2023. https://doi.org/10.48550/arXiv.2302.03202.\\n\\n[5] Vu, Tu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, and Noah Constant. \\u201cOvercoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation,\\u201d May 25, 2022. https://doi.org/10.48550/arXiv.2205.12647.\"}", "{\"comment\": \"Thank you for your replies and for taking my feedback into consideration. Most of my specific questions have been addressed. For instance, I agree with the authors' reasoning for not including other base models as a result of convergent performance at scale.\\n\\nMy primary concern is still the degree to which the results help push the field forward, given that no merging method clearly came out on top. The suggestion about trying different architectures was also made with this concern in mind, to see if the findings are general enough. I agree with the authors that it might be the case that some merging methods work better for certain tasks than others, and that knowing this would be useful for practitioners. But the question is more generally about whether the results even tell us that that is the case. Let me perhaps phrase it this way. I see two potential interpretations of the main results in Section 4.1:\\n1. 
The results are robust and reliable; the merging methods that were found to work better for, say, image generation will always work better for image generation. Those that work better on NLP tasks will always work better on NLP tasks. This helps us know which method to use and when.\\n2. Alternatively, perhaps the results are more driven by noise and idiosyncrasies of the tasks/datasets. For instance, consider image generation. What if we tried the exact same approach, but with different image generation datasets? Would the results be the same? For the NLP tasks, what if the languages the fine-tuned models came from were different? Would the results be the same? For the image classification tasks, what if we used fine-tuning image datasets that were much larger? Would the results... you get the point.\\n\\nNow, I'm not necessarily asking the authors to run such experiments in a few days. All I'm saying is that when the results are very variable like in this case, more consistency in at least one aspect (e.g., consistency among more variable NLP tasks and datasets) would give me more confidence that we have at least learned *something* that is general and extends beyond the specific setups considered in the paper. The uncertainty about interpretation (1) or (2) above is why I stated in my original review that:\\n\\n>the actual findings were of potentially limited practical use. No merging method was clearly superior to others, and results were very dependent on the task setting in a way where it is unclear whether they will generalize\\n\\nI think that still reflects my overall opinion of the paper, which is why I am still comfortable with my original score of 6.\"}", "{\"comment\": \"Thanks to the authors for taking the time to respond to the review! I've read it carefully and responded to some comments below.\\n\\nOverall, I still hold to my original recommendations. I do not think my comments about the experimental design have been addressed, and I still find it challenging to believe with confidence that results will generalize to new datasets, models, or finetuning setups.\\n\\n----\\n\\n> ...we think that our decisions, which were made in an attempt to meet the users where they are, makes our study more useful. In this work, we aim to compare merging methods in realistic settings while the majority of your questions and concerns are about quantifying the differences when merging in different settings...the models and architectures we selected are the ones that are currently being used in each setting...\\n\\nIf, in practice, users were *only* merging the three specific models studied on the three specific datasets studied, I would agree with the authors' argument that their evaluation captures all that is useful to users. Unfortunately, this is not true. On the language side, most applications of `mergekit` that I've encountered have been on an array of decoder-only transformers, rather than T5. (See a quick slice [here](https://huggingface.co/mergekit-community).) Further, users merge across many model scales and types (e.g. base vs. instruction tuned), and not all are interested in cross-lingual generalization specifically, as opposed to other compositional cross-domain generalization settings. Of course, it would be impossible to evaluate methods on every model or compositional task structure that users are interested in. 
This is why Reviewer kcSZ and I both pushed for more evidence that these results generalize to different models through additional backbone experiments, and different tasks through additional dataset experiments. A sound, conference-level evaluation paper should offer insight into these new settings.\\n\\n> Similarly, you ask if some of the cross modality differences are \\u201cbecause it is better to merge LoRAs instead of full parameters?\\u201d Again, a great question whose answer could start to change how the community adapts pre-trained models to new tasks, but is not something an end-user (who simply is given fine-tuned models, without having control of how they were fine-tuned, and aims to merge them). \\n\\nIt's not clear to me why we should assume the user has no control over the finetuning process. If this is the assumed problem setup, this should be made explicit in the text and justified. \\n\\nAssuming then that the user has control over the finetuning process (even if it were subject to computational constraints), my next comment is this: although I understand that Stable Diffusion is typically finetuned with LoRA, the user can still set hyperparameters that make the adaptation process more/less similar to full finetuning (e.g. higher rank). Thus, the finetuning method remains a confounder in these results.\\n\\nMy original intention for raising the LoRA question was a bit different. If the reverse trends for image generation are actually because of LoRA rather than the model, task distribution, or modality, then one could similarly finetune language models with LoRA and potentially see the same reversal in trends.\\n\\n\\n> previous work focuses on held-in performance, but going forward merging methods should explicitly consider compositional generalization. \\n\\nI agree with the authors that the *compositional* generalization angle is new to this work, but this sentence is slightly overstated. The effect of model merging on in-domain vs. out-of-domain robustness has a long line of work, with other evaluations focused on more than just held-in performance. Granted, these papers were written before some of the more recent merging methods and didn't evaluate all the baselines you have. \\n\\n> However, we would note that applying \\\"greedy soups\\\" for compositional generalization is infeasible due to the lack of access to generalization task data.\", \"the_authors_misunderstand_the_greedy_soup_setup\": \"the idea is to add models only if they increase the *in-distribution* performance. The original paper finds that this can actually outperform a uniform soup in the out-of-distribution setting. There is no need to evaluate on the out-of-distribution generalization task data.\"}", "{\"title\": \"Author Response Part 1\", \"comment\": \"Thanks for the review! Your questions helped highlight places where our paper is unclear. 
We\\u2019ve answered your questions directly here and have updated our paper to provide more clarity for future readers.\\n\\n> The paper would be stronger if it benchmarked different base models on each task in order to make sure the core results generalize (especially different architectures, such as Transformers and CNNs on the vision tasks or Transformers and SSMs on language tasks).\\n\\nWe agree that it would be ideal to test multiple models in each setting, but we chose to design our benchmark around a single model for a few reasons: \\nIn modern practice, state-of-the-art models for each setting are remarkably and increasingly uniform\\u2014i.e., it is widespread to use transformers for image classification and NLP and to use a diffusion model for image generation.\", \"the_specific_models_we_used_in_each_setting_are_the_same_as_those_used_in_past_merging_evaluations\": \"the CLIP vision encoder is the same model as the 8-task vision benchmark from [1] and the mT5 model used for the language experiments is from the same model family as the 7 and 8 task NLP benchmark used in [2].\\nThe trends we notice for the held-in setting matches the trends in the current evaluation benchmark landscape as reported in [3]. \\nIncluding additional models in each setting increases the computational cost of the benchmark, making it less likely to be adopted. \\nWe consider it likely that the insights gained on standard Transformer architectures will be similar on less popular architectures\\nHowever, if there is a specific architecture you think would lead to different insights and/or would like to see results on, please let us know and we will do our best to evaluate it and include it in an updated draft. \\n\\n> It feels like a baseline is missing. To evaluate generalization error on held-out tasks, one could also evaluate a single fine-tuned model (one of the models being merged) on the held out task. For instance, maybe a model fine-tuned on English Question-Answering would do better on, say, Arabic Question-Answering than some of the merged models (or better at least than the \\u201cpretrained\\u201d baseline). This would amount to a comparison between merging methods vs. fine-tuning a single model on a task that is closely related to the target held-out task. While this is not necessarily the most important baseline out there (I\\u2019m sure the top merging methods would surpass it), it would nevertheless be nice to know how this performs, if the authors have to bandwidth to add it. If the baseline always performs worse than the simple \\u201cpretrained\\u201d baseline, it could simply be left out of the results figures.\\n\\nThank you for pointing out this possible baseline. We chose not to include the \\\"individual model generalization performance\\\" for a few reasons:\\nAs you suspect, in the settings we consider, a single-task fine-tuned model typically generalizes worse to unseen tasks than the underlying pre-trained model. This is due in part to the familiar problem of catastrophic forgetting and the fact that the base pre-trained models we consider are already reasonably competent at the target tasks. While there have been some works showing beneficial cross-task generalization of single-task models (e.g. [4] showed that a language model fine-tuned on CosmosQA generalized well to many unseen tasks), this rarely happens when there is a substantial shift between the held-in and generalization tasks (as in the English to Arabic QA example you proposed; see e.g. 
[5], where it was shown that same-task-different-language can be especially harmful).\\nThis baseline either could be reported as the \\\"average performance of held-in task models on generalization tasks\\\" (which, as discussed previously, would result in poor performance) or \\\"best-case performance among held-in task models on generalization tasks\\\". This latter option requires access to generalization task data for evaluation, which makes it an unattainable \\\"oracle\\\" baseline in practice. Since the held-out task models themselves are likely a stronger baseline, we consider them more important to include.\\nWe hope this clarifies why we didn't include this baseline. We will add some discussion of it to the paper to clear up further questions.\"}", "{\"comment\": \"I thank the authors for the rebuttal. As originally mentioned in the strengths section, I acknowledge the authors' rigorous evaluation study.\\nHowever, my concerns about the weakness remain. For the model ensemble, it is not surprising that it achieves a trade-off between held-in performance and generalization. This paper shows a good validation for a more specific case in terms of model merging (as a way of ensemble) and compositional generalization. Therefore I decided to keep the score.\"}", "{\"summary\": \"This paper provides an empirical study of model merging. Specifically, it focuses on the compositional generalization of capabilities, with control of many experiment variables. By comparing many merging methods on several tasks like image classification, image generation, and NLP, this study provides the community with some takeaways for model merging.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The related works and their summary of the methods are great\", \"I appreciated the detailed study of the experiments\", \"Overall writing is clear\"], \"weaknesses\": [\"Title: I'm not super convinced that \\\"compositional generalization\\\" is the \\\"realistic\\\" goal for model merging. Many times, model merging might not be for the emerging capability of several different tasks, but just to improve on the same held-in tasks such as the original motivation of Model Soup etc.\", \"As for a conference-level study paper, I would expect it to reveal more surprising conclusions, insights, or theoretical motivations. Otherwise, it feels like this paper leans towards a workshop paper or a survey-style report.\"], \"questions\": \"The messages this paper aims to deliver to the community are not super clear to me. Do we already believe model merging is the proper way toward a multitasking model and do the authors suggest it is a promising approach or not?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"(a) summary\", \"This paper investigates how to evaluate the compositional generalization capability of model merging methods. It proposes a shared experimental setting for conducting empirical studies of the performance, computational cost, and scalability of merging methods. 
Experimental results identify the requirements, and relative characteristics of different methods for better practice in the future work.\", \"(b) strengths\", \"The paper provides a good summery of related work on merging methods.\", \"It presents rigorous evaluation of merging methods on various models and tasks.\", \"The writing is clear in general.\", \"(c) weaknesses\", \"It provides inconsistent conclusions for different experimental settings.\", \"The experimental design does not help users to get insights from the results.\", \"It is not clear how to generalize the results to different settings.\", \"The findings from the study has limited potential practical use: there is no conclusion which method is the best one for CG.\", \"(d) decision\", \"This paper presents an empirical study for model merging. Although the experimental results on various models and tasks are useful, the contributions are not substantive enough for a full ICLR submission. More insights or theoretical motivations will make the paper stronger.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers acknowledged the authors' efforts on rigorous evaluation study of model merging methods, however, they shared the concerns on the experimental design, inconsistent conclusions from different settings, and generalizability of the results. All these concerns affect the potential practical use from the findings. The authors rebuttal addressed some concerns, but the reviewers still think it is difficult to draw precise and well-justified insights from the study and the paper in its current form is not ready for publication.\"}" ] }
BpyHIrpUOL
PolyhedronNet: Representation Learning for Polyhedra with Surface-attributed Graph
[ "Dazhou Yu", "Genpei Zhang", "Liang Zhao" ]
Ubiquitous geometric objects can be precisely and efficiently represented as polyhedra. The transformation of a polyhedron into a vector, known as polyhedra representation learning, is crucial for manipulating these shapes with mathematical and statistical tools for tasks like classification, clustering, and generation. Recent years have witnessed significant strides in this domain, yet most efforts focus on the vertex sequence of a polyhedron, neglecting the complex surface modeling crucial in real-world polyhedral objects. This study proposes \textbf{PolyhedronNet}, a general framework tailored for learning representations of 3D polyhedral objects. We propose the concept of the surface-attributed graph to seamlessly model the vertices, edges, faces, and their geometric interrelationships within a polyhedron. To effectively learn the representation of the entire surface-attributed graph, we first propose to break it down into local rigid representations that capture each local region's position relative to the remaining regions without geometric information loss. Subsequently, we propose PolyhedronGNN to hierarchically aggregate the local rigid representation via intra-face and inter-face geometric message passing modules, to obtain a global representation that minimizes information loss while maintaining rotation and translation invariance. Our experimental evaluations on four distinct datasets, encompassing both classification and retrieval tasks, substantiate PolyhedronNet's efficacy in capturing comprehensive and informative representations of 3D polyhedral objects.
[ "polygon", "polyhedron", "polygonal representation", "representation learning", "graph neural networks" ]
Accept (Poster)
https://openreview.net/pdf?id=BpyHIrpUOL
https://openreview.net/forum?id=BpyHIrpUOL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jPomMitcJj", "gTxXq6Gf0O", "LzoENZHNQV", "KqkAfGxByO", "Jl8Y3LhaZo", "J6qtDWwz5f", "7HjsohBLYV" ], "note_type": [ "official_comment", "decision", "official_review", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1732519825192, 1737524266607, 1731233214017, 1731036507733, 1730359676046, 1730610885154, 1734684626767 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13538/Area_Chair_kkDz" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13538/Reviewer_GCC3" ], [ "ICLR.cc/2025/Conference/Submission13538/Reviewer_tXyF" ], [ "ICLR.cc/2025/Conference/Submission13538/Reviewer_3fMs" ], [ "ICLR.cc/2025/Conference/Submission13538/Reviewer_8zui" ], [ "ICLR.cc/2025/Conference/Submission13538/Area_Chair_kkDz" ] ], "structured_content_str": [ "{\"title\": \"Please check the authors' responses\", \"comment\": \"Dear reviewers,\\n\\nCould you please check the authors' responses, and post your message for discussion or changed scores?\\n\\nbest,\\n\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper presents a novel graph representation for polyhedral shapes and a novel message-passing approach over this representation to perform polyhedral representation learning.\\nthe proposed surface-attributed graphs maintains an hyperedge per face connecting the ordered boundary edges of said face, and then characterize said graph with a local rigid representation where the connectivity and metric structure is encapuled by the series of two-hop paths from each vertex.\\nThe flow of information from one face to another is captured by splitting said paths into inter-face flows where the path changes face, and \\n intra-face flow, capturing information of a single face.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Novel rand effective representation, relatively novel redefinition of the message-passing over the representation.\\nVery strong classification and retrieval tasks.\\nThe paper is relatively well written, although in several places key concepts (such as the local rigid representation) are implicitly defined inline, while a separate formal definition would have made the read easier.\", \"weaknesses\": \"The work has its niche and works well within it.I don't see any clear weaknesses except perhaps the impact within and outside the niche.\", \"questions\": \"no further question\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a framework named PolyhedronNet for effective representation learning of 3D polyhedron. By introducing the Surface-Attributed Graph, the method designs a Local Rigid Representation based on Graph Neural Networks and further develops PolyhedronGNN to aggregate these local representations, ultimately achieving a robust global polyhedral representation invariant to rotation and translation. 
Experimental results demonstrate significant performance improvements on four datasets, particularly in classification and retrieval tasks.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"PolyhedronNet introduces the novel concept of Surface-Attributed Graph for polyhedral representation learning, addressing the limitations of traditional graph representations that cannot capture the face attributes of polyhedra.\", \"The experimental results on four datasets are thorough, demonstrating superior performance in both classification and retrieval tasks compared to existing methods.\"], \"weaknesses\": [\"Some definitions are expressed quite confusingly, making it difficult to accurately understand what the authors intend to convey. For instance, the notation \\u03d5_(i,j,k) in Equation 1 and the corresponding illustration in Figure 2(b), as well as the description of a_(j,i)and a_(k,j)in the calculation of g^((l)) in Equation 2.\", \"The article emphasizes the importance of modeling face attributes, but it does not clearly indicate how the attribute set a in the surface-attributed graph G=(V,E,F,a) is obtained.\", \"While the paper compares with several methods, it would be beneficial to include comparisons with more recent state-of-the-art methods in geometric deep learning.\", \"The ablation study is limited to the effect of face attributes. Further ablation studies on different components of the framework could provide more insights.\"], \"questions\": [\"As pointed out in Weakness 2, could you please specify how you modeled the face attributes?\", \"In the experiments on polyhedral digit recognition shown in Figures 4, how do face attributes effectively differentiate similar digits, such as \\\"6\\\" and \\\"9\\\"?\", \"In section 5.7, you mentioned that \\\"The ability to discern different parts of objects through attributes like color is particularly effective in complex cases involving multi-part objects such as loudspeakers, knives, lamps, and benches, facilitating accurate feature assembly.\\\" However, from the ablation studies in Tables 3 and 4, it seems that the inclusion of face attributes does not significantly affect the ShapeNet dataset, but it severely affects the MNIST-C dataset. Why is that?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors propose a novel method for polyhedra representation learning. The design is straightforward and has been proven to be effective.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This design is very straightforward, easy to understand, and has been proven to be effective in the experimental part.\", \"weaknesses\": \"I am not an expert in this domain; however, I have an understanding of the author's design and experiments. My inquiries primarily stem from the following two aspects:\\na)There is an absence of a comparison experiment with Reference [1] mentioned below. It appears that both are engaged in polyhedron representation learning.\\nb)What are the advantages of this face-based representation method in comparison to the point cloud-based representation method, particularly in numerical experiments? (Given that this paper explicitly mentions the shortcomings of the point cloud-based method.)\\n\\n[1]Yu, D., Hu, Y., Li, Y., & Zhao, L. (2024, August). 
PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 4012-4022).\", \"questions\": \"Please add experiments for comparison with PolygonGNN.\\nPlease add experiments for comparison with modern point cloud methods, especially the works in recent years rather than just PointNet.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents PolyhedronNet, a novel framework for representation learning of polyhedra using surface-attributed graphs (SAG). The authors aim to address the limitations of existing methods that predominantly focus on vertex sequences, failing to capture the intricate surface characteristics of 3D polyhedral objects. The proposed approach involves decomposing the SAG into local rigid representations, which effectively maintain geometric relationships while minimizing information loss. The authors introduce PolyhedronGNN, a graph neural network designed to hierarchically aggregate these local representations, achieving a global representation that is invariant to rotation and translation. Experimental results on multiple datasets demonstrate the effectiveness of PolyhedronNet in classification and retrieval tasks, significantly outperforming other methods\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper incorporates face attributes into the model, enhancing the understanding of geometric and semantic information and improving representation learning.\\n2. The authors introduce two message passing mechanisms (inner-face and cross-face) that facilitate information flow within and between faces, strengthening local information aggregation and sensitivity to geometric relationships. \\n3. Results across multiple datasets show that PolyhedronNet achieves notable performance gains in classification and retrieval tasks, validating the proposed methods and their applicability in 3D object representation learning.\", \"weaknesses\": \"1. While the paper presents experiments on four datasets, the diversity of polyhedral object types is somewhat limited. I recommend that the authors include a broader range of datasets to enhance the generalizability of the results and to provide a more comprehensive evaluation of the model's performance.\\n2. The proposed PolyhedronGNN appears to be computationally intensive, especially when applied to large polyhedral datasets. It would be beneficial for the authors to discuss the model's scalability and potential optimization strategies for handling larger datasets efficiently, as this would greatly enhance its practical applicability.\", \"questions\": \"1. I recommend adding additional baseline comparisons, such as HGT, HAN, and PolygonGNN, to provide a more comprehensive evaluation of PolyhedronNet's performance. Additionally, incorporating some datasets like DBSR for validation could enhance the robustness of the results.\\n2. The paper employs a sum operation for aggregating vertex features. It would be beneficial to provide experimental results for alternative aggregation methods, such as average and max, to examine their impact on model performance.\\n3. Could you conduct ablation experiments to compare the inner-face and intra-face geometric message passing modules? \\n4. 
Please provide an analysis of the time complexity of the proposed methods. If the complexity is too high, are there any strategies or optimizations that could be implemented to address this issue?\\n5. Given that the experimental results demonstrate significant improvements, can you provide the open-source code?\", \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper aims to learn the representation of 3D polyhedral objects by proposing PolyhedronNet, which models the vertices, edges, faces, and their geometric interrelationships. The major technical contributions include the local rigid representation and the geometric message passing modules for learning global representations with rotation and translation invariance. The proposed approach was evaluated on four datasets, and the results showed its effectiveness. The proposed PolyhedronNet works on 3D polyhedral shapes with some novel designs over this representation, which is a valuable contribution. Considering the overall positive comments after the rebuttal, the paper can be accepted. Since the reviewers suggested more experimental comparisons and analysis, the paper should include these revisions in the final version.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer GCC3 did not point out significant weaknesses, while Reviewer tXyF raised concerns about some definitions and about insufficient comparisons with state-of-the-art methods and ablation studies. Reviewer 8zui commented on the diversity of polyhedral object types in the datasets, the model's scalability to larger datasets, adding more baseline methods, time complexity, etc. Reviewer 3fMs suggested more comparisons with point cloud network methods. The rebuttal mostly addressed these concerns, and all reviewers gave a rating of 6.\"}" ] }
Bpn8q40n1n
ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer
[ "Zhen Han", "Zeyinzi Jiang", "Yulin Pan", "Jingfeng Zhang", "Chaojie Mao", "Chen-Wei Xie", "Yu Liu", "Jingren Zhou" ]
Diffusion models have emerged as a powerful generative technology and have been found to be applicable in various scenarios. Most existing foundational diffusion models are primarily designed for text-guided visual generation and do not support multi-modal conditions, which are essential for many visual editing tasks. This limitation prevents these foundational diffusion models from serving as a unified model in the field of visual generation, like GPT-4 in the natural language processing field. In this work, we propose ACE, an All-round Creator and Editor, which achieves performance comparable to that of expert models across a wide range of visual generation tasks. To achieve this goal, we first introduce a unified condition format termed Long-context Condition Unit (LCU), and propose a novel Transformer-based diffusion model that uses LCU as input, aiming for joint training across various generation and editing tasks. Furthermore, we propose an efficient data collection approach to address the absence of available training data. It involves acquiring pairwise images with synthesis-based or clustering-based pipelines and supplying these pairs with accurate textual instructions by leveraging a fine-tuned multi-modal large language model. To comprehensively evaluate the performance of our model, we establish a benchmark of manually annotated paired data across a variety of visual generation tasks. The extensive experimental results demonstrate the superiority of our model in the field of visual generation. Thanks to the all-in-one capabilities of our model, we can easily build a multi-modal chat system that responds to any interactive request for image creation using a single model to serve as the backend, avoiding the cumbersome pipeline typically employed in visual agents.
[ "Image Generation and Editing", "Diffusion Transformer", "Instruction Following", "Unified Framework" ]
Accept (Poster)
https://openreview.net/pdf?id=Bpn8q40n1n
https://openreview.net/forum?id=Bpn8q40n1n
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmkIrwso5b", "u6jviIDAma", "tk23KP68k4", "kXUc1sZ3fG", "kFhZfEBfl2", "jV1vIB6knN", "i4IG5EMJVd", "ecS3jjAqz4", "bf5D1ZMrmF", "bIVk4ZjutE", "YvLanBrZKJ", "Ut7V2TYefS", "TqTQz3j2Vi", "R7HHx8W7gp", "Ou6KLDX4KF", "AwmayvXgFg", "AHjfcxwbNe", "7BxETnEyE3", "4yPI7O7yzR", "4LhYjlQcyh", "2HteMKyGjm", "0iQeYs7d7q" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733189709576, 1730721580926, 1730561256800, 1732707615486, 1732202333688, 1733189689656, 1734594632627, 1732204080846, 1732202911048, 1732701185761, 1732707572391, 1730733484180, 1732203169052, 1730711131431, 1732706906675, 1737523927577, 1732201701019, 1730373379636, 1732204740556, 1733189662986, 1732207210718, 1732776945924 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_vGPZ" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_bd3V" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Area_Chair_kfBz" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_NXSb" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_NXSb" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_vndP" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_vGPZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Reviewer_7X1X" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Authors" ], [ "ICLR.cc/2025/Conference/Submission8711/Area_Chair_kfBz" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 7X1X,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. \\n\\nWe have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns. Please let us know if you have any remaining questions or require additional clarification.\\n\\nBest regards,\\n\\nAuthors of Submission 8711\"}", "{\"summary\": \"This work proposes an All-round Creator and Editor as a unified foundation model for visual generation tasks. The main technical contribution lies in introducing a Long-context Condition Unit that standardizes diverse input formats. Built upon diffusion transformers, the architecture incorporates condition tokenizing, image indicator embedding, and long-context attention blocks to achieve unified visual generation capabilities. To address the scarcity of training data, the authors develop a data collection pipeline that combines synthesis/clustering-based approaches. 
Additionally, they establish a comprehensive benchmark for evaluating model performance across various visual generation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1) The framework unifies multiple image generation and editing tasks through a single model, avoiding the hassle of calling separate specialized models. The proposed LCU provides a structured approach to incorporating historical context in visual generation.\\n\\n2) The paper presents systematic methodologies for data collection and instruction construction, which contributes to the development of all-in-one visual generative foundation models.\\n\\n3) The evaluation benchmark provides comprehensive coverage across diverse image manipulation and generation tasks, enabling thorough performance assessment.\", \"weaknesses\": \"Technical Issues:\\n1. Formatting inconsistencies: in lines 417-418, the image placement obscures instruction text.\\n\\n2. The authors are encouraged to provide discussions on task-specific performance trade-offs during training, specifically how optimizing for one task might affect the performance of others.\\n\\n3. It would be helpful to provide methodological details regarding parameters in data preparation (lines 321-325), such as cluster number determination and data cleaning criteria.\\n\\n4. The qualitative results in Figure 5 reveal some limitations. 1) Row 1 (left): ACE generates a distorted hand. 2) Row 2 (right) and Row 4 (left): The model exhibits undesired attribute modifications not specified in the instructions, including unintended gesture alterations / head rotation changes, and camera perspective shifts.\", \"questions\": \"1. Regarding Figure 6, the authors are encouraged to elaborate on the empirical or theoretical basis for the chosen data distribution and its specific advantages for the ACE model.\\n\\n2. The paper would benefit from addressing the practical challenges of model updates. Specifically, how might one efficiently incorporate new functionalities without complete model retraining? This consideration is crucial for the model's practical deployment and ongoing development.\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"The paper mentions that the internal dataset was used for training, which may involve issues related to portraits and copyrighted images.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes a unified visual generation and editing framework that supports a wide range of predefined tasks. To train and evaluate the proposed ACE, this work also introduces a data curation pipeline and an overall benchmark. Experimental results and numerous use cases demonstrate the superiority of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The method provides a unified visual generation and editing framework that supports a wide range of predefined tasks.\\n The benchmark is comprehensive, designed to evaluate visual generation and editing models effectively.\", \"weaknesses\": \"The paper lacks some ablation studies to help readers understand the authors' design choices. Additionally, the results in Table 2 may not be entirely fair, as the superiority of ACE might be attributed to the scale of the data.\", \"questions\": \"1. What would happen if the Text Encoder T5 were replaced with an LLM? 
Would it be able to understand more diverse instructions?\\n2. Will the collected data be made public?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks again for your time and effort in reviewing our work.\"}", "{\"comment\": \"Dear Reviewer NXSb,\\n\\nThank you for acknowledging our framework. We address your concerns as follows:\\n\\n---\\n\\n**Q1: Task Interdependence**\\n\\nThank you for your suggestions. We have added a discussion on task interdependence in the *\\\"Supplementary Materials, Section IMPLEMENTATION DETAILS, SubSection Task Interdependence\\\"*. The main details are as follows: \\n\\nIn an all-in-one visual generation model, there exist multiple interactions between tasks, similar to that in large language models, and this relationship can be viewed as a complex balancing action.\\n\\ni) Complementarity between tasks: The combined influence of various tasks can lead to a certain degree of generalized behavior across tasks. For instance, in the style transfer task, our prepared data and training process focus on pixel-aligned global image transfer. However, by incorporating learnings from other tasks related to mask guidance or subject guidance, the model can acquire the ability to perform style transfer in localized areas. (as in Fig. 29)\\n\\nii) Competition between tasks: As the scale of tasks increases, the potential for competition also grows, particularly in scenarios where user instructions are ambiguous. For example, when adding the text \\\"APPLE\\\" to an image, it is essential to specify that it is text to be added; otherwise, due to semantic ambiguity, the result may instead involve the addition of an object depicting an apple. (as in Fig. 29)\\n\\nTo achieve optimal performance balance, we first focus on adjusting the data sampling rates for each task in a phased manner during the training process, monitoring this through a validation set. Additionally, more detailed descriptions of instructions are needed in the preparation of training data to prevent semantic confusion between tasks. Through these methods, we aim to ensure that the model can fully leverage the complementarity between different tasks while controlling for any potential negative impacts.\\n\\nHowever, the relationships between different tasks still require further exploration to better optimize the model's performance. Future work will also focus on how to effectively evaluate and adjust these influencing factors to achieve a more balanced and comprehensive execution of tasks.\\n\\n---\\n\\n**Q2: Task Selection**\\n\\nWe aim for ACE to encompass all visual generation tasks as possible, which is why we have not conducted a unified consideration specifically focused on reasonable tasks. Specifically, we propose the LCUs to unify various modal conditions, enabling the model to be compatible with different tasks. Additionally, if new visual generation tasks need to be supported, they can also be processed and fine-tuned accordingly through this paradigm.\\n\\n---\"}", "{\"comment\": \"Dear Reviewer bd3V,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. \\n\\nWe have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns. 
Please let us know if you have any remaining questions or require additional clarification.\\n\\nBest regards,\\n\\nAuthors of Submission 8711\"}", "{\"metareview\": \"This paper proposes an all-in-one model that supports a wide range of visual generation and editing tasks. Reviewers recognize the contribution of the unified framework and extensive evaluations. Questions are raised regarding design choices, more analysis, and experiments. The authors addressed most of the concerns during the rebuttal, and all reviewers give positive scores. Therefore, the area chair recommends accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed the reviewer questions on design choices, more analysis (such as task independence), and experiments during the rebuttal. Two reviewers replied in the rebuttal that there concerns are addressed. The other three reviewers did not reply (although the authors and area chair have asked these reviewers for multiple times). The area chair checked the reviewer's response and believes that the authors have adequately addressed major concerns proposed by those reviewers.\"}", "{\"comment\": \"Dear Reviewer vndP,\\n\\nThank you for your time and suggestions. We address your concern as follows:\\n\\n---\\n\\n**Q1: About Drawing**\\n\\nThank you for your feedback. We hope that the chosen text and colors can help readers better understand the content without causing any misunderstandings. Throughout the paper, we have adopted a consistent font and specific shades (blue, yellow, and green) as the baseline pattern, while using lighter and darker corresponding shades as accents to ensure visual harmony and aesthetics. All figures adhere to this principle as much as possible.\\n\\nAdditionally, Reviewer vGPZ and bd3V believe that our presentation is *good*, while Reviewer 7X1X considers it *excellent* and further notes that \\\"*it excels in writing and figure drawing, with clear diagrams and rigorous logic, providing an excellent reading experience for the audience*\\\". We will continue to strive to improve our visual presentation and appreciate your understanding.\\n\\n---\\n\\n**Q2: ACE vs. Other All-in-One**\\n\\nACE is an All-round **Creator** and **Editor**, that supports a wide range of visual generation tasks, including 8 basic types: Text-guided Generation, Low-level Visual Analysis, Controllable Generation, Semantic Editing, Element Editing, Repainting, Layer Editing, and Reference Generation. \\n\\nThe low-level tasks you mentioned are only **a very small part** of what we focus on. Furthermore, the Low-level Visual Analysis described in the manuscript includes *Image Segmentation, Depth Estimation, Human-pose Estimation, Image Mosaic, Image Degradation/Super-Resolution, Image Grayscale, Edge Detection, Doodle Extraction, Contour Extraction, and Scribble Extraction*. I have not been able to find any existing work that uses a single model to handle all of these tasks.\\n\\nIn addition to handling low-level tasks, we also have many other applications, such as basic text-to-image generation, comprehensive controllable generation, instruction-based editing, reference-guided generation, and long-context-guided generation. To support all of these functions, an all-in-one approach needs to be redesigned and cannot be directly referenced from the low-level domain you mentioned.\\n\\nThere were some all-in-one controllable generation methods, such as Uni-Controlnet[1] and Controlnet-Union-SDXL[2]. 
However, these methods merely provide a controllable generation all-in-one model from a unified perspective, which only accounts for a part of our tasks. Furthermore, we also compared several general editing methods, as detailed in Fig. 5, including IP2P[3], MagicBrush[4], CosXL[5], SEED-X[6], and UltraEdit[7].\\n\\nAs you mentioned, this is a lot of work, as we need to manage the data for various tasks separately, especially since acquiring data for the more advanced tasks is much more challenging compared to low-level tasks (*\\\"Supplementary Material, Section DATASETS DETAIL*\\\"). At the same time, we have to design a unified method that covers these tasks. \\n\\nWe appreciate your feedback and look forward to it.\\n\\n[1] Zhao et al. \\\"Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models.\\\" NeurIPS 2023.\\n\\n[2] xinsir, et al. \\\"controlnet-union-sdxl-1.0.\\\" Hugging Face.\\n\\n[3] Brooks et al. \\\"InstructPix2Pix: Learning To Follow Image Editing Instructions.\\\" CVPR2023.\\n\\n[4] Zhang et al. \\\"MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing.\\\" \\nNeurIPS 2023.\\n\\n[5] StabilityAI. \\\"CosXL.\\\" Hugging Face.\\n\\n[6] Ge et al., \\\"SEED-Data-Edit Technical Report: A Hybrid Dataset for Instructional Image Editing.\\\" arXiv 2024.\\n\\n[7] Zhao et al. \\\"UltraEdit: Instruction-based Fine-Grained Image Editing at Scale.\\\" arXiv 2024.\\n\\n---\\n\\n**Q3: Further Information**\\n\\nFor more information, please refer to the *\\\"Supplementary Material\\\"*. In this material, we have dedicated a significant amount of space to provide a more detailed description of the methods and to showcase additional visual results. \\n\\nRegarding the computing resources you mentioned, we used A800 as the base hardware and dynamically adjusted the quantity used according to the training to meet the expected batch size. For further details, please refer to *\\\"Supplementary Material, Section IMPLEMENTATION DETAILS\\\"*. Additionally, related work can also be found in *\\\"Supplementary Material, Section RELATED WORK\\\"*.\\n\\n---\"}", "{\"comment\": \"Dear Reviewer vGPZ,\\n\\nThank you for acknowledging our contributions and your valuable comments. We address your concern as follows:\\n\\n---\\n\\n**Q1: Formatting**\\n\\nThanks. We have completed the corrections in the manuscript.\\n\\n--- \\n\\n**Q2: Task Interdependence**\\n\\nThank you for your suggestions. We have added a discussion on task interdependence in the *\\\"Supplementary Materials, Section IMPLEMENTATION DETAILS, SubSection Task Interdependence\\\"*. The main details are as follows: \\n\\nIn an all-in-one visual generation model, there exist multiple interactions between tasks, similar to that in large language models, and this relationship can be viewed as a complex balancing action.\\n\\ni) Complementarity between tasks: The combined influence of various tasks can lead to a certain degree of generalized behavior across tasks. For instance, in the style transfer task, our prepared data and training process focus on pixel-aligned global image transfer. However, by incorporating learnings from other tasks related to mask guidance or subject guidance, the model can acquire the ability to perform style transfer in localized areas. (as in Fig. 29)\\n\\nii) Competition between tasks: As the scale of tasks increases, the potential for competition also grows, particularly in scenarios where user instructions are ambiguous. 
For example, when adding the text \\\"APPLE\\\" to an image, it is essential to specify that it is text to be added; otherwise, due to semantic ambiguity, the result may instead involve the addition of an object depicting an apple. (as in Fig. 29)\\n\\nTo achieve optimal performance balance, we first focus on adjusting the data sampling rates for each task in a phased manner during the training process, monitoring this through a validation set. Additionally, more detailed descriptions of instructions are needed in the preparation of training data to prevent semantic confusion between tasks. Through these methods, we aim to ensure that the model can fully leverage the complementarity between different tasks while controlling for any potential negative impacts.\\n\\nHowever, the relationships between different tasks still require further exploration to better optimize the model's performance. Future work will also focus on how to effectively evaluate and adjust these influencing factors to achieve a more balanced and comprehensive execution of tasks.\\n\\n---\\n\\n**Q3: Details of Data Processing**\\n\\nWe provide a detailed analysis and parameter description regarding this issue in the *\\\"Supplementary Materials, Section IMPLEMENTATION DETAILS, SubSection Data Preprocessing Details.\\\"*.\\n\\nIn the data preparation stage, our main considerations for parameter selection are computational efficiency and ensuring high data relevance. The designed hierarchical aggregation pipeline for pairing content-related images involves clustering, identifying first-level disjoint sets, and determining second-level disjoint sets. \\n\\nInitially, data is clustered into 10, 000 clusters using K-means clustering based on SigLip features, allowing the Union-Find algorithm to be executed more efficiently by keeping the data scale under 100K one node in the parallel execution. First-level disjoint sets are formed by analyzing the similarities of SigLip features within these clusters, using a SigLip similarity matrix and thresholds for data pruning to ensure strong internal connections. Second-level disjoint sets are established through task-specific correlations, with specialized models used for various tasks such as background alterations or ID preservation, applying different similarity measures and thresholds to maintain the necessary correlation. This process utilizes advanced data mining and correlation models tailored to specific tasks, employing techniques like binary classification with the ViT-B-16-SigLIP and cosine distance for facial features. \\n\\n---\"}", "{\"comment\": \"I thank the authors for the response. I believe an in-depth analysis of the relation between tasks would strengthen the paper, so Nevertheless, the paper has some contributions to the community. Thus, I will keep my original score.\"}", "{\"comment\": \"Thank you for acknowledging our contributions, as well as the time and effort invested.\"}", "{\"summary\": \"The paper presents a method to train a unified model for 8 different tasks: Text-guided Generation, Low-level Visual Analysis, Controllable Generation, Semantic Editing, Element Editing, Repainting, Layer Editing and Reference Generation. The idea is intuitive. The main contribution of the paper is the framework for generating paired training data. The source of the data generation comes from two aspects: 1. synthetic generation and 2. from publicly available datasets (LAION-5B). 
To verify the results of this task, authors also create a new benchmark called ACE Benchmark.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The dataset generated in this paper is beneficial to the community. This will help other researchers follow this series of research works.\\n2. A unified model for all tasks is also more efficient compared to have several individual models specific to certain type of tasks.\", \"weaknesses\": \"1. It seems the author does not have clear discussions on how those tasks affect each other. Are they beneficial to each other? Or some of the tasks are reducing the performance of other tasks? How to select the most reasonable tasks that should be unified with the single model? I believe adding this type of discussion with corresponding experiments will make the paper more solid.\", \"questions\": \"Indeed, as I mentioned in the weakness, how those tasks affect each other?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"---\\n\\n**Q4: Results in Fig.5**\\n\\ni\\uff09As stated in the Limitations subsection of the *\\\"Supplementary Materials, Section DISCUSSION\\\"*, the quality of our model's generation is constrained by its scale (only 0.6B), which may lead to some common issues often encountered in image generation tasks. Increasing the model's scale can effectively alleviate this issue.\\n\\nii\\uff09ACE can intentionally control whether the generation is aligned or unaligned through instructions. For example, methods like InstanceID and CosXL are aligned, keeping the character's pose mostly unchanged while altering their style or background. In contrast, IP-Adapter and FaceChain retain the core facial features, with other content being controlled by the text. By using descriptive instructions, we can achieve more precise control over the generated results or present a diverse array of content.\\n\\n---\\n\\n**Q5: Training Data Distribution**\", \"the_training_data_used_for_ace_depends_on_the_following_aspects\": \"ii\\uff09Ease of Data Acquisition: Tasks such as low-level Visual Analysis and Repainting rely on on-the-fly processing flows that can be easily obtained, while conditional generation data can be derived from various conditional models. In contrast, tasks such as semantic editing and element editing depend on more complex pipelines, and obtaining data for multi-image tasks is even more challenging.\\n\\nii\\uff09Use of High-Quality Data: During the model training phase, we divided the process into Instruction Alignment and Aesthetic Improvement. Higher-quality data helps us achieve a better-quality model.\\n\\niii\\uff09Data Scaling Law: It has been proven that data scaling laws are often simple yet effective, and we are continuously working on data construction.\\n\\n---\\n\\n**Q6: Model Update**\\n\\nRegarding how the model can be quickly updated, there are two considerations:\\n\\ni\\uff09Data-Driven: When the constructed dataset reaches a sufficient scale and high quality, combining and training it with the current state-of-the-art generative models can yield a high-quality model.\\n\\nii\\uff09Model-Driven: Once a foundational editing model is adequately trained, the model itself possesses a certain level of generalization capability. 
Adapting to new tasks can also be achieved through the rapid application of fine-tuning strategies (such as LoRA, etc.), allowing for plug-and-play support.\\n\\n---\"}", "{\"summary\": \"1. Propose ACE, a unified foundational model framework that supports a wide range of visualgeneration tasks, achieve a best task coverage.\\n2. Define the CU for unifying multi-modal inputs across different tasks and incorporate long context CU.\\n3. Design specific data construction pipelines for various tasks to enhance the quality and eff-ciency of data collection.\\n4. Establish a more comprehensive evaluation benchmark compared to previous ones, cover-ing the most known visual generation tasks. \\n\\nIt's a lot of work, from method to data to data construction pipelines to benchmark, very systematic and complete work.\\nAnd all-in-one models are really interesting and is consistent with the general trend of generate model development.\\nBut,\\ndrawings are terrible, and the method is a little weak. Maybe you could use the all-in-one methods in low-level works as reference.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Propose ACE, a unified foundational model framework that supports a wide range of visualgeneration tasks, achieve a best task coverage.\\n2. Define the CU for unifying multi-modal inputs across different tasks and incorporate long context CU.\\n3. Design specific data construction pipelines for various tasks to enhance the quality and eff-ciency of data collection.\\n4. Establish a more comprehensive evaluation benchmark compared to previous ones, cover-ing the most known visual generation tasks\\n5.Analyze and categorize these conditions from textual and visual modalities respectively, includeTextual modality and Visual modality.\", \"weaknesses\": \"1. The drawings are terrible!!! In particular, Figure 3. Incongruous text proportions and strange colour scheme...It's in the lower-middle range of T2I work.\\n2. The method is a little weak. All-in-one methods have been far dicussed in the field of low-level and it's ripe for the picking. Compared with them, the ACE module is not that impressive.\", \"questions\": \"Please DRAW better.\\nI don't find the computing resource? I think it would be big, maybe you could have a discuss.\\nRelated work? i think there should be other works that make try building an all-in-one visual generation model. Maybe you could list them clearly, I'm not an expert on this.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors for the response. It has addressed most of my concerns and I will maintain my original scores.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear all,\\n\\nWe would like to express our gratitude to our reviewers for their valuable comments. 
For positive comments, \\n- significant contribution (R-vGPZ, R-7X1X),\\n- unified & broad task support & efficient framework (R-NXSb, R-vGPZ, R-bd3V),\\n- systematic and complete work (R-vGPZ, R-vndP),\\n- comprehensive evaluation benchmark (R-vGPZ, R-bd3V, R-7X1X),\\n- well writing and drawing with excellent reading experience (R-7X1X)\\n\\nwe appreciate them and will carry them forward.\\n\\nWe would like to further clarify our goal and contributions, address the common concerns raised by the reviewers, and outline the revisions made to the manuscript in response to these comments.\\n\\n**1. Goal**\\n\\nWe aim to create an all-in-one model that supports a wide range of visual generation and editing tasks, which we have named ACE: an All-round Creator and Editor. Currently, it covers eight basic types: Text-guided Generation, Low-level Visual Analysis, Controllable Generation, Semantic Editing, Element Editing, Repainting, Layer Editing, and Reference Generation. To achieve this, we define a universal LCU input paradigm, design specific data construction pipelines, and propose a comprehensive evaluation benchmark. We hope to continually expand task capabilities and improve generation quality, providing momentum for the development of the open-source community.\\n\\n**2. Task Interdependence**\\n\\nWe add a discussion about this in the *\\\"Supplementary Materials, Section IMPLEMENTATION DETAILS\\\"*. Below, we briefly describe the issue:\\nIn an all-in-one visual generation model, there exist multiple interactions between tasks, similar to that in large language models, and this relationship can be viewed as a complex balancing action. To achieve an optimal performance balance, we handle the data preparation and model training processes to ensure that the model can fully leverage the complementarity between different tasks while controlling for any potential negative impacts.\\n\\n**3. Design choices**\\n\\nWe add an additional section in the *\\\"Supplementary Material, Section ARCHITECTURE DESIGN\\\"* to clarify our design considerations and provide relevant visual analyses, which, in brief, include the following modules:\\nLong-context Attention Block is specifically designed to handle input image sequences of varying lengths, addressing the limitations of the conventional attention block employed in DiT, which cannot effectively manage sequences of disparate lengths. \\nImage Indicator Embeddings facilitate the alignment between images in the sequence and their corresponding mentions within the text prompt.\\n\\n**4. More implementation details**\\n\\nBased on the reviewers' comments, we have supplemented relevant content in the *\\\"Supplementary Material, Section IMPLEMENTATION DETAILS\\\"*, including a further description of the training process and adding the corresponding parts of data preprocessing, computational efficiency, checkpoints evaluation, and visualization of the editing process to address the reviewers' concerns.\\n\\nFor other concerns, we address them in the respective comments to the reviewers.\\n\\n\\nThanks and best regards,\\n\\nAuthors of Submission 8711\"}", "{\"summary\": \"This paper introduces ACE (All-round Creator and Editor), a unified foundational model capable of handling a diverse array of visual generation tasks. 
By incorporating Long Contextual Units (LCU) and an efficient multimodal data collection methodology, ACE demonstrates exceptional performance in multi-task joint training, encompassing a wide range of tasks from text-guided generation to iterative image editing. Experimental results indicate that ACE significantly outperforms existing methods across multiple benchmark tests, showcasing its robust potential for practical applications.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"ACE introduces LCU, a novel approach that unifies various modal conditions, enabling the model to handle complex multimodal tasks. LCU allows ACE to flexibly adapt to different tasks, including generation and editing, which is lacking in current models. By integrating historical information into LCU, ACE can handle multi-turn editing tasks, enhancing its practicality in continuous interaction scenarios. ACE covers eight basic generation tasks and supports multi-turn and long-context tasks, establishing a comprehensive evaluation benchmark, significantly outperforming existing methods, especially in image editing tasks. User studies show that ACE is more in line with human perception. This paper not only makes significant contributions and proposes a practical and innovative solution but also excels in writing and figure drawing, with clear diagrams and rigorous logic, providing an excellent reading experience for the audience.\", \"weaknesses\": \"Model Efficiency and Scalability:\", \"the_paper_should_include_a_more_detailed_discussion_on_the_computational_efficiency_and_scalability_of_the_model\": \"It is important to evaluate the model's performance when processing large-scale data to understand its practical applicability.\", \"in_depth_analysis_of_specific_tasks\": \"For key tasks, could you offer a detailed comparison with state-of-the-art models specifically designed for those tasks? This would provide a clearer picture of the model's relative performance.\", \"data_annotation_quality\": \"While MLLM-assisted annotation improves efficiency, the quality of automatic annotations may not always be on par with manual annotations.\\nA quantitative analysis of the data annotation quality would enhance the credibility of the paper.\", \"questions\": \"Discussion on Model Efficiency and Scalability: Could you provide more details on the model's performance across different scales of data? This would help in understanding its computational efficiency and scalability.\", \"enhancing_model_interpretability\": \"Could you explore the decision-making process of the model and provide an interpretability analysis of the generated results? This would help in understanding how the model arrives at its outputs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer bd3V,\\n\\nThank you for acknowledging the proposed method and experiments. We address your concerns as follows:\\n\\n---\\n\\n**Q1: Architectural Design**\\n\\nThank you for your suggestions. We have outlined the key architectural design in the *\\\"Supplementary Material, Section ARCHITECTURE DESIGN\\\"*.\", \"our_model_is_anchored_by_two_principal_components\": \"the Long-context Attention Block and Image Indicator Embeddings. 
The Long-context Attention Block is specifically designed to handle input image sequences of varying lengths, addressing the limitations of the conventional attention block employed in DiT, which cannot effectively manage sequences of disparate lengths. Meanwhile, the Image Indicator Embeddings facilitate the alignment between images in the sequence and their corresponding mentions within the text prompt. A qualitative analysis of these components is also provided in the ARCHITECTURE DESIGN Section.\\n\\n---\\n\\n**Q2: Results in Tab. 2**\\n\\nUnlike previous methods that primarily focus on limited editing tasks, our approach is designed as a foundational editing model capable of addressing a broad spectrum of editing tasks. Consequently, leveraging large-scale data is a logical choice to enhance our model's performance. Moreover, most existing edit models are derived from foundational text-to-image generation frameworks, which have been trained on extensive data. Therefore, conducting a fair comparison under equivalent training data conditions is not feasible.\\n\\n---\\n\\n**Q3: Replacement of Text Encoders**\\n\\nIn the *\\\"Supplementary Material, Section DISCUSSION, SubSection Future Work\\\"*, we also mentioned that introducing LLMs or MLLMs could potentially help us better understand user intentions within general instructions.\\n\\nRegarding the design of text encoders, there are currently two design approaches. One is based on diffusion models specifically designed for text-to-image tasks, such as the MagicBrush and CosXL methods that utilize models from the SD series; these models primarily rely on CLIP or T5 for text encoding. The other approach is based on multimodal models, such as Llama and Phi, which are pre-trained to acquire generative and editing capabilities, like Seed-X, Seed-X Edit methods. The former naturally has the capacity to generate high-quality images, while the latter exhibits superior semantic understanding. Our future exploration will focus on how to combine and enhance the strengths of both approaches.\\n\\n---\\n\\n**Q4: Public Content**\\n\\nWe will make the model, training code, inference code, chatbot, and evaluation benchmark publicly available. However, due to organizational policies, we are unable to disclose the training data.\\n\\n---\"}", "{\"comment\": \"Dear Reviewer vndP,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. \\n\\nWe have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns. Please let us know if you have any remaining questions or require additional clarification.\\n\\nBest regards,\\n\\nAuthors of Submission 8711\"}", "{\"comment\": \"Dear Reviewer 7X1X,\\n\\nThank you for acknowledging the contributions and presentations. We address your concerns as follows:\\n\\n---\\n\\n**Q1: Discussion on Model Efficiency and Scalability**\\n\\nWe incorporate more information about computational efficiency during training and inference in the *\\\"Supplementary Material, Section IMPLEMENTATION DETAILS\\\"*. \\n\\nIn general, the training and inference efficiency is related to visual sequence length and input image number (*\\\"SubSection Computational Efficiency\\\"*). 
This is the reason why we training the model with a multi-stage training strategy, which is training with a small visual sequence length and fewer input images at first and increasing the length and the number of input images in the following stages. (see in Fig.28-a)\\n\\nWe further conduct evaluations of our intermediate model checkpoints on the MagicBrush benchmark to evaluate the impact of data scale in the *\\\"SubSection Checkpoints Evaluation\\\"*. Generally, the model\\u2019s performance improves when trained with more data. (see in Fig.28-b)\\n\\n---\\n\\n**Q2: In-depth Analysis of Specific Tasks**\", \"we_conduct_detailed_quantitative_comparisons_with_specifically_designed_state_of_the_art_methods_for_key_tasks\": \"facial editing and local text rendering, please refer to *\\\"Supplementary Material, Section MORE EXPERIMENTS, SubSection Facial Editing and Local Text Render\\\"* for the details. More qualitative comparisons for inpainting and controllable generation are added to *\\\"Supplementary Material, Section MORE EXPERIMENTS, SubSection More Qualitative Comparison\\\"*.\\n\\n---\\n\\n**Q3: Data Annotation Quality**\\n\\nAs shown in Fig. 4, we used Qwen-VL for the initial construction of instructions and obtained more accurate instruction descriptions through manual annotations. We then utilized this high-quality data for training the InternVL model. It is worth noting that the entire process is conducted in an iterative update manner, allowing us to continually refine our annotated data and iteratively enhance the performance of the fine-tuned InternVL. Furthermore, based on our sampling evaluation, the accuracy of the model's instruction annotations has reached over 92 %, which is sufficient for use as training pair data. For further improvement in annotation accuracy, it is necessary to address the deficiencies in detailed image descriptions within the multimodal model itself, which is also a challenge faced by current pure image labeling models.\\n\\n--- \\n\\n**Q4: Enhancing Model Interpretability**\\n\\nIn *\\\"Supplementary Material, Section ARCHITECTURE DESIGN\\\"*, we outline our key architectural design: Long-context Attention Block and Image Indicator Embeddings, as well as a qualitative analysis of these components. This may partially explain the model's functionality. We also visualize the editing process by decoding intermediate model outputs during the de-noising process and try to explain the model's behavior at each step in *\\\"Supplementary Material, Section IMPLEMENTATION DETAILS, SubSection Visualization of Editing Process.\\\"*. When we use the instruction \\u201cLet a car appear in {image}.\\u201d to edit the image, the model identifies the area to be edited in the initial steps and subsequently copies the unchanged regions from the input image in the following steps. In the steps leading up to the final stage, additional details are incrementally added to the edited area until completion. \\n\\n---\"}", "{\"title\": \"Reviewer feedback and discussion\", \"comment\": \"Dear Reviewers,\\n\\nAs the discussion period will end next week, please take some time to read the authors' rebuttal and provide feedback as soon as possible. For reviewers vndP, bd3V, and 7X1X, did the author address your concerns, and do you have further questions?\\n\\nThanks,\\n\\nArea Chair\"}" ] }
BpfsxFqhGa
Animate Your Thoughts: Reconstruction of Dynamic Natural Vision from Human Brain Activity
[ "Yizhuo Lu", "Changde Du", "Chong Wang", "Xuanliu Zhu", "Liuyun Jiang", "Xujin Li", "Huiguang He" ]
Reconstructing human dynamic vision from brain activity is a challenging task with great scientific significance. Although prior video reconstruction methods have made substantial progress, they still suffer from several limitations, including: (1) difficulty in simultaneously reconciling semantic (e.g. categorical descriptions), structure (e.g. size and color), and consistent motion information (e.g. order of frames); (2) low temporal resolution of fMRI, which poses a challenge in decoding multiple frames of video dynamics from a single fMRI frame; (3) reliance on video generation models, which introduces ambiguity regarding whether the dynamics observed in the reconstructed videos are genuinely derived from fMRI data or are hallucinations from generative model. To overcome these limitations, we propose a two-stage model named Mind-Animator. During the fMRI-to-feature stage, we decouple semantic, structure, and motion features from fMRI. Specifically, we employ fMRI-vision-language tri-modal contrastive learning to decode semantic feature from fMRI and design a sparse causal attention mechanism for decoding multi-frame video motion features through a next-frame-prediction task. In the feature-to-video stage, these features are integrated into videos using an inflated Stable Diffusion, effectively eliminating external video data interference. Extensive experiments on multiple video-fMRI datasets demonstrate that our model achieves state-of-the-art performance. Comprehensive visualization analyses further elucidate the interpretability of our model from a neurobiological perspective. Project page: https://mind-animator-design.github.io/.
[ "Video reconstruction", "Brain-computer Interface (BCI)." ]
Accept (Poster)
https://openreview.net/pdf?id=BpfsxFqhGa
https://openreview.net/forum?id=BpfsxFqhGa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y5vKtP7GB7", "rEyeuFBOyz", "r9x6LygkJI", "qizWEA14XI", "oYNl5XLAtv", "ndJzdvRhyu", "ij4ki21cpz", "i9nhZ2KIc7", "gPao9WhBct", "fzUg0lK7t2", "bJGfquucCi", "bEVlgHX67W", "ZknFoq3TRE", "ZIkS8k3iOI", "XyDpElbssP", "XVyMTBjopu", "WsV5DtUURj", "WoMVKJ0s5S", "TuKhN8UIlU", "RBd41DfT4e", "Qcw0PTq1lO", "OhUTLWy2UB", "NLhwkJ3VKg", "MtyWar4zGK", "MA5o1kICWS", "Iq5Jxdzmsm", "IXN41d8lxx", "FSjosXu3yB", "DcKZ73TZRG", "CiYkB4Mhbs", "CJmzjVSdHZ", "9QgNDGJVCx", "3KSyL3T9vW", "1N6yJYR8E9", "1Gzc45ODzb", "0MJMcTdEiu", "07bWrlwviO" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732007233096, 1732507681533, 1732163647850, 1732462789235, 1732426236285, 1732782757565, 1732101182077, 1732781099033, 1733191919617, 1732371624230, 1732008129082, 1732175271153, 1732557605646, 1732664211603, 1732781610809, 1734669561537, 1732006593769, 1737523581058, 1732159594434, 1732781705502, 1732508059127, 1732780801160, 1729693413939, 1732009471363, 1730625046253, 1732005754288, 1732665910733, 1733179659934, 1732781011656, 1729810204238, 1732346965309, 1732469632504, 1732592094841, 1730627204385, 1732097369575, 1732159952466, 1732157343027 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_fFuu" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_fFuu" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_1xQ5" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_5KvP" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Area_Chair_VmHG" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_fFuu" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_GXPV" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_5KvP" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_5KvP" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_5KvP" ], [ 
"ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_1xQ5" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Reviewer_1xQ5" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ], [ "ICLR.cc/2025/Conference/Submission3520/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 1xQ5 (cont.)\", \"comment\": \"# Soundness:\\n## 1. Further ablation studies on the CMG module are needed to confirm that the motion information originates from the fMRI rather than the training videos.\\n## Response:\\nThank you for your constructive suggestion. Our team has also been considering whether more direct evidence could demonstrate that the motion information originates from the fMRI rather than the training videos. Following your advice, we removed the fMRI guidance during the training of the CMG module by replacing the cross-attention in the Spatial module with self-attention, while keeping the rest of the architecture and hyperparameters unchanged. Due to time and computational resource constraints, we conducted experiments only on sub1 and sub2 of the CC2017 dataset. **The results for sub1 with t-test have been incorporated into Table 6 of the main text, while the training and validation loss curves, as well as the results for sub2, are provided in Appendix E.5.**\\n\\nBased on the experimental results, we draw the following observations and conclusions:\\n\\n- From the training and validation loss curves, it is evident that removing fMRI guidance reduces the generalization ability of the CMG module.\\n\\n- As shown in Table 6 and Table 12, removing fMRI guidance significantly affects the decoding of motion information, as evidenced by a substantial decrease in CLIP-pcc and a significant increase in EPE. This confirms that the motion information originates from the fMRI rather than the training videos.\\n\\n- Removing fMRI guidance has minimal impact on some semantic- and structural-level metrics, and even leads to a significant increase in PSNR, a measure of video signal-to-noise ratio. We hypothesize that this is because the inherently low signal-to-noise ratio of fMRI, while introducing motion information to video reconstruction, also introduces adverse effects such as reducing the generation quality of Stable Diffusion. This presents a challenge worth addressing in future research.\\n\\n## 2.Issues and suggestions with the analysis in 6.1.\\n### (1) The labeling error on the y-axis\\n### Response:\\n\\nWe apologize for the labeling error on the y-axis, which may have caused confusion in your reading. First, we would like to clarify that the values in the bar chart did not exceed 1.0. The y-axis scale exceeding 1.0 occurred because, when using matplotlib to plot the graph, we set the legend to appear in the upper left corner, which caused the y-axis to be automatically extended. We have corrected this issue in the latest version of the figure.\\n\\n### (2) Why the authors are examining the structural metrics for the shuffle test instead of solely the spatio-temporal metrics?\\n### Response:\\nWe believe that shuffling the video frames does not affect the semantic-level metrics. Therefore, our original plan was to measure the shuffle test results for all structure-level and ST-level metrics. 
However, during the shuffle test experiments, our Table 2 only included 7 evaluation metrics: 3 semantic-level metrics, 3 structure-level metrics, and CLIP-pcc as the only ST-level metric. Towards the submission deadline, following our advisor's suggestion, we added the EPE evaluation results to Tables 2 and 6, but we forgot to include the EPE results in the shuffle test. Based on your suggestion, we have now removed the structure-level metrics and retained only the ST-level metrics (CLIP-pcc and EPE) in the latest version of the paper.\\n\\n### (3) Using a better baseline to compare against\\n### Response:\\nThank you for your valuable suggestion. However, we believe that using the noise ceiling as a baseline is more reliable than directly using the ground truth. Specifically, we input the semantic feature $c$ and motion feature $z_{1:8}$ from the test set directly into Inflated Stable Diffusion and use the generated results as the noise ceiling for video reconstruction. We conducted a shuffle test on the noise ceiling for both CLIP-pcc and EPE, and the estimated p-values were 0.09 \\u00b1 0.01 and 0.005 \\u00b1 0.004, respectively, showing that:\\n\\n- Even for the noise ceiling reconstruction results, the p-value from the shuffle test on CLIP-pcc is significantly greater than 0.05. We believe this is related to the calculation method of CLIP-pcc, which measures semantic similarity between adjacent frames, focusing more on frame-to-frame consistency rather than the order of all frames. Therefore, this metric is not sensitive to shuffling video frames. Nonetheless, even in this case, the p-value with CMG is still significantly smaller than without CMG, indicating that our CMG can capture correlations between frames.\\n\\n- The p-value for the noise ceiling reconstruction results in EPE is significantly smaller than 0.05. This is because EPE calculates the distance between the reconstructed result and the ground truth optical flow trajectories, considering the order of all frames. Therefore, EPE serves as a better metric for evaluating motion decoding capability.\"}", "{\"title\": \"Thanks for your suggestions!\", \"comment\": \"We sincerely appreciate the time you spent reviewing our responses and the revised manuscript, as well as your valuable feedback.\\n\\nIn response to your comment regarding \\\"the complete removal of the EV results,\\\" we have made the following improvements: Due to space limitations, we have moved the results of fine-tuning Stable Diffusion on the video dataset used in our model to Appendix E 6.1, Table 14, and referenced it in the main text's Table 2. Additionally, regarding the naming issue, we have made the necessary corrections in both Table 2 and Table 14. In Table 2, we use the symbol $\\\\dagger$ for Mind-video to denote \\\"using Stable Diffusion fine-tuned on video data.\\\" In Table 14, we have removed \\\"Ours\\\" as per your suggestion and replaced it with \\\"SD-video-finetuning\\\".\\n\\nIn regard to the comparison between \\\"w/o Motion\\\" and \\\"w/o fMRI guidance,\\\" we have added the following explanation in lines 431\\u2013436 of Section 5.2:\\n\\n\\\"Meanwhile, comparing the removal of the whole CMG module (w/o Motion) with the removal of fMRI guidance from the CMG (w/o fMRI guidance), it is observed that the latter contributes to the majority of the impact of the former on ST-level metrics. 
Specifically, in the CLIP-pcc metric, 86% of the decrease observed in the w/o Motion scenario can be attributed to the absence of fMRI guidance, while in the EPE metric, 90% of the decrease is due to the removal of fMRI guidance. This further emphasizes the critical role of fMRI guidance in decoding accurate motion information from brain signals.\\\"\\n\\nAll the points mentioned above have been addressed in the manuscript, and the updated PDF has been provided for your review. We greatly appreciate your valuable feedback and look forward to your response and further discussion.\"}", "{\"title\": \"Response to Reviewer 5KvP (cont.)\", \"comment\": \"## 8. Clarification on the role of MT in semantic processing is needed to explain why MT shows significant activation for semantics.\\n## Response:\\nThank you for pointing out this important issue. After reviewing relevant literature in the field of neuroscience, we have found the following reasonable explanation:\\n\\nAlthough the dorsal and ventral streams clearly make up two relatively separate circuits, the anatomical segregation between the two streams is by no means absolute. Recently, the dorsal stream was shown to be divided into two functional streams in primates to mediate different behavioural goals: **the dorsal-dorsal and ventral-dorsal streams [1] .** The dorsal-dorsal pathway concerned with the control of action and the ventral-dorsal pathway concerned with action understanding (the recognition and understanding of actions) [2] [3] [5] [6] . Our finding aligns with the latter.\\n\\nThe MT area may be activated when the brain processes motion dynamics related to objects or actions in a stimulus video, aiding in the perception of motion patterns critical for interpreting the video's semantics, particularly those related to actions and relationships [2] . \\nThus, although the MT area is not directly responsible for semantic processing, it plays a crucial role in handling motion information related to the scene, contributing to the understanding of the video's semantic content [4] . **This differentiates it from how the brain processes static image information.**\\n\\nWe have provided further clarification on this phenomenon in Lines 522-527 of the manuscript. **Additionally, we included relevant neuroscience knowledge in Appendix G to assist readers in understanding the context.**\", \"references\": \"[1] David J Ingle, Melvyn A Goodale, Richard JW Mansfield, et al. Analysis of visual behavior. Mit Press Cambridge, MA, 1982.\\n\\n[2] Jonathan J Nassi and Edward M Callaway. Parallel processing strategies of the primate visual system. Nature reviews neuroscience, 10(5):360\\u2013372, 2009.\\n\\n[3] Giacomo Rizzolatti and Massimo Matelli. Two different streams form the dorsal visual system: anatomy and functions. Experimental brain research, 153:146\\u2013157, 2003.\\n\\n[4] JH Maunsell and DAVID C van Essen. The connections of the middle temporal visual area (mt) and their relationship to a cortical hierarchy in the macaque monkey. Journal of Neuroscience, 3(12): 2563\\u20132586, 1983.\\n\\n[5] Gene J Blatt, Richard A Andersen, and Gene R Stoner. Visual receptive field organization and cortico-cortical connections of the lateral intraparietal area (area lip) in the macaque. Journal of Comparative Neurology, 299(4):421\\u2013445, 1990.\\n\\n[6] Richard A Andersen, C Asanuma, G Essick, and RM Siegel. Corticocortical connections of anatomically and physiologically defined subdivisions within the inferior parietal lobule. 
Journal of Comparative Neurology, 296(1):65\\u2013113, 1990.\\n\\n\\nThank you again for your valuable suggestions to improving our work. We believe that, under your review, our manuscript will be significantly improved in terms of clarity and experimental design. We look forward to your feedback and further discussions.\"}", "{\"comment\": \"Dear Authors,\\n\\nThanks for your detailed response and additional experiments to improve your work. It is greatly appreciated.\\n\\nBests,\\n\\nReviewer fFuu\"}", "{\"title\": \"Response to Reviewer fFuu\", \"comment\": \"Thank you for your response. We have incorporated the results of the ablation experiments into Appendix E. 3 (page 27) of the revised manuscript.\"}", "{\"title\": \"Supplementary Results on Statistical Assessments\", \"comment\": \"The results presented in Tables 2, 3, and 4 were obtained by first averaging across subjects within each dataset, followed by significance testing on the averaged results. Following the suggestion of Reviewer 5KvP, we have additionally conducted per-subject significance testing, as shown in Tables 15, 16, and 17. These updates have been incorporated into the manuscript and uploaded.\"}", "{\"title\": \"Response to Reviewer GXPV (cont.)\", \"comment\": \"## 5. Include more baseline comparisons and position the retrieval results more prominently within the paper.\\n\\n## Response:\\n\\nThank you for your valuable feedback. We have added \\\"Wen18\\\" [1], \\\"Kupershmidt22\\\" [2] and \\\"Mind-video\\\" [3] as comparison methods in Table 5 and moved the table to Section 5.1 of the main text. Additionally, we have highlighted the description and interpretation of the table in red for clarity.\\n\\n| Model | Test set | **Subject 1** | | **Subject 2** | | **Subject 3** | | **Average** | |\\n|---------------|----------|---------------|-----------|---------------|-----------|---------------|-----------|--------------|-----------|\\n| | | top-10 | top-100 | top-10 | top-100 | top-10 | top-100 | top-10 | top-100 |\\n| Wen [1] | Small | 2.17* | 19.50* | 3.33* | 19.17* | \\u2014\\u2014 | \\u2014\\u2014 | 2.75* | 19.33* |\\n| Kupershmidt [2] | Small | 1.09* | 8.57* | 0.92* | 8.24* | 0.84* | 8.24* | 0.95* | 8.35* |\\n| Mind-video [3] | Small | **3.22*** | 19.08* | 2.75* | 16.83* | 3.58* | 22.08* | 3.18* | 19.33* |\\n| **Ours** | Small | 3.08 | **22.58** | **4.75** | **26.90** | **4.50** | **24.67** | **4.11** | **24.72** |\\n| Wen [1] | Large | 1.41* | 11.58* | 2.08* | 9.58* | \\u2014\\u2014 | \\u2014\\u2014 | 1.75* | 10.58* |\\n| Kupershmidt [2] | Large | 0.17* | 2.94* | 0.17* | 2.77* | 0.25* | 2.18* | 0.19* | 2.63* |\\n| Mind-video [3] | Large | 1.75* | 7.17* | 0.83* | 5.17* | 1.25* | 9.00* | 1.28* | 7.11* |\\n| **Ours** | Large | **2.17** | **12.50** | **2.25** | **17.00** | **2.75** | **16.42** | **2.39** | **15.31** |\\n\\n*Note: For the 'small test set', the chance-level accuracies for top-10 and top-100 accuracy are 0.83% and 8.3%, respectively. For the 'large test set', the chance-level accuracies for top-10 and top-100 accuracy are 0.24% and 2.4%, respectively. The metrics are evaluated using 100 bootstrap trials. * denotes our performance is significantly better than the compared method (Wilcoxon test for paired samples, p<0.05).*\\n\\n\\nIt is worth noting that the evaluation metrics used in \\\"Wen18\\\" and \\\"Kupershmidt22\\\" are simpler compared to ours. 
Specifically, \\\"Wen18\\\" reported classification results in their paper, where video stimuli from the CC2017 dataset were divided into 15 categories, achieving a top-10 accuracy with a chance level of 66.7%. \\\"Kupershmidt22\\\" on the other hand, employed a 100-way Identification Test, which measures whether the corresponding video can be retrieved from a pool of 100 videos, with a top-10 accuracy chance level of 10%. In contrast, our study requires retrieving the corresponding video from 1,200 video clips (small set) and an expanded set of 4,240 video clips (Large set), with top-10 accuracy chance levels of 0.83% and 0.24%, respectively. Thus, our evaluation is more challenging and better reflects the reconstruction performance of the model.\\n\\nWe recalculated the evaluation metrics for \\\"Kupershmidt22\\\", \\\"Wen18\\\" and \\\"Mind-video\\\" using their reconstruction results on the CC2017 dataset, as recorded in Table 5. Statistical significance tests across three subjects demonstrate that our model significantly outperforms the comparison methods.\\n\\nReferences\\uff1a\\n\\n[1] Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, and Zhongming Liu. Neural encoding and decoding with deep learning for dynamic natural vision. Cerebral cortex, 28(12): 4136\\u20134160, 2018.\\n\\n[2] Ganit Kupershmidt, Roman Beliy, Guy Gaziv, and Michal Irani. A penny for your (visual)\", \"thoughts\": \"Self-supervised reconstruction of natural movies from brain activity. arXiv preprint\", \"arxiv\": \"2206.03544, 2022.\\n\\n[3] Zijiao Chen, Jiaxin Qing, and Juan Helen Zhou. Cinematic mindscapes: High-quality video reconstruction from brain activity. Advances in Neural Information Processing Systems, 36, 2024.\\n\\nThank you again for your valuable suggestions to improving our work. We believe that, under your review, our manuscript will be significantly improved in terms of clarity and experimental design. 
We look forward to your feedback and further discussions.\"}", "{\"title\": \"Supplementary Results and Explanations on Statistical Assessments (3/4)\", \"comment\": \"| Sub ID | Models | Semantic-level \\u2191 | | | Pixel-level \\u2191 | | | ST-level | | |\\n|----------|-------------|--------------------|--------------------|--------------------|----------------|----------------|----------------|------------------|------------------|------------------|\\n| | | 2-way-I | 2-way-V | VIFI-score | SSIM | PSNR | Hue-pcc | CLIP-pcc \\u2191 | EPE \\u2193 |\\n| sub 01 | Mind-video | 0.702*** | 0.761*** | 0.568*** | 0.135*** | 8.642*** | 0.794*** | 0.277*** | 8.368*** |\\n| | Ours | **0.722** | **0.790** | **0.599** | **0.401** | **10.088** | **0.824** | **0.439** | **4.420** |\\n| sub 02 | Mind-video | 0.698*** | **0.769** | 0.573*** | 0.132*** | 9.004*** | 0.773*** | 0.265*** | 7.458*** |\\n| | Ours | **0.734** | 0.765 | **0.596** | **0.465** | **10.932** | **0.796** | **0.425** | **3.806** |\\n| sub 03 | Mind-video | **0.701***** | 0.729*** | 0.564*** | 0.117*** | 8.796*** | 0.806*** | 0.271*** | 7.659*** |\\n| | Ours | 0.679 | **0.794** | **0.591** | **0.466** | **11.089** | **0.863** | **0.397** | **3.406** |\\n| sub 04 | Mind-video | 0.665*** | 0.785*** | 0.556*** | 0.126*** | 8.439*** | 0.811*** | 0.254*** | 8.011*** |\\n| | Ours | **0.673** | **0.810** | **0.587** | **0.479** | **11.410** | **0.848** | **0.381** | **3.089** |\\n| sub 05 | Mind-video | 0.664*** | 0.757*** | 0.529*** | 0.140*** | 8.597*** | 0.792** | 0.263*** | 8.124*** |\\n| | Ours | **0.689** | **0.810** | **0.592** | **0.458** | **10.814** | **0.807** | **0.406** | **3.237** |\\n| sub 06 | Mind-video | 0.690* | 0.751*** | 0.549*** | 0.137*** | 9.011*** | 0.795*** | 0.266*** | 7.431*** |\\n| | Ours | **0.709** | **0.783** | **0.597** | **0.489** | **11.337** | **0.834** | **0.446** | **3.399** |\\n| sub 07 | Mind-video | **0.687** | 0.721*** | 0.574* | 0.109*** | 8.409*** | 0.783*** | 0.209*** | 7.652*** |\\n| | Ours | 0.681 | **0.802** | **0.578** | **0.458** | **10.889** | **0.857** | **0.329** | **3.845** |\\n| sub 08 | Mind-video | 0.658*** | 0.764*** | 0.590 | 0.114*** | 8.251*** | 0.817 | 0.204*** | 6.597*** |\\n| | Ours | **0.709** | **0.802** | **0.592** | **0.467** | **10.893** | **0.820** | **0.376** | **3.757** |\\n| sub 09 | Mind-video | 0.679*** | 0.780* | **0.609**** | 0.117*** | 8.673*** | 0.784*** | 0.267*** | 8.102*** |\\n| | Ours | **0.731** | **0.788** | 0.594 | **0.502** | **11.310** | **0.820** | **0.400** | **3.551** |\\n| sub 10 | Mind-video | 0.663*** | 0.770* | 0.563*** | 0.108*** | 8.912*** | 0.809*** | 0.185*** | 7.524*** |\\n| | Ours | **0.684** | **0.777** | **0.590** | **0.465** | **11.128** | **0.858** | **0.408** | **3.533** |\\n\\n*Quantitative comparison of reconstruction results across ten subjects from the **Algonauts2021 dataset**. For the 2-way-I and 2-way-V metrics, 100 repetitions were conducted, while other metrics were evaluated using 100 bootstrap trials. All metrics are averaged over the entire test set. The superior results are highlighted in bold. Asterisks indicate statistical significance (Wilcoxon test for paired samples) compared to our model. p<0.0001(\\\\*\\\\*\\\\*), p<0.01(\\\\*\\\\*), p<0.05(\\\\*).*\"}", "{\"title\": \"Thanks for your recognition!\", \"comment\": \"Thank you for taking the time to review our rebuttal. We are pleased that your concerns have been addressed, and we are honored to have received your support and recognition. 
We believe that, under your guidance, our manuscript has reached a higher level in terms of content clarity and experimental rigor.\"}", "{\"comment\": \"Thanks for your ablation experiments, please include them in your revision.\\n\\nBests.\"}", "{\"title\": \"Response to Reviewer 1xQ5 (cont.)\", \"comment\": \"### (4) The p-values are very high and much higher than 0.05\\n### Response:\\n\\nFollowing your suggestion, we have removed the structure-level evaluation metrics and retained only CLIP-pcc and EPE. The p-value estimated from the shuffle test for EPE is significantly smaller than 0.05, indicating that we have indeed decoded some motion information from fMRI.\\n\\n### (5) Why the results are vastly different across the 3 subjects ?\\n### Response:\\n\\nIn the revised figure, the estimated p-values of shuffle test for the EPE metric across all three subjects are significantly smaller than 0.05, showing consistent results. However, there are vast differences in the p-values for CLIP-pcc and the previously tested structure-level metrics between sub1, sub2, and sub3. We believe the following factors may explain these variations:\\n\\n- Due to individual differences in brain structure and function, even when subjects watch the same stimulus video, their brain responses can differ significantly [1].\\n\\n- According to Kupershmidt et al.'s paper, 'A Penny for Your (Visual) Thoughts: Self-Supervised Reconstruction of Natural Movies from Brain Activity' [2] , in the final paragraph of the Appendix, they calculated the signal-to-noise ratio (SNR) of the fMRI data from the three subjects in the CC2017 dataset. The results were $SNR_{sub1} = 1.16$, $SNR_{sub2} = 0.96$, and $SNR_{sub3} = 0.63$. Therefore, the substantial differences observed across subjects in various metrics may also be attributed to noise in the fMRI data.\\n\\n## 3. In Section 5.2, why the structure metric Hue-pcc is increasing significantly when the structure module is removed ?\\n## Response:\\nThank you for pointing out this issue. \\nFirstly, as noted in Table 6, removing the structure decoder leads to a significant degradation in 7 out of 8 metrics. Overall, we believe this result is acceptable when considered comprehensively. \\n\\nTo address your concern regarding the notable increase in Hue-PCC, we first explain the extraction process of structure features. In this study, structure features are obtained by extracting the first-frame features of videos using a VQ-VAE model pre-trained on a natural image dataset. Since VQ-VAE is not explicitly trained to disentangle and preserve color information in its latent space, reconstructing hue information from fMRI data is inherently challenging. To overcome this, prior research [3] has proposed explicitly extracting image color information using spatial color palettes, achieving promising results. However, this study focuses on a different goal: decoding motion information from fMRI. Thus, we did not specifically emphasize hue recovery, which will be considered as a potential direction for future improvement.\\n\\nWe have added a discussion on the anomalous changes in this metric at the corresponding position in Section 5.2 of the main text.\", \"references\": \"[1] Haxby J V, Guntupalli J S, Connolly A C, et al. A common, high-dimensional model of the representational space in human ventral temporal cortex[J]. Neuron, 2011, 72(2): 404-416.\\n\\n[2] Ganit Kupershmidt, Roman Beliy, Guy Gaziv, and Michal Irani. 
A penny for your (visual) thoughts: Self-supervised reconstruction of natural movies from brain activity. arXiv preprint arXiv:2206.03544, 2022.\\n\\n[3] Xia W, de Charette R, Oztireli C, et al. Dream: Visual decoding from reversing human visual system[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024: 8226-8235.\"}", "{\"title\": \"Response to Reviewer fFuu\", \"comment\": \"We thank you for the strong support and the positive comments on our work. Your inspiring questions and comments are valuable for our future work. We have carefully revised the manuscript in accordance with your suggestions, **with the changes highlighted in purple**, and have submitted the updated version. Our point-by-point responses are as follows.\\n\\n## 1. How did the author chose\\u00a0$\\\\lambda_1$\\u00a0and\\u00a0$\\\\lambda_2$ in equation 4? \\n## Response:\\n\\nAs shown in Figure 12 of Appendix E3, setting either $\\\\lambda_1$ or $\\\\lambda_2$ to 0 prevents the semantic decoder from converging during training. Therefore, $\\\\lambda_1$\\u00a0and\\u00a0$\\\\lambda_2$ were empirically determined to balance the magnitudes of $L_{Semantic}$ and $L_{Projection}$ as much as possible during the early stages of training.\\n\\n## 2. Can the author provide the ablation study/ discussion on how these choices affect the downstream performance?\\n## Response:\\n\\nWe conducted a sensitivity analysis on the selection of $\\\\lambda_1$ and $\\\\lambda_2$ using sub1 from the CC2017 dataset. As shown in Figure 12, when either $\\\\lambda_1$ or $\\\\lambda_2$ is set to 0, the semantic decider fails to converge, indicating that both the contrastive loss and projection loss play crucial roles in decoding semantic information. In Table 9, we set $\\\\lambda_1 = 0.01$ and $\\\\lambda_2 = 0.5$ (Ours) to ensure that both loss terms are balanced during optimization. We then introduced small perturbations to $\\\\lambda_1$ and $\\\\lambda_2$ to observe how they affect downstream performance. \\n\\n\\n| Model | 2-way-I | 2-way-V | VIFI-score | SSIM | PSNR | Hue-pcc | CLIP-pcc\\u2191 | EPE\\u2193 |\\n|----------------------------------|---------|---------|------------|-------|--------|---------|-----------|-------|\\n| $\\\\lambda_1 = 0.01$, $\\\\lambda_2 = 0.25$ | 0.765 * | 0.825 * | 0.581 * | 0.318 | **10.199** * | **0.780** | 0.396 * | 6.025 *|\\n| $\\\\lambda_1 = 0.005$, $\\\\lambda_2 = 0.5$ | 0.786 * | 0.824 * | 0.591 * | 0.320 | 9.109 | 0.776 | 0.407 | 5.898 *|\\n| $\\\\lambda_1 = 0.01$, $\\\\lambda_2 = 0.5$ (Ours) | **0.812** | **0.839** | **0.604** | **0.319** | 9.116 | 0.778 | **0.413** | **5.572** |\\n\\n*Note: * denotes our performance is significantly better than the compared method (paired t-test, p<0.05).*\\n\\n\\nFrom the table above, it can be observed that adjusting either $\\\\lambda_1$ or $\\\\lambda_2$ and disrupting this balance does not affect the structural-level metrics of the reconstruction results but does influence the semantic and spatiotemporal-level metrics significantly.\\n\\n\\nThank you again for your valuable suggestions to improving our work. We believe that, under your review, our manuscript will be significantly improved in terms of clarity and experimental design. 
We look forward to your feedback and further discussions.\"}", "{\"title\": \"Score update\", \"comment\": \"Thank you for these additional revisions, much appreciated!\\n\\nThe score has been updated to 8, as the concrete improvements with respect to Soundness and Presentation reinforce this paper's contributions, and make it a good paper to this reviewer's opinion (see also updated comment in Questions part of the original review).\"}", "{\"comment\": \"Thanks for incorporating the statistical assessments.\\nI feel only partially satisfied by this analysis and related changes, as follows:\\n1) Reading data directly from the table, I find it very difficult to map it to your conclusions (e.g., outperforming in 6/8 or 3/4 metrics) given the presentation style, which highlights significance even when it supports your method being actually significantly inferior.\\n2) You used t-test though I am not sure your data is Gaussian. Perhaps consider Mann-Whitney or Wilcoxon.\\n3) I am missing simple details on how you created the distributions used to compute significance (Algorithm 2 looks too cryptic to me). Related, it says all metrics are averaged across subjects in Table 2 - why is this the case? Is significance assessed on such pooled data as well? Method assessment should be made per-subject as these are subject-specific analyses.\\n4) Focus of your work not being on semantics. I believe this works aims simultaneously reconstruct of visual structure, semantics, and motion patterns. Not trade one aspect with another.\\n5) pretraining + fine-tuning likely contributing to Chen et al. (2024) superiority in some cases. That's a conjecture that has to be tested and properly compared apples-to-apples. This cannot, by itself, exempt this newly proposed method from proving its superiority.\\n6) New challenging retrieval tasks. Are there missing asterisks on this table, and if not, would it be fair to say there are no significant gains in favor of the proposed method at the _per-subject_ level? The average analysis appears irrelevant given that the focus of the entire paper and analyses is subject-specific. \\n7) Not outperforming Chen et al. (2024) on the 2-Way-I metric is acceptable. The metrics should either be taken as valid to demonstrate or refute gains or not be used at all. Using them to evaluate your method, and then deeming them invalid post-hoc doesn't make sense to me.\", \"title\": \"Statistical assessments\"}", "{\"title\": \"Supplementary Results and Explanations on Statistical Assessments (4/4)\", \"comment\": \"## 4. The hypothesis testing results for the three subjects on the Retrieval task.\\n## Response:\\nWe apologize for providing only the hypothesis testing results averaged across the three subjects in our previous rebuttal, which may have caused some misunderstanding. We greatly appreciate you pointing out this issue. 
We have now added the individual subject hypothesis testing results for the Retrieval task.\\n\\n| Model | Test set | **Subject 1** | | **Subject 2** | | **Subject 3** | | **Average** | |\\n|---------------|----------|---------------|-----------|---------------|-----------|---------------|-----------|--------------|-----------|\\n| | | top-10 | top-100 | top-10 | top-100 | top-10 | top-100 | top-10 | top-100 |\\n| Wen [1] | Small | 2.17* | 19.50* | 3.33* | 19.17* | \\u2014\\u2014 | \\u2014\\u2014 | 2.75* | 19.33* |\\n| Kupershmidt [2] | Small | 1.09* | 8.57* | 0.92* | 8.24* | 0.84* | 8.24* | 0.95* | 8.35* |\\n| Mind-video [3] | Small | **3.22*** | 19.08* | 2.75* | 16.83* | 3.58* | 22.08* | 3.18* | 19.33* |\\n| **Ours** | Small | 3.08 | **22.58** | **4.75** | **26.90** | **4.50** | **24.67** | **4.11** | **24.72** |\\n| Wen [1] | Large | 1.41* | 11.58* | 2.08* | 9.58* | \\u2014\\u2014 | \\u2014\\u2014 | 1.75* | 10.58* |\\n| Kupershmidt [2] | Large | 0.17* | 2.94* | 0.17* | 2.77* | 0.25* | 2.18* | 0.19* | 2.63* |\\n| Mind-video [3] | Large | 1.75* | 7.17* | 0.83* | 5.17* | 1.25* | 9.00* | 1.28* | 7.11* |\\n| **Ours** | Large | **2.17** | **12.50** | **2.25** | **17.00** | **2.75** | **16.42** | **2.39** | **15.31** |\\n\\n*Note: For the 'small test set', the chance-level accuracies for top-10 and top-100 accuracy are 0.83% and 8.3%, respectively. For the 'large test set', the chance-level accuracies for top-10 and top-100 accuracy are 0.24% and 2.4%, respectively. The metrics are evaluated using 100 bootstrap trials. * denotes our performance is significantly better than the compared method (Wilcoxon test for paired samples, p<0.05).*\\n\\n## 5. Analysis of experimental results\\n## Response:\\nAfter improving the hypothesis testing for each participant, we analyzed the experimental results of our model and those of Chen et al. (2024) in the tables above, leading to the following conclusions:\\n\\n### (1) Pixel-level metrics:\\n Our model significantly outperforms Chen et al. (2024) **across all Pixel-level metrics on three datasets (16 participants in total)**, highlighting the effectiveness of incorporating structural feature decoding in video reconstruction.\\n\\n### (2) ST-level metrics: \\nFor CLIP-pcc, our model is significantly weaker than Chen et al. (2024) only for sub 01 in the HCP dataset, comparable for sub 02, and **significantly better for the remaining 14 participants**. For EPE, our model **significantly outperforms Chen et al. (2024) across all three datasets**. Notably, on the Algonauts2021 dataset, our model exceeds Chen et al. (2024) by more than **two times** for all 10 subjects in terms of EPE. This result underscores the significant advantage of our model over prior SOTA model in motion pattern reconstruction.\\n\\n### (3) Semantic-level metrics:\\n For 2-Way-I, our model is significantly weaker than Chen et al. (2024) for 2 subjects, comparable for 3 subjects, and **significantly better for the remaining 11 subjects**. For 2-Way-V, our model is significantly weaker for 4 subjects, comparable for 1 subject, and **significantly better for the other 11 subjects**. For VIFI-score, our model is comparable to Chen et al. (2024) for sub 02 in the HCP dataset and sub 08 in the Algonauts2021 dataset, while **outperforming Chen et al. (2024) for the remaining 14 subjects**.\\n\\n### (4) Retrieval task:\\n The results from the CC2017 dataset for 3 subjects indicate that our model is significantly weaker than Chen et al. 
(2024) in top-10 accuracy for sub 01 when the test set is configured as \\\"Small.\\\" However, our model outperforms Chen et al. (2024) for all other participants and settings.\\n\\nIt is important to note that the amount of training data used by our model is significantly smaller than that of Chen et al. (2024): Chen et al. used **600,000 segments for pretraining and 18 segments for fine-tuning**, while our model only used **18 segments for training**. Despite this disparity in data volume, our model still achieves competitive results in semantic pattern reconstruction, suggesting that it holds a certain advantage over the previous SOTA model.\"}", "{\"metareview\": \"The paper presents a framework for reconstructing videos from fMRI signals, where the proposed method first decomposes the signals to characterize the semantics, structure, and dynamics of content in the videos, the final reconstruction is produced by passing the decoded video signals through an inflated Stable Diffusion model. Experiments are presented on three datasets and show promising results.\\n\\nThe paper received overall positive scores with two accepts and two borderline accepts. The reviewers liked the overall approach, especially the disentanglement of the fMRI features and the competitive reconstruction improvements on public benchmarks.\", \"additional_comments_on_reviewer_discussion\": \"The paper received a long discussion between the authors and the reviewers. There were several key concerns raised by the reviewers on several aspects of the paper, namely:\\n* Clarity and soundness in the technical details (Reviewers 1xQ5, 5KvP) and \\n* Qualitative/substantiative improvements or comparisons (Reviewers GXPV, 5KvP)\\n* Missing ablation studies (Reviewer fFuu)\\n\\nAuthors revised the paper to fix the issues pointed out by the reviewers, presented qualitative results through an anonymous website that showed reasonable reconstructions, and provided additional numerical results supporting the need for various components in the model, as well as new results comparing the method to prior methods (such as Mind-Video). Overall, the reviewers were satisfied through the discussion.\\n\\nAC agrees with the reviewers sentiment that the paper makes an interesting attempt at reconstructing video from fMRI signals. The idea of decomposing the signals to the three constituents and extracting motion information to produce videos using a diffusion model is interesting. However, upon independent reading of the revised draft, AC finds several technical issues remaining in the paper. For example, Eq (1) the arguments are not specified in the LHS and the two components in the RHS appear the same, the notation **f** is overloaded and inconsistent across the paper, the text features **t** are not precisely defined, there are issues with the \\\\hat notation through out, and most importantly how the diffusion model is used on the video features is not clearly stated in a mathematically precise manner. Authors are encouraged to fix these issues in the camera-ready paper. As such, the paper is accepted.\"}", "{\"title\": \"Response to Reviewer 1xQ5 (cont.)\", \"comment\": \"## 7. In Section 3.3, the description of the VQ-VAE decoder and the inflation process is insufficiently detailed.\\n## Response:\\nWe apologize for the oversight that may have caused confusion for the readers. Thank you for pointing out this issue. 
\\n\\n**The inflation process refers to utilizing a Stable Diffusion model pre-trained on 2D data (images) to directly process 3D data (videos).** The specific process involves: \\n\\nAfter the motion features $\\\\Phi(\\\\mathbf{v}_{i}) \\\\in \\\\mathbb{R}^{B \\\\times f \\\\times 3 \\\\times \\\\frac{H}{8} \\\\times \\\\frac{W}{8}}$ are decoded, they are reshaped ($(B, f, 3, \\\\frac{H}{8}, \\\\frac{W}{8}) \\\\rightarrow (B \\\\cdot f, 3, \\\\frac{H}{8}, \\\\frac{W}{8})$) and input into the \\n Stable Diffusion U-Net for reverse denoising. \\n\\n**The result is then mapped back to pixel space through the VQ-VAE decoder** and reshaped ($(B \\\\cdot f, 3, H, W) \\\\rightarrow (B , f, 3, H, W)$) to yield the final video $\\\\mathbf{v}_{i} \\\\in \\\\mathbb{R}^{1 \\\\times f \\\\times 3 \\\\times H \\\\times W}$. In this context, $B$ denotes the batch dimension, with $B = 1$ during inference.\\n\\nTo facilitate the readers' understanding, we have also provided a more detailed description of these two techniques in Section 3.3.\\n\\n## 8.In Section 5.2, \\u00a0there is no mention of which dataset the ablation study is on.\\n## Response:\\nThe ablation study in Table 6 was conducted on sub1 of the CC2017 dataset, while the experimental results for sub2 and sub3 are presented in Table 12 and Table 13 of the Appendix. Due to time and space constraints, the parameter sensitivity analysis was only performed on sub1 of the CC2017 dataset, as shown in Table 10 and Table 11.\\n\\nWe have added the relevant explanations in the captions of Table 6, 10, 11, 12, and 13.\\n\\n## 9. Prior work is sometimes too generically described\\n## Response:\\nHan et al. mapped fMRI data to a VAE [1] pretrained on the ImageNet ILSVRC2012 dataset [2] to reconstruct a single frame, while Wen et al. mapped fMRI data to the feature space of AlexNet [3] and used a deconvolutional neural network [4] to reconstruct a single frame. Wang et al. developed an f-CVGAN that learns temporal and spatial information in fMRI through separate discriminators [5].\\n\\nDue to space limitations, these details are not included in the main text but are fully provided in Appendix A.\\n\\n## 10. In Section 6.2, there is inconsistent wording.\\n## Response:\\nThank you for pointing out this issue. Following your suggestion, we have replaced 'visual cortices' with 'visual cortical areas' in the relevant sentences of Section 6.2.\\n\\n\\nReferences\\uff1a\\n\\n[1] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. International Conference on Learning Representations, 2014.\\n\\n[2] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211\\u2013252, 2015.\\n\\n[3] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolu\\u0002tional neural networks. Advances in neural information processing systems, 25, 2012.\\n\\n[4] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In 2010 IEEE Computer Society Conference on computer vision and pattern recognition, pp. 2528\\u20132535. IEEE, 2010.\\n\\n[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. 
Communications of the ACM, 63(11):139\\u2013144, 2020.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer 5KvP (cont.)\", \"comment\": \"## 2. Reintroducing external videos introduces a bias that this paper aims to avoid.\\n## Response:\\n\\nTo ensure a fair comparison with Chen et al. (2024) and \\\"Kupershmidt22\\\", we incorporated EV for further fine-tuning. However, we would like to clarify that EV was used only when comparing model performance on the CC2017 dataset. In all other parts of the paper, we report experimental results without using EV, ensuring that it does not introduce a bias that this paper aims to avoid.\\n\\nTo address potential misunderstandings stemming from the inclusion of EV, we have removed all EV-related experiments from Table 2 and omitted any corresponding descriptions from Section 5.1 in the revised manuscript.\\n\\n## 3. The claim of improved motion pattern reconstruction lacks sufficient support and should include a meaningful comparison with prior work.\\n## Response:\\n\\nThank you for pointing this out. To comprehensively evaluate the reconstruction performance of our model, we propose three types of evaluation metrics (eight in total): Semantic-level, Structure-level, and Spatiotemporal-level, with the latter specifically designed to assess motion pattern reconstruction.\\n\\nThe Spatiotemporal-level metrics include $CLIP-pcc$ and $End-Point$ $ Error$ $(EPE)$ :\\n\\n- $EPE$ measures the Euclidean distance between the endpoints of the predicted and ground truth trajectories for each corresponding frame. It provides a quantitative assessment of the similarity between the **motion trajectories** of the predicted and ground truth videos and is widely used in motion-sensitive tasks such as optical flow estimation [1] .\\n\\n- $CLIP-pcc$ calculates the CLIP image embeddings for each frame in the predicted videos and reports the average cosine similarity between all pairs of adjacent frames. This metric evaluates the **coherence of consecutive frames** in the video and is commonly applied in video generation and editing [2] .\\n\\nAs shown in Tables 2, 15, and 16, our model significantly outperforms previous methods on both metrics in three datasets, demonstrating its improved capability for motion pattern reconstruction.\\n\\n| **Models** | **Dataset** | **CLIP-pcc\\u2191** | EPE\\u2193 |\\n|:------------------:|:-------------:|:-------------:|:---------:|\\n| Wang et al. (2022) | CC2017 | 0.399 * | 6.344 * |\\n| Chen et al. (2024) | CC2017 | 0.409 * | 6.125 * |\\n| Ours | CC2017 | **0.425** | **5.422** |\\n| Chen et al. (2024) | HCP | 0.499 * | 9.290 * |\\n| Ours | HCP | **0.511** | **7.080** |\\n| Chen et al. (2024) | Algonauts2021 | 0.246 * | 7.693 * |\\n| Ours | Algonauts2021 | **0.401** | **3.264** |\\n\\n*Note: * denotes our performance is significantly better than the compared method (paired t-test, p<0.05).*\\n\\nTo further clarify for readers, we have provided additional explanations of these two metrics in Section 4.2, specifically in lines 319\\u2013323 of the revised manuscript, with the updates highlighted in green.\\n\\n\\n## 4. Stronger evidence is needed for the claim regarding improved recovery of motion patterns.\\n## Response:\\n\\nThank you for raising this issue. 
In addition to the shuffle test, we have incorporated an ablation study on the CMG module in Table 6, as suggested by Reviewer 1xQ5 (Soundness 1), to provide stronger evidence.\\n\\nSpecifically, we removed the fMRI guidance in the CMG module (replacing the fMRI input in cross-attention with the token from the previous frame) while keeping all other model structures and hyperparameters unchanged. We then computed the motion-related metrics of the reconstruction results. The table below presents the results:\\n\\n| **Models** | **Subject** | **CLIP-pcc\\u2191** | **EPE\\u2193** |\\n|:-----------------:|:-----------:|:-------------:|:---------:|\\n| w/o fMRI guidance | sub 1 | 0.381 * | 6.293 * |\\n| Full Model | sub 1 | **0.413** | **5.572** |\\n| w/o fMRI guidance | sub 2 | 0.343 * | 7.571 * |\\n| Full Model | sub 2 | **0.423** | **5.329** |\\n\\n*Note: * denotes our performance is significantly better than the compared method (paired t-test, p<0.05).*\\n\\nAs shown in the table, removing the fMRI guidance results in a significant deterioration in both CLIP-pcc and EPE, demonstrating that the motion information in our reconstruction results originates from the fMRI data rather than the video training set.\\n\\nReferences\\uff1a\\n\\n[1] John L Barron, David J Fleet, and Steven S Beauchemin. Performance of optical flow techniques. International journal of computer vision, 12:43\\u201377, 1994.\\n\\n[2] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7623\\u20137633, 2023.\"}", "{\"title\": \"Thanks for your suggestions!\", \"comment\": \"Finally, we sincerely appreciate the time and effort you have dedicated to providing constructive feedback on our manuscript. We are truly honored by your thoughtful suggestions. If you have any further questions or additional recommendations, please do not hesitate to reach out. We look forward to your continued guidance and feedback.\"}", "{\"title\": \"Response to Reviewer fFuu\", \"comment\": \"Dear Reviewer fFuu,\\n\\nThank you very much for your kind words and feedback. We greatly appreciate your time and effort in reviewing our work and are glad that the additional experiments and revisions have improved the manuscript. \\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Supplementary Results and Explanations on Statistical Assessments (1/4)\", \"comment\": \"Thank you very much for taking the time to review our rebuttal and for your careful critique of the shortcomings in our hypothesis testing setup. In Tables 2, 3, and 4 of the main text, we present results averaged across multiple subjects, and the hypothesis tests were conducted after averaging across subjects as well.\\n\\nFollowing your suggestion, we have added the Wilcoxon signed-rank test for the individual subject paired t-tests in Tables 5, 15, 16, and 17. (Note that Wang et al. (2022) only publicly reported the mean SSIM and PSNR values for the HCP dataset and did not release the full set of reconstruction results, thus preventing us from conducting hypothesis testing for their results.)\\n\\nBelow, we provide further clarifications in response to your comments and suggestions.\\n\\n## 1. 
A brief explanation of Algorithm 2.\\n## Response:\\nTaking the 2-way top-1 accuracy used in this study as an example, the calculation of this metric is as follows: For each reconstruction result ($recons$), a non-ground truth video ($gt*$) is randomly selected from the test set to form a triplet {$recons$, $gt$, $gt*$}. A classification model is then used to compute the logits for the three components and determine whether the $recons$ can be classified in the same category as the $gt$. \\n\\nFor 2-Way-I, the classification model used is ViT-base-patch16-224, with results computed and averaged across all frames of the video. For 2-Way-V, the model used is VideoMAE. Since the selection of non-ground truth videos is random, this process is repeated for 100 trials. The mean of the 100 trials is reported in the tables, and a paired t-test is performed using the results from these 100 trials for hypothesis testing.\\n\\n\\n\\n\\n\\n## 2. The construction of data distributions for hypothesis testing.\\n## Response:\\nFor evaluation metrics with random sampling, such as 2-Way-I and 2-Way-V, we created the distributions by repeating the experiment 100 times.\\n \\nFor other metrics, we used the bootstrap method to create the distributions. Taking the SSIM for Subject 1 from the CC2017 dataset as an example: the test set contains 1200 samples. When reporting the results in the tables, we directly calculate the SSIM between the reconstruction and ground truth and report the average over the 1200 samples. For hypothesis testing, we performed bootstrap sampling with replacement from the 1200 samples, recording the mean of the metric each time, and repeating this process 100 times to obtain 100 means. A paired t-test was then performed using the results from these 100 trials.\"}", "{\"summary\": \"This work addresses the problem of reconstructing high-quality video from fMRI data. The authors suggest that the key to achieving high-quality video reconstruction lies in decoupling and modeling semantic, structural, and motion information, also carefully handling the frequency discrepancy between fMRI data and videos. To this end, the authors develop a tri-modal contrastive learning scheme along with a next-frame prediction task. Lastly, to ensure that the generated videos are derived purely from the fMRI data, the input is fed into an untuned inflated Stable Diffusion model. Empirical evaluations show promising results and strong interpretability performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"[Quality, Significance]:\\n\\nThe structure of this work is very easy to understand and follow, with sufficient context and supported citations. The model notations and figure presentations are excellent. \\n\\nThe authors provides comprehensive empirical metric evaluations and compared their results with many other state-of-the-art approaches, and provide comprehensive ablation study and interoperability results.\\n\\n[Novelty]: One particular point that I find this work novel is that the authors do not fine-tune the stable diffusion, ensuring us that the videos that we see are not coming from the overfitting.\", \"weaknesses\": \"I do not find this work particularly having major weaknesses.\", \"questions\": \"1. How did the author chose $\\\\lambda_1$ and $\\\\lambda_2$ in equation 4? 
Can the author provide the ablation study/ discussion on how these choices affect the downstream performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 1xQ5 (cont.)\", \"comment\": \"## 4. In the analysis in Section 6.2, why the weight proportion of V1 in the motion information is almost double that of the next highest-weighted areas?\\n## Response:\\n\\nWe sincerely thank you for pointing out this important observation. After reviewing the relevant literature, we identified the reason why V1 plays a dominant role in decoding motion features: \\n\\n- Parallel processing is a key characteristic of the visual system [1] , and in the dorsal pathway, motion information is not processed strictly hierarchically [2] . **Experimental evidence suggests that the direct pathway from V1 to MT primarily conveys information about motion speed and direction.** Additionally, several indirect pathways originating from V1 (e.g., through V2 and V3) also transmit related information to MT [3] [4] . **(Figure 26(b) in Appendix G provides a more intuitive illustration of this process.)**\\n\\n- The earliest paper in the field of video reconstruction utilized neural encoding to map cortical projections [5] , as illustrated in Figure 2(c) of their work. Their results also demonstrated that, compared to other brain regions, V1 exhibited the strongest activation in response to motion information. Following your suggestion, we have discussed this phenomenon in Section 6.2 of the main text, which has significantly improved our manuscript by enhancing its comprehensiveness and strengthening its contribution to the field of video reconstruction.\\n\\n\\nTherefore, the observation shown in Figure 8, where V1 contributes the most to motion information decoding, is reasonable.\\n\\nAdditionally, to facilitate readers' understanding of the neuroscience background, we have included two illustrative figures in Appendix G and provided a concise explanation of how the human visual cortex processes visual information.\", \"references\": \"[1] Jonathan J Nassi and Edward M Callaway. Parallel processing strategies of the primate visual system. Nature reviews neuroscience, 10(5):360\\u2013372, 2009.\\n\\n[2] Edward M Callaway. Structure and function of parallel pathways in the primate early visual system. The Journal of physiology, 566(1):13\\u201319, 2005.\\n\\n[3] Semir Zeki and Stewart Shipp. The functional logic of cortical connections. Nature, 335(6188): 311\\u2013317, 1988.\\n\\n[4] Carlos R Ponce, Stephen G Lomber, and Richard T Born. Integrating motion and depth via parallel\\npathways. Nature neuroscience, 11(2):216\\u2013223, 2008.\\n\\n[5] Shinji Nishimoto, An T Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L Gallant. Reconstructing visual experiences from brain activity evoked by natural movies. Current biology, 21(19):1641\\u20131646, 2011.\\n\\n\\n\\n\\nFinally, we would like to express our sincere gratitude for raising important questions and providing valuable suggestions for improving our manuscript. We also appreciate your time and patience in thoroughly reviewing our responses. We believe that, under your review, our manuscript will be significantly improved in terms of clarity and experimental design. 
We look forward to your feedback and further discussions.\"}", "{\"summary\": \"Paper presents an approach the reconstruct video clips for fMRI brain recordings.\", \"the_reconstruction_is_broken_to_3_streams\": \"structure, semantic and motion.\\nFirst the fMRI signal is transformed to align with image embeddings, the fMRI embedding is used with pretrained image diffusion models to generate reconstructions.\\nThe results are compared against multiple previous works on a variety of metrics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Competitive results on multi evaluation metrics, the provided reconstruction look good visually.\", \"weaknesses\": [\"Full reconstruction of the video clips not provided, this would make the work more transparent, and would allow future works to easily compare on new metrics.\", \"No comparison to other methods is provided for retrieval metric (Which I think is one of the most objective/relevant metrics)\", \"Authors don't show results for sequences with actual motion, give that one of the focuses of the work is motion, it would make sense to show the ability to reconstruct motion. (for example the clip with the soldier)\"], \"questions\": [\"Figure 7 color scheme is not consistent across plots(relevant to all similar plots)\", \"I think it makes sense to put retrieval results in a more centric place in the paper.\", \"The work \\\" Kupershmidt22\\\" provides a retrieval metric for itself and \\\"Wen18\\\", you should consider adding this results.\", \"It would be helpful to add standard error/ statistical tests to the comparisons between the models.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 1xQ5\", \"comment\": \"We sincerely appreciate your recognition of the contribution and novelty of our work. Thank you for taking the time to point out the shortcomings in the presentation and experimental design. We have thoroughly revised the manuscript based on the suggestions provided by you, **with the changes highlighted in blue**, and have submitted the updated version. Our point-by-point responses to your comments are as follows.\\n\\n# Presentation: \\n## 1. What is the text condition $c_i$ in Section3.2 ?\\n## Response:\\nWe appreciate your pointing out this issue. The Stable Diffusion model used in this study is a text-to-image model, which operates by first encoding the input text into a text condition $c_i$ using its built-in text encoder. The corresponding image is then generated based on this text condition. In the semantic decoding phase, our goal is to fit the fMRI data to the text condition $c_i$ . However, $c_i$ has a very high dimensionality (1x20x768), and directly fitting the two would lead to severe overfitting. Therefore, we chose the CLIP representation space, which has lower dimensionality (1X512) and strong generalization capability, as an intermediary. First, we align the fMRI representation f with the CLIP text representation t and visual representation v using a multi-modal contrastive loss ($L_{Semantic}$). Then, we map the fMRI representation f to the text condition $c_i$ using a projection loss ($L_{Projection}$). Therefore, the ground truth text condition is obtained by inputting the text description into the text encoder of Stable Diffusion.\\n\\nWe have made the corresponding revisions in Section 3.2 and Figure 3.\\n\\n\\n## 2. 
llustration issues (Figure 2(c), Figure 3, Figure 4).\\n## Response:\\nWe sincerely appreciate your careful attention to these issues. In order to enhance the clarity of our work for readers, we have made the following revisions: In Figure 2(c), we have adjusted the arrows to point to \\\"Motion\\\"; in Figure 3, we have modified the layout of the rightmost panel following your suggestion; and in Figure 4, we have added the positional encoding vectors $E_{pos}$ .\\n\\n\\n## 3. In Section 5.1, it is not clear why the videos are external in the \\u201cExternal Videos (EV)\\u201d case.\\n## Response:\\nWe apologize for the confusion caused by our phrasing. Our model consists of two stages: the fMRI-to-feature stage and the feature-to-video stage. Although we used a video-fMRI dataset in the fMRI-to-feature stage, to ensure that the motion information in the reconstructed video comes solely from the fMRI data, we employed Inflated Stable Diffusion rather than a text-to-video model in the feature-to-video stage. Therefore, the term \\\"external videos\\\" refers to Stable Diffusion in this context. Specifically, the generative model we used was trained only on image datasets and has never been exposed to \\\"external videos.\\\" This issue was similarly pointed out by Reviewer 5KvP. To avoid any potential misunderstanding among readers, we have decided to remove the experiments related to \\\"Ours with EV\\\" from the main text.\\n\\n## 4. In Section 3.2, the description of the structure regarding the Semantic & Structure decoder is insufficiently detailed.\\n## Response:\\nWe apologize for this oversight in the manuscript. The Semantic decoder is a 3-layer MLP, while the Structure decoder is a 2-layer MLP. We have now included a detailed explanation of this in Section 3.2 of the latest version of the manuscript.\\n\\n## 5. In Section 3.2, how the frames are aggregated to create\\u00a0$v$ ?\\n## Response:\\nFor each video, we input each frame into the CLIP visual encoder and then compute the average across all frames to obtain $v$. To clarify, we have provided additional details in Section C1 of the Appendix.\\n\\n## 6. In Section 4.2, the description and citations of End-Point-Error (EPE), Hue-pcc, and CLIP-pcc are missing.\\n## Response:\\nEnd-Point Error (EPE) is calculated as the Euclidean distance between the endpoints of the predicted and ground truth trajectories for each corresponding frame, providing a measure of the similarity between the motion trajectories of the predicted and ground truth videos. The Hue-PCC (Hue-based Pearson Correlation Coefficient) is calculated by first converting the frames to the HSV color space, then computing the Pearson correlation coefficient (PCC) between the hue values of the two frames. This metric measures the linear relationship between the hue distributions of the frames, capturing their color similarity.\\n\\nWe have added a supplementary description of EPE and citations for these three evaluation metrics in the relevant section of Section 4.2.\"}", "{\"title\": \"The claim of improved motion pattern reconstruction lacks sufficient support and should include a meaningful comparison with prior work.\", \"comment\": \"Thanks for the clarification.\\n\\nTables 15, 16 show no statistical assessments. Table 2 and I think also the table sent here is a cross-subject average statistic being analyzed. 
Arguably, the gain should hold per subject in a subject-specific analysis.\"}", "{\"comment\": \"I believe that the authors' thorough responses and revisions introduced make this paper and its results much more compelling.\\nMuch appreciated.\\nMy scores are now revised accordingly.\"}", "{\"title\": \"Supplementary Results and Explanations on Statistical Assessments (2/4)\", \"comment\": \"## 3. Comparison with Chen et al. (2024) 's results on the CC2017, HCP, and Algonauts 2021 datasets and the hypothesis testing results for each subject.\\n## Response:\\nAfter constructing the data distributions as described above, and to account for the potential non-Gaussian nature of the distributions, we followed your suggestion and used the Wilcoxon signed-rank test for paired comparisons for each subject. The comparison with Chen et al. (2024) on the three datasets is shown in the table below. **The full experimental results have been added to Tables 15, 16, and 17, and we have submitted the updated PDF.**\\n\\n| Sub ID | Models | Semantic-level \\u2191 | | | Pixel-level \\u2191 | | | ST-level | | |\\n|----------|-------------|--------------------|--------------------|--------------------|----------------|----------------|----------------|------------------|------------------|------------------|\\n| | | 2-way-I | 2-way-V | VIFI-score | SSIM | PSNR | Hue-pcc | CLIP-pcc \\u2191 | EPE \\u2193 |\\n| sub 01 | Mind-video | 0.792*** | **0.853***** | 0.587*** | 0.171*** | 8.662*** | 0.760*** | 0.408*** | 6.119*** |\\n| | Ours | **0.812** | 0.841 | **0.602** | **0.321** | **9.124** | **0.774** | **0.425** | **5.580** |\\n| sub 02 | Mind-video | 0.789*** | **0.842***** | 0.595*** | 0.172*** | 8.929*** | 0.773*** | 0.409*** | 6.062*** |\\n| | Ours | **0.811** | 0.827 | **0.615** | **0.292** | **9.250** | **0.791** | **0.429** | **5.329** |\\n| sub 03 | Mind-video | **0.811**** | **0.848***** | 0.597*** | 0.187*** | 9.013*** | 0.771*** | 0.410** | 6.193*** |\\n| | Ours | 0.792 | 0.823 | **0.607** | **0.349** | **9.287** | **0.794** | **0.421** | **5.356** |\\n\\n*Quantitative comparison of reconstruction results across three subjects from the **CC2017 dataset**. For the 2-way-I and 2-way-V metrics, 100 repetitions were conducted, while other metrics were evaluated using 100 bootstrap trials. All metrics are averaged over the entire test set. The superior results are highlighted in bold. Asterisks indicate statistical significance (Wilcoxon test for paired samples) compared to our model. 
p<0.0001(\\\\*\\\\*\\\\*), p<0.01(\\\\*\\\\*), p<0.05(\\\\*).*\\n\\n| Sub ID | Models | Semantic-level \\u2191 | | | Pixel-level \\u2191 | | | ST-level | | |\\n|----------|-------------|--------------------|--------------------|--------------------|----------------|----------------|----------------|------------------|------------------|------------------|\\n| | | 2-way-I | 2-way-V | VIFI-score | SSIM | PSNR | Hue-pcc | CLIP-pcc \\u2191 | EPE \\u2193 |\\n| sub 01 | Mind-video | 0.798** | 0.752*** | 0.605*** | 0.123*** | 9.302*** | 0.774*** | **0.486*** | 12.746*** |\\n| | Ours | **0.819** | **0.783** | **0.613** | **0.325** | **10.757** | **0.820** | 0.476 | **7.825** |\\n| sub 02 | Mind-video | **0.761** | **0.777*** ** | **0.611** | 0.115*** | 9.414*** | 0.804*** | 0.483 | 7.358*** |\\n| | Ours | 0.756 | 0.759 | 0.609 | **0.371** | **11.894** | **0.834** | **0.485** | **6.624** |\\n| sub 03 | Mind-video | 0.779 | 0.778*** | 0.612*** | 0.118*** | 9.109*** | 0.803*** | 0.529*** | 7.767*** |\\n| | Ours | **0.781** | **0.793** | **0.634** | **0.336** | **11.018** | **0.834** | **0.573** | **6.792** |\\n\\n*Quantitative comparison of reconstruction results across three subjects from the **HCP dataset**. For the 2-way-I and 2-way-V metrics, 100 repetitions were conducted, while other metrics were evaluated using 100 bootstrap trials. All metrics are averaged over the entire test set. The superior results are highlighted in bold. Asterisks indicate statistical significance (Wilcoxon test for paired samples) compared to our model. p<0.0001(\\\\*\\\\*\\\\*), p<0.01(\\\\*\\\\*), p<0.05(\\\\*).*\"}", "{\"summary\": \"This paper addresses the challenge of video reconstruction from fMRI data, identifying three key components for an effective decoder: semantics, visual structure, and motion patterns. Building on prior works that successfully capture semantic content using CLIP and Stable Diffusion, the authors aim to improve fidelity to the visual structure and motion dynamics present in the original videos. They propose a decoupled, multi-step reconstruction approach, utilizing distinct decoders and specialized reconstruction criteria. Their method combines tri-modal CLIP (fMRI, image, text), single-frame Stable Diffusion (T2I), and a uniquely learned internal motion prior that avoids biases from external video datasets. Applied to three publicly available movie-fMRI datasets, the approach demonstrates moderate gains in some metrics across different configurations, with notable improvements in preserving visual structure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors\\u2019 effort to achieve simultaneous reconstruction of visual structure, semantics, and motion patterns represents a significant and timely contribution. Many prior works prioritize visually appealing reconstructions or focus on semantic fidelity, often overlooking finer alignment with the ground truth video structure. Advancing fidelity in this regard is important; without it, decoders risk becoming general natural video generators with limited adherence to fMRI cues.\\n\\nThe primary improvement over previous methods appears in the preservation of visual structure, evidenced both qualitatively and quantitatively, though this improvement is moderate. The authors\\u2019 thorough analysis across three datasets, coupled with an extensive benchmarking on various metrics assessing semantics and visual structure, adds robustness to the work. 
Additionally, the website offers a helpful visual supplement to this complex work, making it more accessible and digestible.\", \"weaknesses\": \"The results, both qualitatively and quantitatively, lack compelling evidence of substantial improvement over prior work, particularly when compared to Chen et al. (2024). In fact, it appears that with proper statistical analysis (e.g., Table 2-4), there may be no significant gains. Methodologically, the advancement over Chen et al. (2024) seems minimal, with certain versions of the authors' approach even reintroducing external videos\\u2014the very bias this paper aims to avoid. Furthermore, the claim of improved motion pattern reconstruction is insufficiently supported, as the authors compare only against shuffled frames. A meaningful comparison should have been made with prior works, similar to the comparisons on semantic and visual structure fidelity.\", \"additional_comments\": [\"[36-38] Statement is unclear; consider rephrasing.\", \"[54-59] The text appears to confuse BOLD integration duration with fMRI sampling rate. The BOLD signal integrates neural activity over a period greater than 10 seconds (~300 video frames), which is a major limitation in recovering any motion patterns potentially encoded in neural activity. The fMRI sampling rate is limited for other technical reasons but even if was sampling at a higher rate this would likely not resolve this fundamental issue.\", \"[100-101] The claim about enabling video reconstruction may be overstated, as prior methods have also achieved this to an extent.\", \"[102-104] Similar to above, there is a mix-up regarding temporal aspects. Clarification of this contribution would help.\", \"[104-106] The visualizations mentioned are not novel to this work; they have been used in prior studies and are reintroduced here rather than proposed anew.\", \"[107-110] It remains unclear what metric is used to evaluate successful recovery of motion patterns. Detailing this metric is necessary for interpreting the results.\"], \"questions\": \"1) Consider adding robust statistical analyses and uncertainty estimates for the primary results. This would clarify the significance of the observed gains, if any.\\n\\n2) For the claim regarding improved recovery of motion patterns, stronger evidence is needed. As presented, the motion generator appears modest/limited in comparison to the tools used for recovering structure and semantics. Since improvement in motion fidelity is a key goal, it would be impactful to demonstrate and emphasize substantial gains in this area. If stronger evidence cannot be shown, it may be best to reconsider this claim.\\n\\n3) The authors note that MT, an area associated with motion processing, shows significant activation for semantics, typically associated with the ventral stream. If this outcome supports the motion recovery, it could benefit from additional context. Some clarification on MT\\u2019s role in semantics would be helpful for the reader\\u2019s interpretation of these findings.\\n\\n4) For Figure 8, consider enhancing its visual accessibility to make it easier to interpret. Improvements in layout or clarity could aid the reader in understanding the figure\\u2019s contribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewers,\\n\\nThank you for your valuable time, insightful comments, and useful suggestions. 
We have made thorough revisions in the latest PDF submission based on your feedback. To address each reviewer\\u2019s comments, **we have highlighted the changes in different colors: Reviewer 1xQ5 in blue, Reviewer GXPV in red, Reviewer 5KvP in green, and Reviewer fFuu in purple.**\\n\\nOur point-by-point response to the reviewers\\u2019 comments has been added to the individual chat box for each reviewer. We are confident that, thanks to the insightful suggestions and constructive feedback from the reviewers, our manuscript will experience substantial improvements in both its clarity and the thoroughness of the experiments.\"}", "{\"comment\": \"Thank you for carefully considering and implementing all suggestions.\\n\\nRegarding the presentation issues, the paper is now much more readable and should not confuse a future reader (at least on the points raised in this review). \\nThe only objection would be on the complete removal of the EV results, as they are still valuable to compare with competitors and also offer transparency. Nevertheless, this reviewer also agrees with reviewer 5KvP that it is not fair to include them as part of \\\"Ours\\\", as an integral part of \\\"Ours\\\" is the lack of video finetuning. This could be debated some more on the basis of \\\"are the videos really external, as they are used to train other parts of the pipeline as well\\\", but the claim of a completely video-agnostic generator is still a big part of the paper. Recommendations would be to either (1) keep them in the main table, but put them above the line and don't call them \\\"Ours\\\", or (2) put them in the Appendix (and reference it in main text). In either case, EV should be named something else (e.g. SD-video-finetuning), as the name EV points to videos external to CC2017.\\n\\nAs for the soundness related points, the ablation removing the fMRI guidance alleviates this reviewer's main concerns and strengthens the papers' argument considerably, especially when considering the motion metrics. Comparing the removal of the whole CMG module (w/o Motion) with the removal of fMRI guidance from the CMG (w/o fMRI guidance), it is observed that the latter makes up most of the impact of the former (i.e. the whole CMG module improves performance by 0.037 in CLIP-pcc, out of which 0.032 comes from the fMRI guidance, and by 0.802 in EPE, out of which 0.721 comes from the fMRI guidance). An additional suggestion would be to explicitly mention this comparison. \\nThe rest of the soundness points were also improved to satisfaction.\"}", "{\"title\": \"Thanks for your recognition!\", \"comment\": \"We would like to express our sincere gratitude to the reviewer for spending time and effort in providing constructive suggestions for our manuscript and for recognizing our work. We believe that under your guidance, our manuscript has achieved a higher level in terms of content readability and experimental integrity.\"}", "{\"summary\": \"In this work, the authors present a novel method for video reconstruction from fMRI recordings. There are two core components to this novelty, (1) the method is interpretable and learns separate semantic, structural, and motion features from the fMRI - later used to generate video frames, and (2) the motion component of the generated video is solely based on the motion predicted from the fMRI because the video generator is an inflated image diffusion model. 
The videos are generated in two stages, fMRI-to-feature which has trainable components, and feature-to-video which is completely frozen. At training time of stage 1, a Semantic decoder using frozen CLIP encoders, a Structure decoder using a frozen VQ-VAE, and a custom transformer-based Consistency Motion Generator with a masked causal frame prediction task are trained to learn the semantic, structural, and motion features respectively. The performance of the reconstruction model is evaluated on 3 public fMRI datasets with 8 different (semantic, pixel, and spatiotemporal) evaluation metrics, compared against previous work, analyzed with ablation studies, and assessed under two different interpretability analyses. The model is found to exceed state-of-the-art both in quantitative and qualitative metrics, yield sensible component contributions in the ablation, and offer interpretability insights that include neurobiological plausibility.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"This paper has good quality, with sound methodology and extensive experiments. It is an original, systematic, and rational approach that addresses important issues in previous work; namely the entanglement of different types of features that are decoded from fMRI (semantic, structural, motion), and the entanglement of motion information learned by external training of the video generator with the motion information derived from fMRI. The improvements brought forward by this approach are clearly indicated in the quantitative and qualitative results and consist significant contributions in the research aiming to recover, as well as understand, dynamic visual information from brain recordings.\", \"weaknesses\": [\"**Presentation** could be more clear at many points throughout the paper. There are technical parts that require more explanation or are currently creating some confusion, which are outlined as follows in order from more major to more minor issues.\", \"In section 3.2, it is not mentioned at all how the ground truth text condition $\\\\bf{c}_i$ is obtained - are these from the training dataset of the (frozen) inflated image generator or are these the same as the video captions $\\\\bf{t}$? 
It would help to have an intuitive explanation of this in figure 3 as well ($\\\\bf{c}$ is not shown at all in the training pipeline).\", \"In section 5.1, it is not clear why the videos are external in the \\u201cExternal Videos (EV)\\u201d case; This is described as \\u201cwe further fine-tuned the image diffusion model using videos from the CC2017 dataset (with the training set)\\u201d which is still videos from within the video-fMRI dataset, already seen during training in earlier stages of the pipeline.\", \"In section 3.2, authors are not explicit enough in the description of the trainable modules Semantic & Structure decoder; From the notation it can be assumed that the Semantic decoder is just one trainable vector $\\\\bf{f}$ initialized by the fMRI vector, and the Structure decoder is an MLP $D_{Structure}$ (of unknown number of layers) but it needs to be outlined explicitly.\", \"In the same section, it is also not explicit how the frames are aggregated to create $\\\\bf{v}$ - is it the average of the CLIP visual embedding across all frames or another aggregation?\", \"In section 4.2, the description of End-Point-Error(EPE) is missing - the reader has no clue what it is, if it is something existing in the literature (missing citation) or introduced by the authors (missing formula). There are also missing citations for some of the other metrics (Hue-pcc, CLIP-pcc).\", \"In section 3.3, a mention to the VQ-VAE decoder depicted in the figure is missing - the reader is left to wonder about this. Additionally, this section would benefit from more information on the inflation process (e.g. at least an in-line equation).\", \"In section 5.2, there is no mention of which dataset the ablation study is on.\", \"In section 3.2, figure 4 is missing $\\\\bf{E_{pos}}$ and would benefit from a more informative caption. Also, the abbreviation LDM used in the text is probably not familiar to all readers (either remove or expand it).\", \"In the rightmost box of the training pipeline in figure 3 (CMG), the bracket of $\\\\bf{L_{consistency}}$ is placed in a misleading way - a suggestion would be to move the mask matrix to the bottom and have the input and output frames on the right, with $\\\\bf{L_{consistency}}$ connecting the original (masked) future frames with the predicted future frames.\", \"In figure 2c the arrow for the noise would be more accurate if it pointed to the Motion instead of the Structure box.\", \"Prior work is sometimes too generically described e.g. \\u201cSubsequently, Han et al. (2019), Wen et al. (2018) and Wang et al. (2022) map brain responses to the feature spaces of deep neural network (DNN) to reconstruct video stimuli.\\u201d - what DNN?\", \"In section 6.2, where authors describe \\u201cvisual cortices\\u201d, probably a more adept term is to say \\u201cvisual cortical areas\\u201d or \\u201careas of the visual cortex\\u201d.\", \"**Soundness** related issues exist in parts of the methodology, or sometimes its interpretation.\", \"The reader is not yet completely convinced that motion information is from the fMRI and not from the training videos. It would help to see an ablation with the CMG trained without the fMRI entirely (output from temporal module is then the Q, K, and V of spatial module), and see if the next frames can be predicted at inference from the structural latent fMRI embedding of the first frame. 
If the CMG module is still good, then the motion information is not from the fMRI but from the videos in the training set.\", \"The reader finds several issues with the analysis in 6.1. First the y-axes should end at 1.0 as the maximum value of $\\\\sum _i \\\\delta _i$ is 100. Additionally the y-axes are different across the 3 panels which is also misleading. Second, it seems that although p-values are lower with the CMG, they are still very high and much higher than 0.05, meaning that the order of the generated frames does not matter significantly for these metrics. This is not commented on at all by the authors. Here, the reader\\u2019s suggestion for a better baseline to compare against (instead of the standard threshold of 0.05) is the p-value of the shuffle test with the ground-truth video, as it is not certain that even this would be below 0.05. Third, it is not clear why the authors are examining the structural metrics for the shuffle test instead of solely the spatio-temporal metrics, and why for the latter only CLIP-pcc is shown and not EPE. In the view of the reader, only CLIP-pcc and EPE are relevant and should be shown. Finally, the results are vastly different across the 3 subjects which is also not (and should be) commented on in the text.\", \"In section 5.2, table 5, it is observed that the structure metric Hue-pcc is increasing significantly when the structure module is removed. This seems like an important inconsistency in the results, yet the authors do not comment on it. It is expected to provide some sort of explanation, perhaps based on how this specific metric works and on how the w/o Structure videos look, since the other pixel-level metrics seem to decrease.\", \"In the analysis in 6.2, it is noticeable that the weight proportion of V1 in the motion information is almost double that of the next highest-weighted areas (e.g. TPOJ, MT, V2, V3), which seems very significant and is not (and should be) commented on - this effect is hidden when the whole of LVC weights are added up.\"], \"questions\": \"The reviewer would like the authors to address the points outlined in the \\u201cWeaknesses\\u201d section, in each case either by making the suggested change or another change that to the authors\\u2019 opinion fixes the issue better, or lastly by giving a sufficient (and convincing) explanation of why no change is needed. This way the reviewer\\u2019s opinion of the paper would be improved, to a rating of 6 or above.\\n\\n**Update after rebuttal:** The score has been updated from an initial 5 to an 8, since all of this reviewer's points were addressed in a very careful, timely, and complete manner. More specifically, the crucial points outlined above in \\\"Soundness\\\" were fixed and the additional ablation experiments strengthen the paper and alleviate this reviewers concerns, which on its own brings the paper rating above acceptance threshold. 
On top of this, the paper's presentation was substantially improved, making it a consistent and clear read, and overall a good paper (8).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The paper would benefit from an \\u201cEthics Statement\\u201d (which does not count toward page limit and is placed at the end) addressing the issue of potentially harmful use cases of \\u201cmind reading\\u201d (combined with more portable neuroimaging methods).\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer GXPV\", \"comment\": \"Thank you for recognizing and supporting our work, as well as for taking the time to provide valuable suggestions for improvement. We have thoroughly revised the manuscript based on your feedback, **with changes highlighted in red**, and have submitted the updated version. Below are our detailed responses to your comments.\\n\\n## 1. Provide the full reconstruction of the video clips to enhance the transparency of the work.\\n\\n## Response:\\n\\nThank you for your suggestion. We agree with your perspective that open-sourcing all experimental results would facilitate evaluation and comparison using new metrics for other researchers. We also appreciate the generosity of the works compared in Table 2, which kindly provided their reconstruction results for evaluation. However, due to OpenReview's file size limitation for supplementary materials (no more than 100MB), it is challenging to upload all our reconstruction results, which total 32GB across the three datasets, at this stage. Nevertheless, we assure you that once the paper is accepted, we will make all the preprocessed datasets, code, and reconstruction results publicly available.\\n\\n\\n\\n## 2. Authors don't show results for sequences with actual motion.\\n\\n## Response:\\nWe have provided some reconstructed videos on the anonymous project homepage (https://mind-animator-design.github.io/). On the homepage's first page, several examples demonstrate a strong consistency between the reconstructed results and the ground truth in terms of motion information. For instance, in the sixth video of the second row, we successfully reconstructed a crowd walking forward, and in the fourth video of the fourth row, two people looking up and laughing. Additionally, in the \\\"More samples\\\" section, you can observe reconstructions such as an airplane flying from right to left, a fish swimming from right to left, and a car driving along a road. These results exhibit actual motion and closely align with the ground truth.\\n\\n\\n## 3. Figure 7 color scheme is not consistent across plots.\\n## Response:\\nThank you for your careful attention to this issue. We have corrected the color schemes in Figure 7(c), 22(c), and 24(c) in the latest submitted manuscript to ensure consistency across all similar figures.\\n\\n## 4. Add standard error/ statistical tests to the comparisons between the models.\\n## Response:\\nThank you for your suggestion. In the latest submitted manuscript, we have added statistical tests to the experimental results in Tables 2, 3, 4, and 5. The results show that our model significantly outperforms the previous comparison methods on most metrics, further highlighting its superior performance.\"}", "{\"title\": \"Response to Reviewer 5KvP (cont.)\", \"comment\": \"# Writing Issues\\n\\n## 1. Line 36-38: Statement is unclear.\\n## Response:\\nThank you for pointing this out. 
To improve clarity, we have revised the manuscript accordingly, with the changes highlighted in green in the latest version.\\n\\n## 2. Line 54-59: The text appears to confuse BOLD integration duration with fMRI sampling rate.\\n## Response:\\nThank you for pointing out the issue with our phrasing. We have made revisions in the latest version of the manuscript, with the changes highlighted in green in Lines 54-59\\uff1a\\n\\n\\\" Due to the inherent nature of fMRI, which relies on the slow blood oxygenation level dependent (BOLD) signal, neural activity is integrated over a period exceeding 10 seconds (~300 video frames). This integration delay poses a fundamental challenge in capturing rapid motion dynamics.\\\" \\n\\n## 3. Line 100-101: The claim about enabling video reconstruction may be overstated, as prior methods have also achieved this to an extent.\\n## Response:\\nWe acknowledge that we are not the first to enable video reconstruction from fMRI. However, we are the first to reconstruct videos by decoupling semantic, structural, and motion information.\\nWe have made revisions in the latest version of the manuscript, with the changes highlighted in green in Lines 100-101.\\n\\n## 4. Line 102-104: Similar to above, there is a mix-up regarding temporal aspects.\\n## Response:\\nThank you for pointing out this issue. We have made revisions in the latest version of the manuscript, with the changes highlighted in green in Lines 102-104\\uff1a\\n\\n\\\"This model decodes subtle yet significant motion patterns through a next-frame token prediction task despite the limitations imposed by the slow BOLD signal integration in fMRI.\\\" \\n\\n## 5. Line 104-106: The visualizations mentioned are not novel to this work.\\n## Response:\\nThank you for pointing out the overclaim issue. We have revised the manuscript by changing \\\"propose\\\" to \\\"use\\\" in the relevant sections.\\n\\n## 6. Line 107-110: It remains unclear what metric is used to evaluate successful recovery of motion patterns.\\n## Response:\\nWe have provided a detailed explanation of the two metrics used to evaluate the successful recovery of motion patterns in Section 4.2, Lines 319-323:\\n\\n- **End-Point Error (EPE)** measures the Euclidean distance between the endpoints of the predicted and ground truth trajectories for each corresponding frame. It provides a quantitative assessment of the similarity between the **motion trajectories** of the predicted and ground truth videos and is widely used in motion-sensitive tasks such as optical flow estimation and video editing.\\n\\n- **CLIP-pcc** calculates the CLIP image embeddings for each frame in the predicted videos and reports the average cosine similarity between all pairs of adjacent frames. This metric evaluates the **coherence of consecutive frames in the video** and is commonly applied in video generation and editing.\\n\\n## 7. Consider improving the visual accessibility of Figure 8 to enhance its interpretability.\\n## Response:\\n\\nTo complement the bar chart in Figure 8, we normalized the importance values of each ROI in decoding the three features (semantic, structure, motion) and visualized the results in Figure 21 in Appendix F2. Additionally, we provided an explanation for Figure 21 in Section F2, highlighted in green.\"}", "{\"title\": \"Response to Reviewer 5KvP\", \"comment\": \"We sincerely appreciate your recognition of the contributions of our work. 
Thank you for taking the time to point out the areas for improvement in both the presentation and experimental design. We have carefully revised the manuscript in accordance with your suggestions, with **the changes highlighted in green**, and have submitted the updated version. Below are our point-by-point responses to your comments.\\n\\n# Experimental Issues\\n## 1. The experimental results lack compelling evidence of significant improvement over prior work (particularly when compared to Chen et al. (2024)) and would benefit from robust statistical analyses.\\n## Response:\\n\\nThank you for identifying this issue. We acknowledge that without robust statistical analyses, it is challenging to determine whether the improvements achieved by our model are significant. Therefore, we have incorporated t-test results into Tables 2, 3, 4, and 5. For these tables, the experimental results were first averaged across three subjects before conducting statistical analyses.\\n\\nFrom Tables 2, 3, and 4, it can be observed that our model significantly outperforms Chen et al. (2024) on 6 out of 8 metrics in the CC2017 dataset, 3 out of 4 metrics in the HCP dataset, and all 4 metrics in the Algonauts 2021 dataset.\\n\\nWe note, however, that our model falls short of outperforming Chen et al. (2024) on the 2-Way-Image Identification accuracy (2-Way-I) metric, which measures the semantic similarity between reconstructed results and ground truth. We provide the following explanations for this outcome:\\n\\n- $ Focus$ $of$ $our$ $work $: Chen et al. (2024), as an influential work in reconstructing semantically meaningful videos from fMRI, overlooks structural and motion information, which are the primary focus of our study. As a result, our model was not explicitly designed for semantic decoding.\\n\\n- $ Pretraining$ $vs.$ $Random$ $ Initialization $ : Chen et al. (2024) leverages pretraining on a large, unpaired fMRI dataset (HCP) to learn intrinsic representations from fMRI, followed by fine-tuning on the CC2017 dataset. In contrast, our model was randomly initialized and trained directly on the CC2017 dataset. This \\\"pretraining + fine-tuning\\\" paradigm likely contributes to their superior performance in semantic decoding and represents a potential direction for our future work.\\n- $Retrieval$ $tasks$ $with$ $varying$ $difficulty$ : The 2-Way-Image Identification accuracy is computed by retrieving the corresponding ground truth from two videos (one being the ground truth of the reconstruction and the other randomly selected), with a chance-level accuracy of 50%. This task is relatively simple. To comprehensively assess semantic decoding performance, we evaluated two more challenging retrieval tasks: (i) retrieving the ground truth from 1,200 test videos in the CC2017 dataset (Small) and (ii) retrieving the ground truth from an expanded set of 4,240 videos (Large). The chance-level accuracies for the top-10 retrieval in these tasks are 0.83% and 0.24%, respectively, making them significantly more challenging. Using the reconstruction results from Chen et al. (2024), we evaluated these tasks and found that our model significantly outperformed Chen et al. 
(2024), as shown in the table below.\\n\\n| Model | Test set | **Subject 1** | | **Subject 2** | | **Subject 3** | | **Average** | |\\n|---------------|----------|---------------|-----------|---------------|-----------|---------------|-----------|--------------|-----------|\\n| | | top-10 | top-100 | top-10 | top-100 | top-10 | top-100 | top-10 | top-100 |\\n| Chen et al. (2024) | Small | **3.22** | 19.08 | 2.75 | 16.83 | 3.58 | 22.08 | 3.18* | 19.33* |\\n| **Ours** | Small | 3.08 | **22.58** | **4.75** | **26.90** | **4.50** | **24.67** | **4.11** | **24.72** |\\n| Chen et al. (2024) | Large | 1.75 | 7.17 | 0.83 | 5.17 | 1.25 | 9.00 | 1.28* | 7.11* |\\n| **Ours** | Large | **2.17** | **12.50** | **2.25** | **17.00** | **2.75** | **16.42** | **2.39** | **15.31** |\\n\\n*Note: For the 'small test set', the chance-level accuracies for top-10 and top-100 accuracy are 0.83% and 8.3%, respectively. For the 'large test set', the chance-level accuracies for top-10 and top-100 accuracy are 0.24% and 2.4%, respectively. * denotes our performance is significantly better than the compared method (paired t-test, p<0.05).\\n\\n- $Limitations$ $of$ $the$ $metric$ : The 2-Way-I metric evaluates the average semantic similarity between individual video frames and the ground truth, without considering the inter-frame relationships. This paper, however, focuses on decoding the motion associations between frames. Thus, we believe that not outperforming Chen et al. (2024) on this metric is acceptable.\"}" ] }
BpKbKeY0La
AddSR: Accelerating Diffusion-based Blind Super-Resolution with Adversarial Diffusion Distillation
[ "Rui Xie", "Ying Tai", "Chen Zhao", "Kai Zhang", "Zhenyu Zhang", "Jun Zhou", "Xiaoqian Ye", "qian Wang", "Jian Yang" ]
Blind super-resolution methods based on Stable Diffusion (SD) demonstrate impressive generative capabilities in reconstructing clear, high-resolution (HR) images with intricate details from low-resolution (LR) inputs. However, their practical applicability is often limited by poor efficiency, as they require hundreds to thousands of sampling steps. Inspired by Adversarial Diffusion Distillation (ADD), we incorporate this approach to design a highly effective and efficient blind super-resolution method. Nonetheless, two challenges arise: First, the original ADD significantly reduces result fidelity, leading to a perception-distortion imbalance. Second, SD-based methods are sensitive to the quality of the conditioning input, while LR images often have complex degradation, which further hinders effectiveness. To address these issues, we introduce a Timestep-Adaptive ADD (TA-ADD) to mitigate the perception-distortion imbalance caused by the original ADD. Furthermore, we propose a prediction-based self-refinement strategy to estimate HR, which allows for the provision of more high-frequency information without the need for additional modules. Extensive experiments show that our method, AddSR, generates superior restoration results while being significantly faster than previous SD-based state-of-the-art models (e.g., $7\times$ faster than SeeSR).
[ "Image super-resolution" ]
https://openreview.net/pdf?id=BpKbKeY0La
https://openreview.net/forum?id=BpKbKeY0La
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oqqHUlDIGT", "jpd3t2gJK0", "Y4rE5f4mvc", "Xbpaj5Tqoi", "XQ0Ma4gC42", "LwwPOEv96g", "A2fu0CCdw1" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1729936701850, 1732699326764, 1730711362545, 1731035802955, 1731482423149, 1730359708127, 1732699292830 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6935/Reviewer_aDzM" ], [ "ICLR.cc/2025/Conference/Submission6935/Authors" ], [ "ICLR.cc/2025/Conference/Submission6935/Reviewer_jczQ" ], [ "ICLR.cc/2025/Conference/Submission6935/Reviewer_hXqt" ], [ "ICLR.cc/2025/Conference/Submission6935/Reviewer_dfRH" ], [ "ICLR.cc/2025/Conference/Submission6935/Reviewer_hSbP" ], [ "ICLR.cc/2025/Conference/Submission6935/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a model named AddSR, which addresses the challenge of blind super-resolution by leveraging the capabilities of stable diffusion. The authors propose prediction-based self-refinement and adversarial diffusion distillation methods to optimize the model, resulting in significantly improved efficiency and image quality. AddSR demonstrates superior performance on various degradation scenarios and real-world low-quality images, showcasing its effectiveness on tasks such as image restoration within a remarkably reduced number of inference steps.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces AddSR, a model that significantly advances blind super-resolution by employing stable diffusion, showcasing impressive generative capabilities for reconstructing high-resolution images from low-resolution inputs.\", \"Two innovative aspects of the paper are the prediction-based self-refinement strategy, which efficiently incorporates high-frequency details, and the adversarial diffusion distillation approach, which accelerates the model's inference speed while maintaining quality.\", \"The authors have meticulously designed experiments that thoroughly evaluate AddSR's performance across a variety of datasets, demonstrating the model's robustness and effectiveness in different degradation scenarios.\", \"The writing is clear and well-structured, making the complex technical details accessible and the methodology easy to follow, which is commendable.\"], \"weaknesses\": \"- Could you point out the extract reasons about that directly applying ADD in the BSR task leads to reduced fidelity? The description in this paper is confusing, only through the observation of experimental results, lack of theoretical analysis.\\n\\n- The latest diffusion-based super resolution methods[1,2,3,4] have accelerated the inference process into single step, but this paper only obtains the optimal results with 4-steps, It seems that the method in this paper does not have advantages and practicality. could you compare with them?\\n\\n- PSR sounds insteresting, but when the estimated HR images are rough and distorted in the bigger steps, PSR will transfer the distortion error to the next step, LR is necessary, How to reduce the error?\\n\\n- Time-step aware weighting is usefully, but the functions d(s, t) is too empirical, the times-step in TAD-SR[2] shows more simpler and more effective, similar to diffusion time-step embedding.\\n\\n[1] Wu, Rongyuan, et al. \\\"One-Step Effective Diffusion Network for Real-World Image Super-Resolution.\\\" arXiv preprint arXiv:2406.08177 (2024).\\n\\n[2] He, Xiao, et al. 
\\\"One Step Diffusion-based Super-Resolution with Time-Aware Distillation.\\\" arXiv preprint arXiv:2408.07476 (2024).\\n\\n[3] Noroozi, Mehdi, et al. \\\"You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation.\\\" arXiv preprint arXiv:2401.17258 (2024).\\n\\n[4] Zhang, Aiping, et al. \\\"Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors.\\\" arXiv preprint arXiv:2409.17058 (2024).\", \"questions\": \"Please referring to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents AddSR, a blind super-resolution method using Timestep-Adaptive ADD (TA-ADD) to address challenges of perception-distortion imbalance and high-frequency detail restoration. AddSR achieves high-quality image restoration more efficiently, demonstrating a 7x speed advantage over existing models in experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The task is meaningful and engaging.\\n\\n2. Achieves good performance on certain non-reference metrics.\\n\\n3. High efficiency compared to SeeSR.\", \"weaknesses\": \"Performance Drop: There is a noticeable performance gap between the teacher and student models, with LPIPS dropping from 0.2124 to 0.2953\\u2014a significant decrease. From my experience, LPIPS is a more crucial metric than the non-reference metrics, where the proposed method shows improvement.\", \"notation_error\": \"The notation is incorrect. In L271-272, the authors state, \\\"'*' indicates that the metric is non-reference,\\\" and label LPIPS as non-reference, although it is actually reference-based.\", \"missing_lpips_and_comparison\": \"LPIPS is absent from Table 3, and the fidelity performance appears much lower than existing SOTA models, such as ResShift, SeeSR, and even current single-step SR models. Comparisons with these methods are also missing.\", \"complexity_and_missing_details\": \"The proposed method seems somewhat complex, and certain details are unclear. For instance, the caption for Fig. 2 lacks essential information, details on PSR are not provided, and the input for Fig. 3 is unspecified.\", \"questions\": \"See the weekness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work focuses on enhancing the efficiency of SD-based BSR methods by incorporating the Adversarial Diffusion Distillation (ADD) technique. Additionally, this study points out the perception-fidelity imbalance issue and the impact of image condition when applying ADD, proposing TA-ADD loss and prediction-based self-refinement (PSR) mechanism to address them, respectively.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Some qualitative results demonstrate high-quality details.\\n2. The leading results in perceptual-oriented metrics.\", \"weaknesses\": \"1. The main focus of this work is to emphasize acceleration through ADD, but most of the discussions revolve around addressing the limitations of ADD, then introducing certain tricks (i.e., loss function, improving image condition) to tackle the problems, without demonstrating noticeable effects in improving efficiency.\\n\\n2. 
Another focus of this paper is to resolve the perception-distortion trade-off using the proposed TA-ADD loss. However, from Tables 1 and 3, it is evident that this framework fails to push the boundary of this trade-off. Instead, this framework achieves improved perceptual quality by heavily sacrificing fidelity, as indicated by low PSNR/SSIM scores.\\n\\n3. The lack of comparison with other efficient SD-based methods (e.g., OSEDiff) makes it difficult to verify whether the effectiveness of this framework under similar efficiency.\\n\\n4. On main focus of this work is to address perception-fidelity imbalance. In practice, SD-based SR methods can trade off between fidelity perceptual quality by employing inference tricks (e.g., manually added text prompts). However, this paper does not specify the inference setup of each baseline, nor does it achieve promising results with an acceptable level of fidelity drop.\\n\\n5. In Table 1, the performance of this framework is not comparable with existing SD-based methods in several metrics. Specifically, while the proposed method shows better visual quality, it significantly underperforms in fidelity-oriented metrics such as PSNR and SSIM.\\n\\n6. In Table 2, this framework doesn't deliver similar perceptual quality scores as SUPIR, such that comparing other settings at this point does not adequately demonstrate the advantages of this approach.\\n\\n7. Confusing terminology. For example, in the caption of Figure 3, the term \\\"predicted HR\\\" image is used. Typically, \\\"HR image\\\" refers to ground truth. Since your method also uses the ground truth HR image as a condition for the teacher model, mixing the use of the term \\\"HR\\\" might lead to misunderstanding.\", \"questions\": \"1. This paper emphasizes that the proposed PSR can utilize the HR output to \\\"control\\\" the final output, but the experiment about how to control the intermediate results and influence the final output is missing.\\n\\n2. PSR leverages the intermediate SR result as the condition for the next iteration. However, as shown in Figure 5(b), SD-based SR methods often have hallucination issues. Did you consider error accumulation in this scenario? Did you apply any additional processing to the intermediate output (e.g., blurring) to prevent error accumulation?\\n\\n3. Line 230: \\\"The $\\\\hat{x_0}$ in each step has more high-frequency information to better control the model output.\\\" However, in Figure 5(b), the hallucinated animal head also contains high-frequency components. How do you prove that high-frequency components necessarily provide better guidance?\\n\\n4. This work adopts SeeSR as the backbone and teacher model. Did you use manually added prompts when generating images, such as \\\"clean,\\\" \\\"high-resolution,\\\" or \\\"8k\\\" as used in SeeSR? Since these prompts greatly influence the perception-fidelity trade-off in SR output, their use may also affect your model's performance when intermediate results are regarded as crucial conditions. Could you compare the impact on performance with and without these prompts? \\n\\n5. Why does your method, which uses the same framework as SeeSR and reduces the number of steps from 50 to 4, only achieve a 7$\\\\times$ speedup (which ideally should be 12.5$\\\\times$)?\\n\\n6. The entire training scheme appears to be overfitting to the 4-step setting, however, the ablation about the choice of this hyperparameter is missing. 
What will happen if you use the trained student model for 5 or more steps during inference?\\n\\n7. In the design of the $L_{ta-dis}$ loss, why do you use the output of the student model as the input of the teacher model, and then enforce similarity between the teacher output and the student output? This implies that the purpose of the $L_{ta-dis}$ loss is for the teacher to learn the identity rather than to improve the student. However, since the teacher does not have any learnable layers, this design cannot achieve such an effect.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "4", "code_of_conduct": "Yes"}
Specifically, instead of using the same LR image as a condition across timesteps, the self-refinement strategy utilizes the predicted \\\\hat{x_{0}}\\u200b from previous steps, offering richer information for subsequent denoising. Additionally, to manage the imbalanced loss weights between adversarial and distillation losses at different timesteps, the authors propose a weighting function to adjust the loss weights accordingly.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The proposed methodologies appear straightforward yet powerful. In comparison to previous SR methods, AddSR demonstrates superior performance in both qualitative and quantitative analyses.\", \"weaknesses\": \"1. While the primary focus of the paper is on PSR and TA-ADD, there are significant concerns regarding its novelty. First, although several works utilize the output from previous steps as a condition for the next denoising process*, what distinguishes the proposed TA-ADD from *? Second, since ** also employs a weighting function to adaptively adjust the loss term weights, what advantages does Equation 3 offer compared to their approach? If possible, it would be helpful to include a comparison using their weighting functions.\\n\\n*Andreas Lugmayr, RePaint: Inpainting using Denoising Diffusion Probabilistic Models\\n\\n**Tianwei Yin, One-step Diffusion with Distribution Matching Distillation\\n\\n**Axel Sauer, Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation\\n\\n2. What advantages does AddSR-1 gain from self-refinement? Additionally, please provide a diverse comparison with other one-step diffusion methods* in Table 1 for a more comprehensive understanding.\\n\\n*Y Wang, SinSR: Diffusion-Based Image Super-Resolution in a Single Step \\n\\n*R Wu, One-Step Effective Diffusion Network for Real-World Image Super-Resolution\\n\\n*M Noroozi, You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation\\n\\n3. While this is not a weakness, the reviewer suggests that the authors align the abstracts in both the OpenReview submission and the paper. The abstract on OpenReview seems to be the earlier version before revisions were made.\", \"questions\": \"1. Let the reviewer denote Figure 4's \\\\hat{x}^{1}_{0}...\\\\hat{x}^{4}_{0} to 4-1, 4-2, 4-3, and 4-4. In Figure 4, it appears that 4-1 is more similar to 4-4 than to 4-2. Since 4-1 is generated using 4-2 as a condition, one would expect it to be more similar to 4-2. What could explain this discrepancy?\\n\\n2. Given that PSR can be applied to previous stable diffusion-based SR models without the need for retraining, could the authors present both quantitative and qualitative results showing the performance of PSR on other models?\\n\\n3. Instead of using enhanced conditions in PSR, would it be possible to substitute all LR images with predicted SR images from simpler SR methods like RealESRGAN? If the richer conditional information from PSR is a crucial factor, then using a naively super-resolved image could be more beneficial during the initial timesteps. 
Here, instead of retraining the model with SR image conditions, it might suffice to replace the \\\\hat{x}^{0}\\u200b condition with outputs from RealESRGAN on a pretrained AddSR model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the time of ACs and reviewers, we decide to withdrawal our paper.\"}" ] }
BpIbnXWfhL
RuAG: Learned-rule-augmented Generation for Large Language Models
[ "Yudi Zhang", "Pei Xiao", "Lu Wang", "Chaoyun Zhang", "Meng Fang", "Yali Du", "Yevgeniy Puzyrev", "Randolph Yao", "Si Qin", "Qingwei Lin", "Mykola Pechenizkiy", "Dongmei Zhang", "Saravan Rajmohan", "Qi Zhang" ]
In-context learning (ICL) and Retrieval-Augmented Generation (RAG) have gained attention for their ability to enhance LLMs' reasoning by incorporating external knowledge but suffer from limited contextual window size, leading to insufficient information injection. To this end, we propose a novel framework to automatically distill large volumes of offline data into interpretable first-order logic rules, which are injected into LLMs to boost their reasoning capabilities. Our method begins by formulating the search process relying on LLMs' commonsense, where LLMs automatically define head and body predicates. Then, we apply Monte Carlo Tree Search (MCTS) to address the combinational searching space and efficiently discover logic rules from data. The resulting logic rules are translated into natural language, allowing targeted knowledge injection and seamless integration into LLM prompts for LLM's downstream task reasoning. We evaluate our framework on public and private industrial tasks, including Natural Language Processing (NLP), time-series, decision-making, and industrial tasks, demonstrating its effectiveness in enhancing LLM's capability over diverse tasks.
[ "Large language model", "Logic Rule Learning", "Monte Carlo Tree Search" ]
Accept (Poster)
https://openreview.net/pdf?id=BpIbnXWfhL
https://openreview.net/forum?id=BpIbnXWfhL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y2UxL8h9gs", "xajEHCiL0H", "xDVnv1WWqd", "szrxc29BW0", "oUSmvnshIv", "gGSenyAxhS", "fOl05sNZ41", "cGduA84xK3", "aFZNd9V5Iv", "a2afUTIsMw", "WhaeDktD0W", "TH6Os1nVZC", "RPwxBmxRWK", "QB5FrHGfFw", "P9M3FZBQEb", "JM1qWZOJs9", "AQzM81mLQ4", "9wvFg8u4Rj", "1bD446D4MJ", "0BRZP7jR2Q" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732203179946, 1737523759352, 1732537127504, 1732200863596, 1734811877761, 1732205138419, 1730716941295, 1732200493568, 1732205791453, 1730824761678, 1732564677178, 1732205456376, 1733209553893, 1732317863884, 1730741827061, 1732536440413, 1732301305507, 1732199779791, 1732199261976, 1732202104824 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Area_Chair_c7Zy" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Reviewer_31bM" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Reviewer_fTJ7" ], [ "ICLR.cc/2025/Conference/Submission6286/Reviewer_31bM" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Reviewer_ME5G" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Reviewer_ME5G" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ], [ "ICLR.cc/2025/Conference/Submission6286/Authors" ] ], "structured_content_str": [ "{\"comment\": \"## Response to Weakness 2 (LLM's involvement in RuAG):\\n\\nThank you for your inquiry regarding the use of actual data and LLMs. We address this concern point by point and welcome further discussion.\\n\\n**Response to Weakness 2.1:**\\n> Line 188: it is not clear if the LM looks at the actual data. The authors talk about using the LLM to look at the data and find patterns, but it seems maybe the LLM is only used to look at the schema, or the featuer descriptions to define what are the body and target predicates and it does not use the actual values of these features in the data at all? \\n\\nTo clarify, LLMs indeed analyze both the schema and the value ranges of data., which are derived from **actual data**. This process is akin to traditional feature engineering, which typically relies on human expertise to interpret and define relevant features. By harnessing the extensive commonsense knowledge that LLMs acquire during pretraining, we can significantly reduce human involvement, enhancing both the efficiency and scalability of our method.\\n\\n**Response to Weakness 2.2:**\\n\\n> Choosing the rules is done with MCTS where it is not clear if the LM is used - it seems like rules are applied on the data to see if they work well. 
So if the LLM is not used to find patterns in the data this might be a bit limited.\\n\\nTo ensure clarity and address your concerns, I would like to highlight the specific roles that LLMs play in our approach, particularly in steps 2 and 4, where LLMs are instrumental in selecting the appropriate rules.\\n\\n1) **LLM-aided Predicate Definition:** LLMs are used to assist in defining predicates, including suggesting new target predicates and eliminating impossible ones.\\n2) **Rule Search via MCTS (without LLMs):** MCTS is employed to search for logic rules based on the defined candidate body and head predicates. The search in MCTS is guided by designed rewards that consider the precision and recall of the rules. LLMs are not used in this phase to avoid excessive costs.\\n3) **Post-processing of Learned Rules (without LLMs):** This step involves removing duplicates and translating rules into natural language, without involving LLMs.\\n4) **Learned-Rule-Augmented Generation:**\\n - The learned rules are **explicitly chosen** to be inserted into LLMs: in cooperative games, all learned rules are directly inserted; for relation extraction and anomaly detection, rules are retrieved based on similarity, inspired by RAG.\\n - During LLMs' generation, LLMs perform **implicit rule selection:** As multiple rules are inserted, LLMs evaluate the reliability of these rules and may refine their selection, ensuring that only the most pertinent and trustworthy rules are applied.\\n\\n**Allowing LLMs to examine all the data is prohibitively costly and may undermine their overall understanding.** For example, approaches like HtT and PLLB in cooperative games attempt to process all data to extract rules. However, these methods encounter significant challenges with long-text comprehension, which hampers their ability to distinguish between data samples and efficiently summarize rules. Our experimental results demonstrate that MCTS offers a more effective and practical solution for extracting knowledge, addressing these limitations with greater success.\\n\\n\\n## Response to Weakness 3 (clarity issues):\\n\\nThanks for pointing these out. We response to the clarity issues point by point as following.\\n\\n**Response to Weakness 3.1:**\\n> I don't understand figure 3 well enough - it seems important but is a mix of using text and emoji without proper explanation I can kind of squint at it and guess but it seems really difficult to understand what are the details of each step.\\n\\nThank you for bringing this to our attention. Given the constraints of limited page space, we initially chose to use emojis in the figure, recognizing that this might compromise clarity in certain illustrations. To address this concern, we have implemented the following modifications in the attached version:\\n- Simplified the symbols used for players (`A` for Alice, `B` for Bob), treasure (`box`), and blocks. \\n- Provided clear explanations for the symbols and predicates in natural language: \\n - Visit(A, `yellow block`): Indicates whether Alice has visited the yellow block. \\n - Stand(A, `yellow block`): Indicates whether Alice is currently standing on the yellow block. \\n - DisX(A, `box`) = -6: Represents the horizontal distance between Alice and the diamond, measured as -6. 
\\n\\nWe greatly appreciate any further suggestions you may have for optimizing the figures!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer 31bM,\\n\\nThank you for your positive support to our work. We greatly appreciate the opportunity to refine our work through your comments. \\n\\nWe sincerely hope that our responses have effectively addressed all your questions and concerns. If there are any additional questions, we are more than happy to provide further input. \\n\\nBest regards, \\nThe Authors\"}", "{\"comment\": \"## Response to Question 1 & 2:\\n> Question 1: Did you use only the \\\"train\\\" split to construct the rules, testing them exclusively on the \\\"test\\\" split?\\n> Question 2: How did you partition the data for the cooperative game into train, validation, and test sets?\\n\\nYes, we adhere to standard evaluation protocols across all tasks, where we derive logic rules exclusively from the training data and assess the performance of rule-augmented LLMs using the test split. Below are the details of our training and evaluation procedures:\\n\\n| Task | Training Data for Logic Rule Learning | Evaluation Data |\\n|------------------|-------------------------------------------------|-----------------------------------------------------|\\n| Relation Extraction | Training split | Test split |\\n| Anomaly Detection | Training split | Test split |\\n| Cooperative Game | We gathered 1,000 episodes of trajectories using a crafted policy where the agent follows the optimal policy with a probability \\\\(p = 0.7\\\\) and a random policy otherwise. | We randomized the initial state and averaged the accumulative rewards over 30 episodes. |\\n\\nThis structure ensures a clear separation of data used for rule extraction and performance evaluation, maintaining the fair of our results.\\n\\n\\n## Response to Question 3:\\n> Given the reliance on deterministic predicates, how do you anticipate this approach adapting to real-world RAG scenarios requiring dynamic, knowledge-based decision-making?\\n\\nAs noted in our response to Weakness 3, it's important to highlight that our method also effectively handles non-deterministic scenarios:\\n* We employ a dynamic knowledge retrieval mechanism from large language models (LLMs), leveraging semantic similarities with incoming inputs. This approach mirrors the flexibility found in Retrieval-Augmented Generation (RAG).\\n* Additionally, LLMs are equipped with multiple logic rules and corresponding precision metrics, enabling them to judiciously choose the most applicable rules for each specific situation.\\n* We have further validated the versatility of our approach through its successful application in a real-world industrial task, specifically in detecting Unauthorized Party Abuse (UPA). This demonstrates its efficacy in dynamic and knowledge-intensive environments.\"}", "{\"metareview\": \"The paper proposes a novel framework that distills offline data into logical rules using Monte Carlo Tree Search (MCTS), integrating these rules with LLMs to enhance reasoning across tasks. Strengths include its scalability, computational efficiency compared to methods like RAG, and its demonstrated effectiveness in diverse tasks (e.g., relation extraction, anomaly detection, and cooperative games). Reviewers appreciated its innovation in leveraging LLMs for rule-based reasoning and its empirical results showing improvement over baselines. 
However, weaknesses include unclear scoping of the method's generality, limited discussion of when rules are applicable, and insufficient clarity in some sections (e.g., predicate definitions, experimental details). The authors did a satisfactory job of addressing these concerns during the rebuttal. The paper received borderline scores of 5, 6, and 8, where the reviewer who gave 5 asked clarification questions and raised points about an inherent limitation of the method, namely that the rule-based approach may be difficult to apply in real-world tasks. This reviewer did not participate further in the discussion after the authors responded. Personally, I agree that this limitation exists, but I think it is tolerable and the paper is interesting, so I lean towards acceptance of this paper.", "additional_comments_on_reviewer_discussion": "Reviewers highlighted strengths in the novelty of using LLMs to generate logical rules and its strong empirical results across tasks, but raised concerns about unclear scoping (when the method is applicable), insufficient experimental details, and clarity issues (e.g., predicate definitions, figure explanations). They also requested fairer comparisons with RAG and ICL. The authors addressed these by clarifying the method's scope, tasks, and predicate definitions, improving figure explanations, adding baselines for RAG and ICL, and revising experimental details (e.g., data splits, prompts). These responses satisfied most reviewers, with some raising scores and acknowledging the improvements. Overall I think this paper is novel and studies an interesting topic on finding rules with LLMs and augmenting generations with the rules."}", "{\"comment\": \"**Response to Weakness 3.3:**\\n> The paper talks about \\\"impossible body predicates\\\". What are those? Why are they impossible? What is exactly the input provided to the LLM to perform this task (please don't send me to the appendix in author response). Similar, \\\"suggesting new target predicates\\\" how? What is the task given to the LLM to do that? All those seem like crucial aspects that are not explained. I can imagine the LM doing an OK job in these things with some prompt but they don't seem necessarily like something that is well defined that even humans can do with reasonable agreement.\\n\\nWe would like to address your concerns point by point:\\n\\n**How do LLMs assist in defining predicates?**\\n- Eliminating Body Predicates: LLMs achieve this by analyzing the task description, logical rule definitions, and candidate predicates with their descriptions. For example, in relation extraction, certain relations like `appears_in` (denoting a player's participation in an event) are filtered out by LLMs based on their semantic information, as they are irrelevant to other relations.\\n- Suggesting New Target Predicates: LLMs are guided by logical rule definitions, task descriptions, and the data schema to propose new target predicates. For example, in a cooperative game, the initial task-relevant predicate might be `GameWin`, representing whether the agents win the game. 
However, after analyzing the game description and the agents' observation and action spaces, LLMs may suggest exploring logical rules involving agents standing on blocks of different colors, as these could play a significant role in achieving a win.\\n\\n**Can LLMs work in defining predicates?** For the body predicates elimination, we provide the initial predicates so that they just remove some of them; as for the head predicates, we found that LLMs can generate some predicates that easy to be extract from the initial predicates as well.\\n\\nWe revise the Line 208 - 218 for clarity and provide detailed prompts in Figure A3, A4 in Appendix.\\n\\n**Response to Weakness 3.4:**\\n> HtT -- this seems like a key baseline that is not explained properly.\\n\\nHtT shares similar motivation that use constructed rule library to enhance LLMs' generation. They builds a rule library by having an LLM generate and verify rules over training examples. However, our method is more computationally efficient in learning rules by structuring them in a systematic manner. This enables MCTS to learn rules effectively while significantly reducing the reliance on extensive LLM calls.\\n\\nWe incooperate this in the revised version (Line 264 - 286).\\n\\n**Response to Weakness 3.5:**\\n> Line 285: During the rule extraction process, we leveraged the LLM to filter out 15% of the relationships that were unlikely to serve as valid predicates. Unclear;\\n\\nTo reduce searching cost in MCTS, LLMs are prompted to eliminate impossible body predicates (as said in **Response to Weakness 3.3**) according to the relation description and rule description and we found there are likely 15\\\\% of the total body predicates candidates are eliminated in this task, including `vs`\\uff0c`appears_in` and `player_of`.\\n\\nWe revise this sentence for better understanding in the attached version of our paper (Line 299 - 301). \\n\\n\\n**Response to Weakness 3.6:**\\n> Experimental details: There is very little detail in the paper on what are the featuers/labels/target and body predicates in each of the experiments, this makes it hard to understand the task. There are a few examples in a table but this is insufficient for understanding.\\n\\nThanks for pointing this out. As we explained in **Response to Weakness 3.2**, the features and labels come from the dataset, and we translate them into body predicates and head predicates as follows:\\n\\n- Relation Extraction: Features and labels refer to the relationships between entities, such as `in0(A, C)` (A is located in country C). We define the target and body predicates as follows:\\n - Target Predicate: A chosen relation, e.g., `in0(A, C)`.\\n - Body Predicates: Remaining relations excluding the target predicate. 
For example, if the target predicate is `in0(A, C)`, the body predicates include all other relations.\\n- Log-Based Anomaly Detection: Features indicate whether specific log events occurred, while labels `Anomaly` indicate whether the log sequence is abnormal.\\n - Target Predicate:`Anomaly`\\n - Body Predicates: Log events, such as `E5` (receiving a block) and `E7` (write operation exception).\\n- Cooperative Game: Observations and actions in the collected data are treated as features, and the labels `GameWin` indicate whether the team won.\\n - Target Predicate: initial target predicate is task-relevant, i.e., `GameWin`\\n - Body Predicates: Transformed from observations and actions, e.g., `IsYellow(Alice, Right)` (Alice's right block is yellow) and `Move(Bob, Right)` (Bob moves right).\\n\\nWe have added Table 1 in the revised version to enable a better understanding of the features and predicates.\"}", "{\"summary\": \"This work introduces the RuAG framework that automatically distills large volumes of offline data into logical rules , which are then included in LLM prompts to enhance their reasoning capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. RuAG offers a scalable solution for integrating extensive domain knowledge into LLMs that improves upon RAG or SFT.\\n2. Model performance is tested on a wide array of tasks and show improvement on strong baselines\\n3. RuAG is more computationally efficient than other methods that summarize external dataset as knowledge storage, as the calls to API models only happen once during logic rule constructions.\", \"weaknesses\": \"1. Ablation studies in Table 5 could include RAG or SFT on open-source LLMs, as the current baselines only include COT which does not include external knowledge.\\n2. How do LLMs suggest new rules to explore and detect impossible body predicates? These parts seem unclear to me.\", \"questions\": \"1. L190-195 probably has some copy-pasting errors?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Response to Weakness 3 (Applicability of Rules):\\n\\nThanks for your comments, below are our point-wise response to this concern:\\n\\n**Response to Weakness 3.1:**\\n> The rules generated in this paper may not directly translate to real-world retrieval-augmented generation (RAG) settings, which often require \\\"external knowledge\\\" represented in both body and target predicates. For instance, the example in Figure 2 does not reflect how real-world decisions about weather predictions are made. \\n\\nThank you for raising this point. We believe our learned rules are highly applicable to real-world retrieval-augmented generation (RAG) settings for the following reasons:\\n\\n***Direct Translation to RAG Settings:*** The logic rules learned by our method are inherently interpretable and can be seamlessly translated into natural language. This capability is demonstrated in lines 73\\u201376 of the paper. Additionally, because the rules can be expressed in natural language, they can be effectively retrieved using LLM or semantic similarity measures, making them well-suited for integration into RAG pipelines.\\n\\n**Clarification of Figure 2's Purpose:** We would like to clarify that Figure 2 serves as a simplified illustration of the logic rules generated by our method and is not intended to represent the full pipeline. 
While Figure 2 shows how rules encode external knowledge in a structured format, our approach extends far beyond this, leveraging these rules to guide large language models (LLMs) in making predictions or decisions. Specifically, the rules assist LLMs by providing external knowledge in an interpretable and accessible way, enabling them to handle complex tasks that go beyond the scope of direct rule application.\\n**Response to Weakness 3.2:**\\n\\n> The rules discussed here seem applicable primarily to scenarios with deterministic target predicates (e.g., classification tasks).\", \"our_method_is_flexible_and_capable_of_addressing_non_deterministic_and_non_classification_tasks_for_the_following_reasons\": \"**Handling Non-Deterministic Predicates:** For tasks involving non-deterministic predicates, our method learns rules with the high precision and incorporates these rules, along with their associated precision scores, into the input for large language models (LLMs). By including this information, LLMs can rely on their own reasoning capabilities to evaluate the reliability of the rules and select appropriate ones for predictions. This process enables our method to handle tasks where target predicates are not strictly deterministic or exhibit variability.\\n\\n**Addressing Non-Classification Tasks:** In non-classification tasks, such as decision-making or strategic reasoning, logic rules act as providers of domain-specific knowledge. For example, in game-related scenarios, rules might supply hidden information, such as identifying key blocks necessary to achieve a winning strategy. By presenting this external knowledge in an interpretable form, the rules enhance the LLM\\u2019s ability to solve complex tasks by complementing its generative and reasoning capabilities.\\n\\n**Response to Weakness 3.3:**\\n> How might this approach be extended to real-world scenarios, as suggested in the conclusion?\", \"our_method_is_highly_adaptable_to_various_real_world_scenarios_for_several_compelling_reasons\": \"- Inspired by human reasoning processes, large language models (LLMs) excel in task resolution when bolstered by external knowledge sources [1]. Logical rules, serving as a robust framework for this knowledge, have proven universally effective across diverse real-world applications [2][3][4]. Furthermore, in practical settings, researchers often employ these rules to analyze and select features during the feature engineering phase.\\n- By integrating with LLMs, our method is both flexible in handling raw data (e.g., paragraphs), addressing out-of-distribution cases, filtering inaccurate rules, and combining multiple rules seamlessly. \\n- Leveraging LLMs' assistance in predicate definition, which draws on human commonsense reasoning, our method becomes more versatile across diverse tasks. \\n\\nTo showcase its practicality, we apply our method to an industrial task\\u2014Unauthorized Party Abuse (UPA) detection\\u2014demonstrating its effectiveness and real-world applicability. \\n\\n\\n[1] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., ... & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive nlp tasks. NeurIPS 2020.\\n\\n[2] Teru, Komal, Etienne Denis, and Will Hamilton. \\\"Inductive relation prediction by subgraph reasoning.\\\" ICML 2020.\\n\\n[3] Siyuan Wang, Zhongyu Wei, Yejin Choi, and Xiang Ren. Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs. ACL 2024. 
\\n\\n[4] Morishita, T., Morio, G., Yamaguchi, A., & Sogawa, Y. Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus. NeurIPS 2024.\"}", "{\"comment\": \"# Response to comments from Reviewer 31bM\\n\\nWe sincerely appreciate your recognition of our work. Below, we address your concerns point by point.\\n\\n## Response to Weakness 1:\\n>Ablation studies in Table 5 could include RAG or SFT on open-source LLMs, as the current baselines only include COT which does not include external knowledge.\\n\\nThanks for your suggestion. We add experimental results of RAG and ICL on relation extraction and log-based anomaly detection as follows:\\n\\n\\n| **Method** | **Relation Extraction (F1/Precision/Recall)** | **Log-based Anomaly Detection (F1/Precision/Recall)** |\\n|------------|----------------------------------------------|------------------------------------------------------|\\n| **Vanilla** | 46.94% / 69.61% / 35.41% | 60.10% / 47.05% / 83.16% |\\n| **ICL** | 50.26% / 74.09% / 38.02% | 69.77% / 78.95% / 62.50% |\\n| **RAG** | 52.30% / 78.64% / 39.17% | 84.32% / 98.97% / 73.46% |\\n| **Ours** | 60.42% / 69.44% / 53.48% | 92.59% / 100% / 86.21% |\\n\\n\\nDue to limited computational resources, we are not able to provide SFT on open-source LLMs. Instead, we provide supervised learning results, such as the DL-based methods in relation extraction (i.e., CNN, BiLSTM, Bert, Context-aware in Table 2) and anomaly detection (i.e., DeepLog, LogRobust in Table 3), and behavior cloning in cooperative games (Table 4).\\nAccording to the results, our method outperforms all the methods with external knowledge, whether that knowledge is injected through in-context information or supervised training. \\n\\n## Response to Weakness 2:\\n> How do LLMs suggest new rules to explore and detect impossible body predicates? These parts seem unclear to me.\\n\\nWe sincerely appreciate this valuable concern! LLMs assist in exploring new rules and identifying impossible body predicates through task-specific guidance. We explain this separately as follows:\\n\\n- **Eliminating Body Predicates:** LLMs analyze the task description, logical rule definitions, and candidate predicates with their descriptions to eliminate irrelevant predicates. For example, in relation extraction, certain relations like appears_in (denoting a player's participation in an event) are filtered out by LLMs based on their semantic information, as they are irrelevant to other relations.\\n- **Suggesting New Target Predicates:** LLMs are prompted to propose new target predicates according to logical rule definitions, task descriptions, and the data schema. For example, in a cooperative game, the initial task-relevant predicate might be GameWin, representing whether the agents win the game. However, after analyzing the game description and the agents' observation and action spaces, LLMs may suggest exploring logical rules involving agents standing on blocks of different colors, as these could play a significant role in understanding the mechanics of the game and achieving a win.\\n\\nIn the revised version, we modify Lines 208 - 218 for better understanding and provide the relevant prompts in Figures A3 and A4 in the Appendix.\\n\\n## Response to Question 1:\\n> L190-195 probably has some copy-pasting errors?\\n\\nThanks for pointing this out. 
We will revise it in the future version.\"}", "{\"summary\": \"The paper presents a rule-augmented generation approach, where rules are learned from the training dataset using Monte Carlo Tree Search (MCTS).\\nIt shows that leveraging rules can outperform retrieved passages or even other supervised trained models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper contributes by integrating rule-based augmentation with generation models, leveraging rules learned from data.\", \"weaknesses\": \"There are several areas of concern:\\n\\n1. Clarity in Section 3.1 (LLM-based Logic Rule Search): This section is difficult to understand. Here are some follow-up questions for clarification:\\n\\n 1.1. What do the initial predicates look like across the three different datasets?\\n\\n 1.2. How does the LLM eliminate impossible predicates? Could you provide prompt examples?\\n\\n 1.3. How does the LLM propose new target predicates? Any prompt examples for this?\\n\\n2. Performance of Rules Alone: It appears that in cases where the rules generalize well to the test set, predictions might be straightforward using only the rules. However, this may not extend to more complex or varied test cases.\\n\\n3. Applicability of Rules: The rules generated in this paper may not directly translate to real-world retrieval-augmented generation (RAG) settings, which often require \\\"external knowledge\\\" represented in both body and target predicates. For instance, the example in Figure 2 does not reflect how real-world decisions about weather predictions are made. The rules discussed here seem applicable primarily to scenarios with deterministic target predicates (e.g., classification tasks). How might this approach be extended to real-world scenarios, as suggested in the conclusion?\", \"questions\": \"1. Did you use only the \\\"train\\\" split to construct the rules, testing them exclusively on the \\\"test\\\" split?\\n2. How did you partition the data for the cooperative game into train, validation, and test sets?\\n3. Given the reliance on deterministic predicates, how do you anticipate this approach adapting to real-world RAG scenarios requiring dynamic, knowledge-based decision-making?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Dear Authors,\\n\\nThank you for your thorough response. I have no further questions.\"}", "{\"comment\": \"**Response to Weakness 3.7:**\\n> Experimental evaluation: a lot of the comparisons in the experiment are between models that have very different abilities. Comparing GPT-4 to some CNN or BiLSTM or BERT is not a fair comparison at all. Baselines should be between the same model that uses different methods for providing in-context information and this is missing.\\n\\nThanks for your question. The comparison is fair since SOTA methods like CNN, BiLSTM, and BERT are trained on the provided training data, whereas LLMs are not fine-tuned on it. The experimental results support our claim that learned logic rules can enhance LLM generation, enabling it to perform competitively with or even surpass SOTA methods across various domains.\\n\\nWe also provide baselines with different methods for providing in-context information, like HtT in relation extraction, LogGPT in anomaly detection and Vanilla, HtT, ICL-good, ICL-contrastive, PLLB, and RAG in cooperative game. 
Additionally, we have included results for Vanilla, ICL, and RAG in relation extraction (Tables 2) and anomaly detection (Tables 3) in the revised version. **Among all the methods providing in-context learning information, our method achieves the best performance and enjoys the efficient computation.**\\n\\n| **Method** | **Relation Extraction (F1/Precision/Recall)** | **Log-based Anomaly Detection (F1/Precision/Recall)** |\\n|------------|----------------------------------------------|------------------------------------------------------|\\n| **Vanilla** | 46.94% / 69.61% / 35.41% | 60.10% / 47.05% / 83.16% |\\n| **ICL** | 50.26% / 74.09% / 38.02% | 69.77% / 78.95% / 62.50% |\\n| **RAG** | 52.30% / 78.64% / 39.17% | 84.32% / 98.97% / 73.46% |\\n| **Ours** | 60.42% / 69.44% / 53.48% | 92.59% / 100% / 86.21% |\\n\\n**Response to Weakness 3.8:**\\n> Experimental evaluation: The main claim in the abstract and introduction is that using rules in context is a better alternative to in-context learning and RAG - but I don't see any RAG or ICL baselines in section 4.1 or section 4.2 so how can this claim be made? Did the author try ICL and RAG in 4.1 and 4.2? or only in 4.3? It would also be good to have much more detail on what the ICL and RAG baselines look like in Section 4.3.\\n\\nWe would like to address your concern point by point.\\n\\n**More baselines in relation extraction and anomaly detection:** Thank you for your suggestions. We have added experimental results for Vanilla, ICL, and RAG in both relation extraction and anomaly detection, as shown in the table in **Response to Weakness 3.7**. According to the experimental results, our method outperforms Vanilla, ICL, and RAG across diverse tasks, supporting our claim that RuAG is a viable alternative to ICL and RAG.\\n\\n**More implementation details on baselines in cooperative game:** In the cooperative game task (Sec 4.3), we provide additional details on how different methods deliver in-context knowledge: `ICL-good` prompts LLMs with three good demonstrations; `ICL-Contrastive` provides external knowledge through two good and two bad demonstrations; and RAG retrieves timesteps with similar observations, informing LLMs of both observations and actions.\\nAs discussed in Lines 409\\u2013411, due to the lengthy trajectory descriptions in the game, `ICL-good` and `ICL-Contrastive` often struggle to interpret the examples effectively, frequently failing to act intuitively\\u2014such as moving directly toward the treasure. Additionally, RAG sometimes retrieves poor action samples, which can mislead the LLMs.\\n\\nWe revise Sec. 4.3 in the attached version to include above.\\n\\n**Response to Weakness 3.9:**\\n> Minor: lines 189 and onwards have the same few sentences twice. \\n\\nThanks for appointing this. We address this in the revised version.\\n\\n## Response to Question 1:\\n> Table 4 -- where do confidences come from?\\n\\nThe 'confidence' in Table 4 refers to the rule's precision by grounding it in the training set. We assume that a rule with high precision is reliable and pass this information to LLM for more considering the uncertainty in learned rules. We replace the term 'confidence' with 'precision' in the revised version for better understanding.\"}", "{\"comment\": \"Dear Reviewer fTJ7,\\n\\n\\nAs the rebuttal period nears its end, we wanted to kindly follow up on our previous response. We sincerely hope the revisions and clarifications provided address your concerns and align with the criteria for an acceptable-level score. 
\\n\\n\\nIf you have any additional comments or suggestions, we would be more than happy to address them before the discussion period concludes.\\n\\n\\nThank you again for your thoughtful feedback and for considering our responses. We truly appreciate your time and effort.\\n\\n\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"We appreciate your positive feedback and recognition of our work!\"}", "{\"summary\": \"This paper proposes to use LLM to create abstract rules that can be provided in context for better decision making when using LLM. The method contains a few steps where first the LM is used to define the features and labels for which we would like to forumlate rules over. Then MCTS is used to find the rules that explain the data best, and last the rules are provided in-context for prediction. The idea of using LLM to create features to search over given data is very nice. The method is evaluated on tasks of relation extraction, anomaly detection and a multi-agent game. Using GPT-4 is much better than weaker models (unsurprisingly) and using the rules is better than some prior work termed HtT (which needs more explanation). In the game experiment there is advantage over ICL and RAG.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea of using large language models for \\\"feature extraction\\\" is very interesting. It is related to https://arxiv.org/abs/2409.08466 (which I don't expect to be in the paper since it's very recent)\", \"The empirical results show that for some use cases abstract rules can be leveraged to improve performance.\"], \"weaknesses\": \"* The paper needs better **scoping** -- when is this method likely to be useful and when not? For example, the point of machine learning is that some things are hard to express by rules -- for example, what makes a face a face or a cat a cat? We learn machine learning models for cases where rules are hard to formulate. Is this method restricted to things that can be defined by rules or not? I think it's important to discuss this. Second - the method assumes an input of N features with some feature description. Often nowadays we work with more raw data like a sequence of words etc. Is this restricted to such cases? The tasks chosen are diverse but rare and it is not clear how general the method is and when we should expect it to work? Overall the generality of the method remains unclear and needs further discussion.\\n\\n* Line 188: it is not clear if the LM looks at the actual data. The authors talk about using the LLM to look at the data and find patterns, but it seems maybe the LLM is only used to look at the schema, or the featuer descriptions to define what are the body and target predicates and it does not use the actual values of these features in the data at all? Choosing the rules is done with MCTS where it is not clear if the LM is used - it seems like rules are applied on the data to see if they work well. So if the LLM is not used to find patterns in the data this might be a bit limited.\\n\\n* The paper has many clarity issues:\\n** Clarity - I don't understand figure 3 well enough - it seems important but is a mix of using text and emoji without proper explanation I can kind of squint at it and guess but it seems really difficult to understand what are the details of each step.\\n\\n** Clarity: line 190: \\u201cinitial the features as the body predicates\\u201d - what are the features exactly? 
can you give some examples and intuition at this point already?\\n\\n** The paper talks about \\\"impossible body predicates\\\". What are those? Why are they impossible? What is exactly the input provided to the LLM to perform this task (please don't send me to the appendix in author response). Similarly \\\"suggesting new target predicates\\\" how? What is the task given to the LLM to do that? All those seem like crucial aspects that are not explained. I can imagine the LM doing an OK job in these things with some prompt but they don't seem necessarily like something that is well defined that even humans can do with reasonable agreement.\\n\\n** HtT -- this seems like a key baseline that is not explained properly.\\n\\n** Line 285: During the rule extraction process, we leveraged the LLM to filter out 15% of the relationships that were unlikely to serve as valid predicates. Unclear;\\n\\n** Experimental details: There is very little detail in the paper on what are the featuers/labels/target and body predicates in each of the experiments, this makes it hard to understand the task. There are a few examples in a table but this is insufficient for understanding.\\n\\n* Experimental evaluation: a lot of the comparisons in the experiment are between models that have very different abilities. Comparing GPT-4 to some CNN or BiLSTM or BERT is not a fair comparison at all. Baselines should be between the same model that uses different methods for providing in-context information and this is missing.\\n\\n* Experimental evaluation: The main claim in the abstract and introduction is that using rules in context is a better alternative to in-context learning and RAG - but I don't see any RAG or ICL baselines in section 4.1 or section 4.2 so how can this claim be made? Did the author try ICL and RAG in 4.1 and 4.2? or only in 4.3? It would also be good to have much more detail on what the ICL and RAG baselines look like in Section 4.3.\\n\\n* Minor: lines 189 and onwards have the same few sentences twice.\", \"questions\": \"** Table 4 -- where do confidences come from?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer fTJ7,\\n\\nAs the discussion period draws to a close, we would like to confirm that our responses have thoroughly addressed the questions and concerns raised in your initial reviews. We are confident that we have effectively addressed both major and minor points in our replies. If there are any further questions or clarifications needed, we are eager to continue the discussion. If you find our responses satisfactory, we kindly request you to consider raising the score.\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"In light of the additional baseline I will raise my score.\"}", "{\"comment\": \"**Response to Weakness 1.3:**\\n> 1.3. How does the LLM propose new target predicates? Any prompt examples for this?\\n\\nThank you for your question. In our method, LLMs suggest new target predicates based on the definitions of logical rules, task descriptions, and data schemas. For example, in a cooperative game, LLMs utilize these definitions along with descriptions of the game, agents\\u2019 observation spaces, and action spaces to propose new target predicates, as demonstrated in the prompt:\\n```txt\\nYou are a helpful assistant. 
Your task is to build body predicate and the head predicate for searching logic rules of a game through the Monte Carlo Tree Search. \\n\\n**Game Description:**\\nTwo player, Alice and Bob are collaborating to obtain a treasure in the grid world. At each timestep, the agent can observe their own paritial observation of the grid world, consisting of the following informations:\\n1) the color of the blocks surrounding the agent\\n2) the color of the block where the agent is located\\n3) the relative position of the agent's teammate to the treasure\\n4) the relative position of the agent to the treasure \\n\\nEach agent can choose to move up, move down, move right, move left and no action. If move towards an unmovable block, the agent gets a penalty of reward = -0.1. The two agents share a team reward whose values fails in [-0.2, -0.1, 0, -10, -10.1, 99.9, 100].\", \"there_are_blocks_with_five_colors\": \"movable white blocks, unmovable blocks, movable yellow blocks, movable purple blocks, movable skyblue blocks and the green blocks representing the treasure. Yellow, purple, skyblue blocks are with different functions but we do not know what will happen if any of the agent stand on any blocks. At any timestep, the agents can not stand on a same block.\\n\\n**Definitions of Logic Rule:**\", \"the_logic_rules_are_defined_as\": \"[body predicates: (feature 1 satisfies condition 1) & (feature 2 satisfies condition 2) & ...] -> [head predicate: a special game state]. \\nAs an example, (relative x-position of agent equals value 1) & (relative y-position of agent equals value 2) -> Alice obtains green block. \\n\\n**Your Task: Define Head Predicates:**\\nTo help the agents know the environment better, please suggest all events that may significant for the game win. Below are some examples,\\n - if alice obtain treasure\\n - if bob obtain the green block\\n - the agent get a reward of -10. \\n\\nPlease think step by step to finish your task.\\n```\\n\\nWe include the above prompt in Figure A4 in Appendix.\\n\\n\\n\\n\\n## Response to Weakness 2:\\n> Performance of Rules Alone: It appears that in cases where the rules generalize well to the test set, predictions might be straightforward using only the rules. However, this may not extend to more complex or varied test cases.\\n\\nWe agree that directly applying rules may work well for straightforward cases but is limited in addressing more complex or varied test cases. However, our proposed method is both flexible and capable of handling such scenarios for the following reasons:\\n\\n**Leveraging LLMs for Complex Tasks:**\\nFor tasks where logic rules cannot be directly applied, our method utilizes the generation capabilities of LLMs in conjunction with logic rules. This approach mimics human reasoning, where LLMs are guided by the external knowledge encapsulated in the rules to address complex cases. Furthermore, while traditional rule-based methods struggle with raw data inputs (e.g., paragraphs of text), our method enables processing such data by relying on LLMs\\u2019 generative abilities.\\n\\n**Addressing Out-of-Distribution and Misleading Rules:**\\nEven for tasks where logic rules are directly applicable, relying solely on rules can encounter challenges with out-of-distribution samples or inaccuracies caused by misleading rules. Our method addresses these issues by prompting LLMs to integrate relevant knowledge from multiple learned rules, enhancing the robustness and accuracy of predictions or decisions. 
This allows the system to generalize effectively, even when faced with unforeseen or edge-case scenarios.\\n\\nBy combining logic rules with LLMs, our method ensures greater adaptability and robustness across a wide range of tasks, including those with complexities or variations that traditional rule-based systems cannot handle effectively.\"}", "{\"comment\": \"# Response to comments from Reviewer fTJ7\\nThank you for your constructive review to help further improve our paper. Below, we provide a point-by-point response to your concerns.\\n\\n## Response to Weakness 1 (Clarity in Section 3.1):\\n**Response to Weakness 1.1:**\\n> 1.1. What do the initial predicates look like across the three different datasets?\\n\\nThank you for question. A predicate is a function or condition that evaluates input and returns a boolean value. A logic rule can be represented as `BodyPredicate1=True & BodyPredicate2=True & ... -> TargetPredicate=True`. Initial predicates are derived by converting dataset features into binary variables. Below are more details for each task:\\n- Relation extraction: This task identifies relationships among entities within a paragraph. The dataset contains 20 distinct relations, such as `in0(A, C)` (A is located in country C). \\n - **Target predicate**: The specific relation being predicted, e.g., `in0(A, C)`.\\n - **Body predicate**: Remaining relations excluding the target predicate. For instance, if the target predicate is `in0(A, C)`, body predicates are all other relations in the dataset.\\n- Log-based anomaly detection: This task classifies whether a sequence of log events isabnormal. Each sequence comprises log events such as `E5` (receiving a block).\\n - **Target predicate**: `Anomaly`, which indicates whether the log sequence is abnormal.\\n - **Body predicates**: Idividual log events, like `E5` (receiving a block) and `E7` (write operation exception).\\n- Cooperative Game: In this scenario, two players collaborate to locate a treasure. At each timestep, agents take actions based on their observations, such as the color of surrounding blocks.\\n - **Target predicate**: `GameWin`, indicating whether the agents successfully win the game.\\n - **Body predicates**: Observations and actions transformed into predicates, e.g., `IsYellow(Alice, Right)` (Alice's right block is yellow), `Move(Bob, Right)` (Bob moves right). \\n\\nTo enhance clarity, we have added Table 1 in the revised paper, to address your concerns and make the concept of initial predicates easier to understand.\\n\\n**Response to Weakness 1.2:**\\n> 1.2. How does the LLM eliminate impossible predicates? Could you provide prompt examples?\\n\\nThank you for question. Taking the relation extraction task as an example, we utilize the LLM to reduce searching cost in MCTS by filtering out relation candidates that do not pertain to other relations. This process considers the logical rule definition and detailed relation descriptions. The LLM is prompted with the following instructions:\\n``` txt!\\n I need your assistance in completing the preprocessing for generating logical rules between relationships. 
\\n\\n The logical rules follow this format (where the predicates before the arrow are considered Body predicates and the ones after the arrow are Head predicates):\\n\\n - relation1 -> relation3: This means if relation1 exists between entity A and entity B, then relation3 also exists between entity A and entity B.\\n - relation1, relation2 -> relation3: This means if relation1 exists between entity A and entity B, and relation2 exists between entity B and entity C, then relation3 exists between entity A and entity C.\\n\\n Given the following twenty relations and their descriptions, I need you to identify which relations are suitable for being Body predicates. Please remove the ones that are not appropriate for Body predicates.\\n {relationships}\\n\\n Please return the results as a dictionary where the key represents the relations suitable as Body predicates, and the value explains why.\\n```\\n\\nWe include the above prompt in Figure A3 in Appendix.\"}", "{\"comment\": \"# Response to comments from Reviewer ME5G\\nThank you for acknowledging our idea and its robust empirical support. The [referenced paper](https://arxiv.org/abs/2409.08466) reinforces our argument for using interpretable predicates. We build upon this by exploiting logical rules\\u2014relationships among these predicates\\u2014to enhance the generative capabilities of LLMs. This integration is grounded in the logic that:\\n1) Logical rules represent a fundamental form of external knowledge crucial for human reasoning and are readily translatable into the natural language constructs that LLMs can understand.\\n2) As the demand for explainable AI grows, like methods centered on causality and concept bottleneck models, our approach provides a significant advancement in using interpretable data to improve LLM generation.\\n\\n## Response to Weakness 1 (Scoping):\\nWe appreciate your suggestion to clarify the scope, which has greatly enhanced our paper. We've provided a point-by-point response to your concerns and incorporated them into the revised version. We're happy to discuss further if needed.\\n\\n**Response to Weakness 1.1:**\\n> The paper needs better scoping -- when is this method likely to be useful and when not? For example, the point of machine learning is that some things are hard to express by rules -- for example, what makes a face a face or a cat a cat? We learn machine learning models for cases where rules are hard to formulate. Is this method restricted to things that can be defined by rules or not? I think it's important to discuss this.\\n\\n\\n**Scope/Generality:** our work is flexible to address all the tasks where are underlying logical or structured relationships within the data, which can be distilled into explicit rules. This is widely-appeared in real world[1][2][3], which ensures the generality of our method. \\n\\n*Therefore, our method is versatile and not limited to things that can solely be defined by rules.* \\n\\nWe add the above discussion in the revised version.\\n\\n[1] Teru, Komal, Etienne Denis, and Will Hamilton. \\\"Inductive relation prediction by subgraph reasoning.\\\" ICML, 2020.\\n\\n[2] Siyuan Wang, Zhongyu Wei, Yejin Choi, and Xiang Ren. Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs. ACL, 2024. \\n\\n[3] Morishita, T., Morio, G., Yamaguchi, A., & Sogawa, Y. 
Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus, NeurIPS 2024.\\n\\n**Response to Weakness 1.2:**\\n> the method assumes an input of N features with some feature description. Often nowadays we work with more raw data like a sequence of words etc. Is this restricted to such cases?\\n \\nThank you for your question. Our method can handle raw data, such as sequences of words. In our method, logic rules serve as a compact knowledge source for large language models, enhancing their capacity to manage such data in tasks like relation extraction and anomaly detection. This approach is akin to Retrieval-Augmented Generation (RAG); however, it utilizes compact logic rules instead of extensive knowledge bases, offering greater efficiency and significantly reducing computational overhead during both retrieval and generation phases.\\n\\n**Response to Weakness 1.3:**\\n> The tasks chosen are diverse but rare and it is not clear how general the method is and when we should expect it to work? Overall the generality of the method remains unclear and needs further discussion.\\n\\nThank you for raising this important concern. The selection of these tasks was indeed intentional to showcase the broad applicability of our methods across various domains. Far from being rare, these tasks represent common challenges in the real world. For instance, relation extraction is a fundamental problem in NLP, as demonstrated by studies [1][2]. Similarly, log-based anomaly detection is crucial in time-series analysis [3][4], and cooperative games are widely used to study decision-making processes [5][6]. Additionally, the task of unauthorized abuse detection is a critical concern in industrial settings. Together, these examples illustrate the versatility and general applicability of our approach.\\n\\n[1] Kunxun Qi, Jianfeng Du, and Hai Wan. 2024. End-to-end Learning of Logical Rules for Enhancing Document-level Relation Extraction. ACL 2024.\\n\\n[2] Teru, Komal, Etienne Denis, and Will Hamilton. \\\"Inductive relation prediction by subgraph reasoning.\\\" ICML, 2020.\\n\\n[3] Gruver, N., Finzi, M., Qiu, S., & Wilson, A. G. Large language models are zero-shot time series forecasters. NeurIPS 2024.\\n\\n[4] Gong, Y., Luo, H., Liu, A. H., Karlinsky, L., & Glass, J. R. Listen, Think, and Understand. ICLR 2024.\\n\\n[5] Piatti, G., Jin, Z., Kleiman-Weiner, M., Sch\\u00f6lkopf, B., Sachan, M., & Mihalcea, R. Cooperate or collapse: Emergence of sustainable cooperation in a society of llm agents. NeurIPS 2024..\\n\\n[6]Sun, C., Huang, S., & Pompili, D. (2024). LLM-based Multi-Agent Reinforcement Learning: Current and Future Directions. arXiv preprint arXiv:2405.11106.\"}" ] }
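The rebuttal above describes composition rules of the form relation1(A, B), relation2(B, C) -> relation3(A, C) only in prose. As a quick illustration, here is a minimal Python sketch of applying one such rule to a toy fact base; the relation names, entities, and the rule itself are hypothetical examples chosen for this note and are not drawn from the authors' dataset, their MCTS search, or their LLM pipeline.

```python
# Toy illustration of a composition rule:
#   first_rel(A, B), second_rel(B, C) -> head_rel(A, C)
# Facts are stored as (relation, head_entity, tail_entity) triples.

facts = {
    ("located_in", "Louvre", "Paris"),   # hypothetical body predicate 1
    ("capital_of", "Paris", "France"),   # hypothetical body predicate 2
}

def apply_composition_rule(facts, first_rel, second_rel, head_rel):
    """If first_rel(A, B) and second_rel(B, C) are both facts, derive head_rel(A, C)."""
    derived = set()
    for rel_ab, a, b in facts:
        if rel_ab != first_rel:
            continue
        for rel_bc, b2, c in facts:
            if rel_bc == second_rel and b2 == b:
                derived.add((head_rel, a, c))
    return derived

# Hypothetical rule: located_in(A, B), capital_of(B, C) -> in_country(A, C)
new_facts = apply_composition_rule(facts, "located_in", "capital_of", "in_country")
print(new_facts)  # {('in_country', 'Louvre', 'France')}
```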
BpDa4YTKtO
Robust Locally Differentially Private Graph Analysis
[ "Jacob Imola", "Amrita Roy Chowdhury", "Kamalika Chaudhuri" ]
Locally differentially private (LDP) graph analysis allows private analysis on a graph that is distributed across multiple users. However, such computations are vulnerable to poisoning attacks where an adversary can skew the results by submitting malformed data. In this paper, we formally study the impact of poisoning attacks for graph degree estimation protocols under LDP. We make two key technical contributions. First, we observe LDP makes a protocol more vulnerable to poisoning – the impact of poisoning is worse when the adversary can directly poison their (noisy) responses, rather than their input data. Second, we observe that graph data is naturally redundant – every edge is shared between two users. Leveraging this data redundancy, we design robust degree estimation protocols under LDP that can significantly reduce the impact of poisoning and compute degree estimates with high accuracy. We prove that our robust protocols achieve the optimal levels of accuracy and soundness via information-theoretic lower bounds. Finally, we evaluate our proposed robust degree estimation protocols under poisoning attacks on real-world datasets to demonstrate their efficacy in practice.
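The abstract above (and the reviews that follow) assume familiarity with the two standard edge-LDP building blocks: Laplace noise added to a reported degree, and randomized response applied to adjacency bits. The sketch below is a toy illustration of those primitives plus a naive response-poisoning report, written for this summary rather than taken from the paper; the privacy parameter, graph size, and poisoned value are arbitrary, and the exact flip probability and noise scale depend on the edge-LDP convention that is debated later in the reviews.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0  # hypothetical privacy parameter

def laplace_degree(true_degree, eps):
    # Laplace noise with scale 1/eps; the reviews debate whether the edge-LDP
    # sensitivity here should be 1 or 2, which only changes the constant.
    return true_degree + rng.laplace(scale=1.0 / eps)

def randomized_response(adj_bits, eps):
    # Flip each adjacency bit with probability 1/(1 + e^eps) (again, the exact
    # constant depends on the edge-LDP definition used).
    p_flip = 1.0 / (1.0 + np.exp(eps))
    flips = rng.random(len(adj_bits)) < p_flip
    return np.where(flips, 1 - adj_bits, adj_bits)

# Honest user: true degree 5 in a 1000-node graph.
adj = np.zeros(1000, dtype=int)
adj[:5] = 1
noisy_degree = laplace_degree(adj.sum(), eps)
noisy_adj = randomized_response(adj, eps)
print("honest noisy degree:", round(float(noisy_degree), 2))
print("honest RR bits set :", int(noisy_adj.sum()))

# Response poisoning: a malicious user bypasses the mechanism and reports an
# arbitrary degree, which the naive Laplace-only protocol cannot detect.
print("poisoned degree report:", 900.0)
```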
[ "Data poisoning", "Local differential privacy", "graphs." ]
Reject
https://openreview.net/pdf?id=BpDa4YTKtO
https://openreview.net/forum?id=BpDa4YTKtO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zjhudqqgAt", "tt5IWgPWzy", "ssiGxIadi7", "lj8A77VaVr", "j0lCRppDRz", "idcczTHJKc", "hSLJFnbv59", "frcYRJnAWb", "csVvFoJxZZ", "ZdMhYDASSp", "WldF39ZcrR", "VpGtFQfq5r", "OMJhDGNqb5", "Mdkt3PlWmM", "EaDfgdCxSy", "DDgFHBCd7R", "BJgwWpdcfi", "5V736pEXZG" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732537351214, 1732588329539, 1732243811956, 1730608775609, 1731970295775, 1732220907178, 1730608772997, 1732583160185, 1730620419948, 1733154171761, 1731967924180, 1737524102528, 1734201966536, 1733155314265, 1732218005550, 1732584003695, 1732583138040, 1730406867528 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_Sabb" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_mTXY" ], [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_rYuk" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_Sabb" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_BeqU" ], [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_rYuk" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11088/Area_Chair_ThcD" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Authors" ], [ "ICLR.cc/2025/Conference/Submission11088/Reviewer_mTXY" ] ], "structured_content_str": [ "{\"title\": \"Response to rebuttal\", \"comment\": \"Thanks to the authors for their rebuttal.\\n\\n> **Fit** We chose ICLR since it has long been a leading venue for research in differential privacy. Here are a few examples of papers from ICLR 2024 ...\\n\\nI acknowledge the authors have found ICLR papers that focus on privacy. This is helpful. I acknowledge that the paper is not out of scope for ICLR, but perhaps might have slightly narrow appeal to the ICLR community.\\n\\n> **Rate of flagging.** The quoted statement was not made in reference to Fig. 3, but rather in the context of the immediately preceding discussion (lines 504-507), which we reproduce here for clarity...\\n\\nThe authors are essentially repeating what's already in lines 504-507. While they mention Table 1 in this paragraph, the table itself is placed in the appendix. The paper would be well served by referencing Table 1.\\n\\n> **Choice of parameters for experiments.** All our theoretical results are completely general purpose meaning they do not rely on any assumptions about problem-specific parameters, such as the privacy parameter, the underlying graph, input distribution, or the type of attack. \\n\\nI understand the authors chose $\\\\epsilon = 0.7$ as a high privacy setting, but am curious about the specific reason for this value. From the range of possible values $0.1, 0.2, ..., 3$, why exactly $0.7$? Was this choice made to make the visualizations in figures (e.g., Fig. 7) more distinguishable and clear? 
Or was it randomly selected just to represent a high privacy level? Did the authors consider how other values like $0.1$ or $0.3$ might affect the visualization of their results?\", \"there_seems_to_be_an_inconsistency_in_the_numbers\": \"The authors state that for the Facebook dataset with 4082 users, using a 33% poisoning rate should result in 1347 malicious users. However, the paper reports 1332 malicious users. Could the authors please clarify this discrepancy?\\n\\n> **Impact of different problem specific parameters** - One of the biggest advantages of our robustness guarantees is that they are completely general purpose and attack agnostic ...\\n\\nNo further issues here\\n\\n> **Computational complexity.** Our algorithm requires O(n) work to estimate the and the operations are extremely light weight\\n\\nThe authors demonstrate their protocol on Facebook and synthetic graph datasets. It would be helpful to understand how it performs in terms of actual running time? Specifically:\\nWhat is the computational time for normal operation (without attacks)?\\nHow much additional time is required when handling poisoning attacks?\\nUsing the Apple phone call example, how long would it take for poisoned users to affect the graph collection process compared to the regular case? \\n\\n> **Related Work.** We will update the related work section with more concurrent work. However ...\\n\\nThe review was not asking the authors to cite specific papers - it was just noticed that the literature review stops at 2022. An updated review through 2024 would help verify novelty claims about being the first graph-based approach and might reveal useful insights from recent research that could strengthen this paper, especially since the graph-based approach builds upon some ideas from tabular data work.\\n\\nThe typos and figure issues that were identified in the previous review have not been fixed in this version. \\n\\nWhile the rebuttal, updates, and discussion with reviewers as a whole are appreciated, I will maintain my scores/overall ratings.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for engaging with us and provide additional details:\\n\\n** Figures.** We will fix all the typographical issues and we will include Table 1 in the main body.\\n\\n**Related work.** We thank the reviewer for the suggestion and would include an updated review through 2024.\\n\\n**Choice of experimental parameters.** We selected $\\\\epsilon=0.7$ at random in the high privacy regime. The visualization remains the same for any value in this regime. There is a typo -- the number of nodes in FB should be 4039 (as noted in the original paper [a]). We thank the reviewer for catching this and will fix this typo in the paper. \\n\\n[a] Learning to discover social circles in ego networks\\n\\n**Computational complexity.** The computational cost is roughly twice that of the naive approach based on randomized response. We will revise our paper to also include the runtime numbers. \\nThe clients do not require any extra time to perform poisoning attacks. Since our protocols are non-interactive, clients can deceive the server in a single, one-shot communication.\"}", "{\"comment\": \"Thanks very much for your response! Your rebuttal is basically saying that for the problem you study, node-level DP does not give non-trivial utility guarantee, while I am already aware of this. 
I raised the concern about the setting because it feels counterintuitive to me to study local differential privacy in a context where some local data is already shared with others due to the graph structure. But after reading the settings in some of the references you mentioned in your response, I am sufficiently convinced that this should not be an issue \\u2014\\u2014 sharing data with users in the network does not equate to placing trust in a central aggregator, so local DP is still meaningful. So I take this concern back.\\n\\nAfter convincing myself that there are no critical issues with the setting and motivation, I carefully went through the entire paper once again. To be honest, I am not particularly impressed by this paper. From a technical perspective, the algorithms are built upon basic mechanisms, and much of the analysis heavily relies on standard concentration inequalities. However, from a perspective of appreciation, of course I agree that all these contributions are still highly non-trivial, and it is the first\\u00a0work studying the poisoning attack for graphs under LDP. I also found that the $O(m+\\\\sqrt{n}/\\\\varepsilon)$ critical point of the honest error and malicious error described in Theorem 1 is pretty interesting. \\n\\nOverall speaking, I would not fight for accepting this paper but I believe its conceptual contribution should merit the acceptance to ML conferences like ICLR. Therefore, I decided to raise my score to 8.\"}", "{\"summary\": \"The paper explores the vulnerability of locally differentially private (LDP) graph analysis to poisoning attacks, where adversaries skew results by submitting malformed data. The authors highlight that LDP protocols are particularly susceptible to such attacks and leverage the natural redundancy in graph data to design robust degree estimation protocols under LDP. They propose a formal framework to analyze protocol robustness, focusing on accuracy for honest users and soundness for malicious ones. The paper introduces new protocols that significantly reduce the impact of adversarial poisoning and computes degree estimates with high utility. Comprehensive empirical evaluations on real-world datasets validate the effectiveness of these protocols. The study contributes to the understanding of poisoning attacks under LDP and provides practical solutions for more secure graph analysis.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper focuses on an interesting research question and builds on strong theoretical foundations, including information theory and differential privacy, to establish lower bounds and prove the efficacy of the proposed solutions.\", \"weaknesses\": \"My fundamental concern lies in that the practical significance of the paper is rather unclear. The paper gives an motivating real-world example, which involves degree collection on social networks. 
In practice, social networks often publicly display the number of followers or connections a user has, rendering the need for private degree aggregation obsolete.\", \"questions\": \"If the major focus of the paper more targeted to aggregated degree calculation or network publishing?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their insightful comments and address the main concerns below.\\n\\n**Factual Error.** Our analysis is indeed correct and in line with the definition of edge-LDP which was introduced in the seminal paper by Nissim et al. [a] . What the reviewer points to is not edge-LDP but **relationship**-DP which was introduced by Imola et al in [b][c]. Nevertheless, translating between these two definitions is straightforward - any $\\\\epsilon$-edge LDP protocol satisfies $2\\\\epsilon$-relationship DP (Proposition 1 in [b]). Note since the privacy definitions only affect a constant term all our asymptotic conclusions from our theoretical results are completely unaffected by the choice of the privacy definition.\\n\\n[a] Smooth Sensitivity and Sampling in Private Data Analysis, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith\\n\\n[b] Locally differentially private analysis of graph statistics 2021, Jacob Imola, Takao Murakami, Kamalika Chaudhuri\\n\\n[c] Communication-Efficient triangle counting under local differential privacy 2022, Jacob Imola, Takao Murakami, Kamalika Chaudhuri\\n\\n\\n**Contributions.** Ours is the *first* work to study the impact of poisoning under LDP for graphs -- prior work only focused on tabular data or key-value datasets. Nevertheless, showing the separation between input and response poisoning the is *not* at all our primary contribution. Our major contributions are \\n1. Providing a new framework for quantifying robustness \\n2. A new **lower bound** result on poisoning attacks for graphs (Thm. 1)\\n3. Providing the first provably robust degree estimation protocol that is completely **attack agnostic** and **optimal** (i.e., matches the above lower bound)\\n\\nAll of these theoretical contributions are completely novel and highly non-trivial. In fact, ours is the first work to give *provable* and *attack-agnostic* robustness guarantee against any LDP protocols, graphs or otherwise -- prior defenses (all in the context of tabular data) were empirical and customized to specific attacks. \\n\\n**Honest and Malicious Error.** Honest error corresponds to the error introduced in the degree estimate of an *honest client* while malicious error corresponds to the error introduced in the degree estimate of a *malicious client*. In a nutshell, honest error and malicious error quantify the error of the two disjoint sets of clients.\\n\\n**Experiments.** We have carried out an extremely extensive experimental evaluation with **16** different attacks capturing real-world attack scenarios. Due to lack of space, we could only include a subset of our experimental results in the main paper -- an additional set of results are presented in Appendix D. Fig. 4 plots both malicious and honest error for 1) the naive baseline 2) our proposed protocols. As seen from the plots, the main observation is that the both honest and malicious error is significantly lower for our proposed protocols. Additionally, the plot also shows the separation between input and response poisoning. 
As such, the plot empirically demonstrates the superiority of our proposed protocols in protecting against poisoning attacks and thereby validate our theoretical results. \\nCould the reviewer please point out how can the evaluation be improved?\\n\\n**Presentation.** We will do a thorough copy-edit pass of the paper.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their insightful comments and address their main concerns below. We are glad to learn that we were able to address the reviewer's concerns with our previous rebuttal. Here, we would like to take the opportunity to provide some additional context as to why edge-LDP is the most meaningful privacy guarantee in our setting.\\n\\nThere are two standard notions of DP for graphs - edge DP where the information about a single edge is protected, and node-DP where the information about an entire node of the graph is protected. Although node-DP is a stronger privacy guarantee, it is not suitable for our setting. Recall that our goal is to estimate the degree of every user. Now protecting this under node-LDP would require adding noise proportional to $\\\\frac{n}{\\\\epsilon}$ (since under node-LDP all the edges of a user can change resulting in a sensitivity of $n$). But this means that the amount of noise itself is much higher than the true answer (a degree can be at most $n$ and for real-world graphs it is often much lower) rendering the estimates to be completely meaningless. \\nAs a result, the current literature considers edge-DP to be the standard notion of privacy in the local setting and has been used in the context of a variety of tasks such as counting the number of triangles [a], k-core decomposition [b], training graphical neural networks [c] and synthetic graph generation [d].\\n\\n\\n[a] Triangle counting with local edge differential privacy\\n\\n[b] Near-Optimal Differentially Private k-Core Decomposition \\n\\n[c] LPGNet: Link Private Graph Networks for Node Classification\\n\\n[d] Generating Synthetic Decentralized Social Graphs with Local Differential Privacy.\"}", "{\"summary\": \"This work introduces a systematic framework for analyzing poisoning attacks in Local Differential Privacy (LDP) protocols for graph degree estimation. The authors propose two key metrics: honest error and malicious error, to quantify the impact of adversarial manipulation on both honest users and overall estimation accuracy. Their analysis reveals that poisoning attacks are more effective when targeting randomized response mechanisms compared to direct input manipulation. The work contributes two novel attack vectors: degree inflation and degree deflation, providing a comprehensive examination of potential adversarial strategies. To counter these threats, the authors leverage the inherent redundancy in graph structures\\u2014specifically, the property that edges are naturally reported by both connected vertices\\u2014to develop two defensive protocols. The empirical evaluation encompasses both synthetic and real-world (Facebook) datasets of varying scales, demonstrating the effectiveness of their findings and proposed defenses. 
Their results provide important insights into the vulnerability of LDP protocols in graph statistics and offer practical approaches for enhancing robustness against poisoning attacks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**Originality:**\\nThe paper presents the first comprehensive study exploring poisoning attacks in LDP protocols for graph degree estimation. The work introduces several novel ideas. These include:\\n-Demonstrating that poisoning attacks on randomized responses (i.e. output of the noise addition required for LDP) are more effective than input data poisoning\\n- Leveraging edge-sharing properties between adjacent nodes for malicious user detection\\n- Developing a method to distinguish between LDP-induced and malicious inconsistencies\\n- Proposing solutions that exploit the inherent redundancy in graph edge reporting for attack mitigation\\n\\n**Quality:**\", \"the_paper_demonstrates_technical_soundness_through\": [\"An appropriate mathematical formulation of the real-world graph degree estimation problem\", \"Rigorous analysis of dual sources of edge distribution inconsistency: LDP randomization and malicious manipulation\", \"Comprehensive parameter evaluation across privacy budget ($\\\\epsilon$), accuracy error, malicious error, database size, and adversary size and bounds\", \"**Clarity:**\"], \"the_work_presents_its_ideas_through\": [\"Practical motivation grounded in real-world applications, particularly social network influence analysis (e.g., Mastodon)\", \"Systematic development of robust degree estimation protocols that address both malicious and honest errors\", \"**Significance:**\"], \"the_paper_makes_several_significant_contributions\": [\"Direct applicability to real-world scenarios of influencer detection and manipulation in social networks\", \"A good (but perhaps not comprehensive in terms of data sources) empirical validation using both synthetic and Facebook datasets\", \"Practical defensive measures for preventing adversaries from promoting malicious users as influential nodes\", \"The work provides both theoretical insights and practical defensive measures against poisoning attacks in LDP protocols for graph analysis. The comprehensive parameter analysis and thorough experimental validation across multiple datasets demonstrate both the theoretical and practical significance of the contributions.\"], \"weaknesses\": \"**Writing and Technical Issues:**\\n- There is redundant wording in line 122, page 3: \\\"distributed graphs and has been widely studied widely\\\"\\n- The reference formatting lacks consistency throughout the paper. For instance: \\n1. Author names are inconsistently abbreviated (e.g., \\\"Xiaoyu Cao, et al.\\\" vs. full author lists)\\n2. Conference/journal names and their formatting vary (e.g., inconsistent capitalization and abbreviations)\\n3. In the current version, the latest reference is from the year 2022; The reference section could be strengthened by including recent (2023-2024) developments in LDP poisoning attacks, particularly works on LDP protocol robustness and defense mechanisms against output poisoning. This additional context would further highlight the paper's pioneering contribution to LDP-protected graph poisoning attacks. A list that is far from exhaustive is given below. Other references have been updated but not reflected as such: e.g., Li et al. 
(2022) on fine-grained poisoning attacks has appeared in a more final form at USENIX Security 2023.\\n\\n**Figures and Visualizations:**\\n1. Figure Quality:\\n- Figures 3 and 4 are not provided in vector format, resulting in poor scalability and reduced readability when zoomed\\n- The font styles and sizes in subcaptions (a)(b)(c)(d) lack consistency across Figures 3 and 4, etc.. \\n2. Experimental Design and Presentation:\\n- A limitation in the experimental design appears in Figure 4, where the varying database sizes (m=1332 vs m=1320) lack rigorous theoretical motivation. The authors' justification that these parameters \\\"meet the asymptotic theoretical error bounds\\\" requires more substantial analytical support to establish the connection between these specific numerical choices and the theoretical foundations.\\n- The choice of $\\\\epsilon$ values (0.7 and 3.00) requires justification\\n- Consider using other additional visualization methods for the comparative analysis, as it might better highlight the differences in some malicious errors and honest errors. \\n\\n**References:**\\n1. Huang, Kai, Gaoya Ouyang, Qingqing Ye, Haibo Hu, Bolong Zheng, Xi Zhao, Ruiyuan Zhang, and Xiaofang Zhou. \\\"LDPGuard: Defenses against data poisoning attacks to local differential privacy protocols.\\\" IEEE Transactions on Knowledge and Data Engineering (2024).\\n2. Sun, Xinyue, Qingqing Ye, Haibo Hu, Jiawei Duan, Tianyu Wo, Jie Xu, and Renyu Yang. \\\"Ldprecover: Recovering frequencies from poisoning attacks against local differential privacy.\\\" arXiv preprint arXiv:2403.09351 (2024).\", \"questions\": [\"**Venue Fit and Positioning:**\", \"While the paper presents solid technical contributions in security and privacy, its fit with ICLR's focus on learning is not immediately clear. Privacy/security of ML is certainly on topic for ICLR, however it would be appreciated if the authors elaborate on their thoughts here, and whether they had considered a security/privacy venue. Given that many cited works on LDP and poisoning attacks appear in security and privacy venues.\", \"**Technical Clarifications:**\", \"The finding that \\\"the rate of flagging is less aggressive for FB since it is a sparse graph\\\" (line 508) is not readily apparent in Figure 3. Could the authors clarify this observation with supporting evidence?\", \"How does the computational complexity of the proposed protocols scale with very large graphs? Are there any limitations or performance bottlenecks?\", \"For the experiments comparing input poisoning and response poisoning, what informed the choice of different database sizes (m=1332 vs m=1320)? How do these specific values relate to the theoretical bounds?\", \"The paper uses an argument about the Bernoulli distribution to distinguish between LDP-induced and malicious inconsistencies. It would be appreciated if the authors might elaborate on the theoretical justification here; The sensitivity of this modeling choice to different graph topologies (beyond the tested Facebook and synthetic datasets) and different attack patterns. 
How might the results be affected by: networks with heterogeneous degree distributions, social networks exhibiting power-law connectivity, and graphs with varying density across different regions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Checking In\", \"comment\": \"Dear Reviewer,\\n\\nWe wanted to check in if there are additional concerns that we can help address.\\n\\nThanks, Authors\"}", "{\"summary\": \"This paper studies the problem of data poisoning attacks to graph data analysis under local differential privacy, specifically targeting the estimation of node degree distribution. Although the studied problem is important, the contribution is incremental, and the proposed solution, along with its theoretical analysis, contains flaws.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The studied problem is important.\\n\\n2. Extensive theoretical analysis is provided.\", \"weaknesses\": \"1. The graph data perturbation involved in this work does not satisfy LDP. This work is based on edge LDP, which protects the existence of an edge between any two users. In terms of adjacency vector, the sensitivity of an edge\\u2019s existence should be 2 bits. Thus, when applying RR to perturb that vector, the probability should be $\\\\frac{1}{1+e^{\\\\epsilon/2}}$, rather than $\\\\frac{1}{1+e^\\\\epsilon}$. In terms of degree perturbation, the sensitivity of an edge\\u2019s existence should be 2, as the edge connects to two nodes and affects the degree of both nodes. Thus, when applying Laplace noise, it should be $Lap(2/\\\\epsilon)$, rather than $Lap(1/\\\\epsilon)$. This issue has been widely studied in the literature [1-2].\\n\\n[1] Liu Y, Wang T, Liu Y, et al. Edge-Protected Triangle Count Estimation under Relationship Local Differential Privacy. IEEE Transactions on Knowledge and Data Engineering, 2024.\\n\\n[2] Ye Q, Hu H, Au M H, et al. LF-GDPR: A framework for estimating graph metrics with local differential privacy. IEEE Transactions on Knowledge and Data Engineering, 34(10): 4905-4920, 2022.\\n\\n2. The contribution is incremental. The difference between input poisoning and output poisoning in the context of LDP has been thoroughly studied in the literature. In addition, it is unclear how the honest error differs from the malicious error. Can the authors provide a concrete example for illustration? \\n\\n3. The experimental evaluation needs to be improved. It is unclear what observation and conclusion can be made from Figure 4. \\n\\n4. The presentation needs to be improved. There are quite a few typos in the manuscript. Here are some examples. \\n- In page 2, \\u201cupto\\u201d -> \\u201cup to\\u201d\\n- In page 4, \\u201creponse\\u201d -> \\u201cresponse\\u201d\\n- In page 5, \\u201cIn our first scenario, consider\\u201d -> \\u201cOur first scenario considers\\u201d\", \"questions\": \"Please refer to the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors for addressing my concern about the motivation of the work. 
I would like to suggest the authors to add the motivating examples to the revised paper to make its motivation and potential contribution more clear.\\nI would also ask more clarification on how would the authors differentiate their work with the private network publishing works?\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their insightful comments and address the main concerns below.\\n\\n**Motivation** We work in the setting of a distributed graphs -- i.e., nobody has access to the entire graph. Hence, if the graph is in the context of a distributed social media network, such as Mastodon, bluesky, it is *impossible* for the server to publish the exact degrees of the users because it simply does not have access to this information. The only way the server can get access to this information is collecting this directly from the users. Now, one of the biggest selling point of distributed social networks is privacy, hence, it is unrealistic to imagine that the users will be okay to report their degrees in the clear to the server which showcases the real-world applicability of our protocol.\\nAnother example of a distributed graph is in the context of phone call graphs. Consider that every iPhone owner is a user or node, and an edge between two users indicates a phone call between them. Apple, acting as the untrusted aggregator, wants to compute a degree vector of the entire graph. The edges are sensitive (phone calls reveal users' personal social interactions), so users cannot submit their data to Apple directly. Instead, they add noise to their data to achieve a local differential privacy guarantee before sharing it with Apple. \\n\\nStudying privacy-preserving degree distribution is a fundamental and classic problem in the graph privacy literature and has been examined thoroughly in prior literature starting from the seminal works of Nissim [a], Hay et al. [b], Karwa et al. [c]. We extend this body of work to a new dimension by considering the threat of poisoning attacks which is again extremely realistic in our setting.\\n\\n\\n[a] Smooth Sensitivity and Sampling in Private Data Analysis, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith\\n\\n[b] Accurate Estimation of the Degree Distribution of Private Networks, Michael Hay, Chao Li, Gerome Miklau, David Jensen\\n\\n[c] Private Analysis of Graph Structure , Vishesh Karwa, Sofya Raskhodnikova, Adam Smith, Grigory Yaroslavtsev\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": [\"# Summary of Contribution\", \"This paper studies a setting where each user is a node in a graph and has its adjacency list. The goal is to compute the estimate degree of each user while respective edge differential privacy (edge DP). The standard protocol for this problem is to add Laplace noise to each degree and publish it, which achieves error of $O(1/\\\\epsilon)$. However, this protocol is very non-robust: A malicious user can lie and make their estimate degree arbitrarily large. The contributions of this paper are as follows:\", \"They propose a model for robust protocols in edge-LDP setting, together with the concept of honest accuracy and malicious accuracy.\", \"They provide a protocol that has honest accuracy of roughly $O(1/\\\\epsilon)$ and malicious accuracy of roughly $O(m + \\\\sqrt{n} / \\\\epsilon)$ where $m$ denote the number of malicious users. 
The protocol is roughly as follows: in addition to the Laplace mechanism, each user also sends their adjacency list, privatized via Randomized Response. We then calculate the estimated degrees in two ways: (i) via the published degree (from Laplace mechanism) by that node, and (ii) via the randomized adjacency lists of the other nodes. If the two are close enough, the estimate is set to (i). Otherwise, it is set to (ii).\", \"They prove a matching lower bound on the errors.\", \"# Strengths\", \"**Novel model**: This is the first paper that studies robustness in edge-DP model.\", \"**Elegant Protocols**: The protocols are based on simple ideas and are well explained in the paper.\", \"**Matching Lower Bounds**: The authors also show that their lower bounds are nearly optimal by providing lower bounds.\", \"# Weaknesses\", \"**Importance**: The problem studied / techniques proposed in this paper are quite specific. Namely, it only works in the setting where each piece of data (i.e. edge) is redundant. This is very specific to degree estimation in the edge-LDP model. The methods proposed in this paper are also relatively straightforward from the technical standpoint. Thus, it is unclear how the insights in this paper can lead to broader insights.\", \"**Practicality**: There are several factors that question the practicality of the protocol:\", \"**Error**: The error for a malicious user here is at least $\\\\sqrt{n}/\\\\epsilon$. This means that, even if there is a single malicious user with degree zero, they can pretend to have a degree of $\\\\sqrt{n}/\\\\epsilon$. This is already quite a large. (E.g. on Twitter, this would easily put them in the 0.01% top users.) Of course, as shown by the lower bound, this is inevitable; but this also suggests that maybe edge-LDP is *not* the right model when it comes to robustness.\", \"**Communication**: Since every user needs to apply randomized response over the entire $n$-bit vector, the total communication here is (at least) $\\\\Omega(n^2)$. It is thus unlikely that this can be applied to any real-world social network graphs of today.\", \"**Parameter setting**: The setting of the threshold $\\\\tau$ involves knowing (or approximating) the number of malicious users $m$ beforehand.\", \"# Recommendation\", \"Although this paper makes a solid theoretical contribution towards the degree estimation problem with edge-LDP, the model might be too specific and not sufficiently practical for a broader audience at ICLR. As such, the paper might be more suitable to be published at a more focused venue (e.g. on privacy / security or distributed graph analysis). Given this, we recommend rejection.\"], \"additional_comments_on_reviewer_discussion\": \"The authors give some examples where edge-LDP setting makes sense and also discuss differences compared to previous work. However, the concerns in meta-review remained. Additionally, the authors tried to clarify a misunderstanding with reviewer BeqU, who unfortunately didn't reply during the rebuttal period; nonetheless, I had already taken this into account when recommending rejection.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We are glad to be able to address the reviewer's concern and will include this discussion in the paper.\\n\\n**Difference from prior work** - Ours is the *first work* to study the impact of poisoning attacks for graphs under LDP and furthermore, to provide the *first* provably robust algorithms for graph statistics. 
All prior work on private graph analysis has focused on computing different graph statistics *only* under privacy -- none of them consider poisoning attacks. Additionally, all prior studies on data poisoning under LDP have been limited to tabular or key-value data.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their insightful comments and address their main concerns below. We are very glad to know that the reviewer found our work to provide both theoretical insights and practical implementations. We would like to take the opportunity to address the main concerns below.\\n\\n**Fit.** We chose ICLR since it has long been a leading venue for research in differential privacy. Here are a few examples of papers from ICLR 2024\\n\\na. A Differentially Private Clustering Algorithm for Well-Clustered Graphs\\n\\nb. Numerical Accounting in the Shuffle Model of Differential Privacy\\n\\nc. Privacy Amplification for Matrix Mechanisms\\n\\nd. Efficiently Computing Similarities to Private Datasets\\n\\n**Rate of flagging.** The quoted statement was not made in reference to Fig. 3, but rather in the context of the immediately preceding discussion (lines 504-507), which we reproduce here for clarity.\\n\\nOur protocols are able to flag malicious\\nusers when they target a large number of honest users. Specifically, for the strongest degree deflation\\nattack, Hybrid flags 4.5% and 49.8% of the malicious users for FB and Syn, respectively. RRCheck,\\non the other hand, flags 3% and 59.3% of the malicious users for FB and Syn, respectively. Note that\\nthe number of actual honest users affected by a malicious user is bounded by its degree. This is the reason why \\nrate of flagging is less aggressive for FB since it is a sparse graph (the maximum degree is low). \\nThe rates of flagging are presented in Table 1. \\n\\n**Choice of parameters for experiments.** All our theoretical results are completely *general purpose* meaning they do not rely on any assumptions about problem-specific parameters, such as the privacy parameter, the underlying graph, input distribution, or the type of attack. This means that our theoretical results hold for *any arbitrary* choice of these parameters.\\n\\n For the experiments in the main body we considered two settings for the privacy parameter (i) high privacy regime with $\\\\epsilon=0.7$ - (values $\\\\epsilon < 1$ is considered to provide high privacy guarantees), and (ii) $\\\\epsilon=3$ for low privacy. Similarly, we considered two settings for the number of malicious users, $m$, $m=$1% (low poisoning rate) and $m=$33% (high poisoning rate). $m=$33% for our two datasets FB and Syn gives the concrete numbers m=1332 and m=1320, respectively. Prior work has shown that $m=$1% is considered a realistic threat in practice [28]. $m=$33% corresponds to $\\\\frac{1}{3}$ of parties being malicious which is a classic threshold considered in the literature on Byzantine robustness and cryptography [a][b][c]. Additional experiments with different choices of parameters are presented in Appendix D. 
\\n\\n[28] Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning\\n\\n[a] Asynchronous consensus and broadcast protocols, 1985\\n\\n[b] How to Play Any Mental Game, 1987\\n\\n[c] Asynchronous Secure Computations, 1999\\n\\n**Impact of different problem specific parameters** - One of the biggest advantages of our robustness guarantees is that they are completely *general purpose* and *attack agnostic* -- i.e., our guarantees are completely unaffected by the choice of problem-specific parameters, such as the privacy parameter, the underlying graph, input distribution, or the type of attack. Our consistency check is based on an analysis of the tail bound for the Bernoulli distribution, which stems from the properties of the classic LDP mechanism, Randomized Response -- the individual bits of the user's adjacency lists are Bernoulli random variables. This analysis is again independent of everything else about the problem setup. \\n\\n**Computational complexity.** Our algorithm requires O(n) work to estimate the and the operations are extremely light weight.\\n\\n**Related Work.** We will update the related work section with more concurrent work. However, we would like to highlight that all prior work still focuses on tabular data setting, the novelty of our setting is that this is the *first* work studying the impact of poisoning for graphs under LDP.\"}", "{\"title\": \"Thank You\", \"comment\": \"We would like to sincerely thank the reviewer for engaging with us in good faith and finding our contributions to be \\\"highly non-trivial\\\".\"}", "{\"title\": \"Checking In\", \"comment\": \"Dear Reviewer,\\n\\nWe wanted to check in if there are additional concerns that we can help address.\\n\\nThanks,\\nAuthors\"}", "{\"summary\": \"This paper continues a line of study on exploring the impact of poisoning attack in local differential privacy. In particular, they consider the task of estimating the degrees of each vertex under the widely used notion of edge-level DP, in which two graphs are considered neighboring if they differ in one edge. For the poisoning setting, they consider two types of attack. First is the input poisoning, where a malicious user falsify their underlying input. A stronger one is the response poisoning, where the adversary has access to the implementation of the LDP randomizer.\\n\\nUnder such settings, they first show that the navie implementation of the Laplace mechanism or the Randomized Response mechanism leads to almost trivial gurantee on the soundness. Then, by revealing the fact that the information are naturally redundant for degree estimation, they design a verification mechanism to improve the soundness under poisoning attack, and achieving $O(m(1+1/\\\\varepsilon) + \\\\sqrt{n}/\\\\varepsilon)$ accuracy and soundness with a small failure probability, based on the randomized response mechanism. Finally, they combining the laplace mechanism and improve the the accuracy to logarithmic error for \\\"honest\\\" users.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The technical lemmas and theorems in this paper are clearly stated and correct.\\n2. The hybrid mechanism for reducing the error is interesting.\", \"weaknesses\": \"I agree that it is natural to consider the poisoning attack within the context of local DP, and the edge-level (global) differential privacy is a rather standard notion. However, I think using edge-DP in the local DP model is unusual. 
In particular, I agree that \\\"the users do not explicitly share this information; rather, it is implicitly shared by the structure of the graph itself.\\\" My concern, however, is whether studying local differential privacy remains meaningful, given that the graph's structure may *already* \\\"leak\\\" information to other users within it.\\n\\nIn the last review process, I mentioned a typo in Appendix G.3 (in line 1325 of this version) that it should be $|L_i|\\\\leq \\\\frac{1}{\\\\varepsilon}\\\\ln \\\\frac{n}{\\\\delta}$ instead of $|L_i|\\\\leq \\\\frac{1}{\\\\varepsilon}\\\\ln \\\\frac{\\\\delta}{n}$. But the typo seems to be still exist in this version, so I worry that the authors did not tidy up their proofs carefully.\", \"questions\": \"The authors have answered my questions in the last review process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
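The meta-review above describes the robust protocol only verbally: each user publishes a Laplace-noised degree and a randomized-response adjacency list, and the server falls back to the estimate reconstructed from other users' RR bits whenever the self-reported degree is inconsistent with it. The snippet below is a rough rendering of that verbal description for a single target user, not the authors' implementation; the threshold tau, the privacy parameter, and the poisoned value are placeholder choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1.0            # hypothetical privacy parameter
n = 2000             # number of other users
true_degree = 30     # target user's true degree

# Each other user reports their bit toward the target via randomized response
# (keep with probability p, flip with probability q).
q = 1.0 / (1.0 + np.exp(eps))
p = 1.0 - q
bits = np.zeros(n, dtype=int)
bits[:true_degree] = 1
flips = rng.random(n) < q
reported_bits = np.where(flips, 1 - bits, bits)

# Debiased degree estimate reconstructed from everyone else's RR reports.
rr_estimate = (reported_bits.sum() - n * q) / (p - q)

# The target user's own (possibly poisoned) degree report.
claimed_degree = 900.0  # a malicious, inflated claim

# Consistency check: trust the self-report only if it is close to the
# redundancy-based estimate; tau is a placeholder on the O(sqrt(n)/eps) scale
# mentioned in the meta-review.
tau = 3 * np.sqrt(n) / eps
final = claimed_degree if abs(claimed_degree - rr_estimate) <= tau else rr_estimate
print(f"rr_estimate={rr_estimate:.1f}, final={final:.1f}")
```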
Bp2axGAs18
On the Resilience of Multi-Agent Systems with Malicious Agents
[ "Jen-tse Huang", "Jiaxu Zhou", "Tailin Jin", "Xuhui Zhou", "Zixi Chen", "Wenxuan Wang", "Youliang Yuan", "Maarten Sap", "Michael Lyu" ]
Multi-agent systems, powered by large language models, have shown great abilities across various tasks due to the collaboration of expert agents, each focusing on a specific domain. However, when agents are deployed separately, there is a risk that malicious users may introduce malicious agents who generate incorrect or irrelevant results that are too stealthy to be identified by other non-specialized agents. Therefore, this paper investigates two essential questions: (1) What is the resilience of various multi-agent system structures (e.g., A$\rightarrow$B$\rightarrow$C, A$\leftrightarrow$B$\leftrightarrow$C) under malicious agents, on different downstream tasks? (2) How can we increase system resilience to defend against malicious agents? To simulate malicious agents, we devise two methods, AutoTransform and AutoInject, to transform any agent into a malicious one while preserving its functional integrity. We run comprehensive experiments on four downstream multi-agent systems tasks, namely code generation, math problems, translation, and text evaluation. Results suggest that the "hierarchical" multi-agent structure, i.e., A$\rightarrow$(B$\leftrightarrow$C), exhibits superior resilience with the lowest performance drop of $23.6\%$, compared to $46.4\%$ and $49.8\%$ of other two structures. Additionally, we show the promise of improving multi-agent system resilience by demonstrating that two defense methods, introducing a mechanism for each agent to challenge others' outputs, or an additional agent to review and correct messages, can enhance system resilience. Our code and data are available in the supplementary materials and will be made publicly available upon publication.
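The abstract compares linear, flat, and hierarchical agent structures in prose. The toy simulation below abstracts the LLM agents away entirely (each agent simply passes or corrupts a message with some probability) and only illustrates the replication intuition raised in the reviews, i.e., why a structure that aggregates redundant workers can mask a single malicious agent while a chain cannot; the slip probability, team sizes, and trial count are made-up parameters, and this is not the paper's AutoTransform/AutoInject setup.

```python
import random

random.seed(0)

def agent(message, malicious=False, slip=0.05):
    """Return the message, corrupted if the agent is malicious or randomly slips."""
    if malicious or random.random() < slip:
        return "corrupted"
    return message

def linear_chain(task, agents):
    """A -> B -> C: each agent consumes the previous agent's output."""
    out = task
    for is_bad in agents:
        out = agent(out, malicious=is_bad)
    return out

def hierarchy(task, workers):
    """A -> (B <-> C ...): workers answer independently and a manager keeps
    the majority answer, which can outvote one malicious worker."""
    outputs = [agent(task, malicious=is_bad) for is_bad in workers]
    return max(set(outputs), key=outputs.count)

trials = 1000
chain_ok = sum(linear_chain("answer", [False, True, False]) == "answer" for _ in range(trials))
hier_ok = sum(hierarchy("answer", [False, True, False]) == "answer" for _ in range(trials))
print(f"linear chain success: {chain_ok / trials:.2f}")  # 0.00: corruption propagates downstream
print(f"hierarchy success:    {hier_ok / trials:.2f}")   # high: one bad worker is outvoted
```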
[ "Multi-Agent Systems", "Large Language Models", "Resilience" ]
Reject
https://openreview.net/pdf?id=Bp2axGAs18
https://openreview.net/forum?id=Bp2axGAs18
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yL1ynyQZVa", "wmcJVBJc2N", "uzAlbzyoJ9", "thSesdCFwf", "tAYH8Lqjfs", "rs0wHkfe2M", "idtWR0X8oo", "hJcLcp7BVO", "c2B3lqM4cr", "ZPIKZlI9Qk", "XQfCTZrfRG", "XHxFOyk9TK", "W5z1ZpHtNc", "VV0hJgi6lT", "T2JIBfOUPS", "PHFdmDBmT1", "M8xSMVpqdS", "LalKRv4Hym", "KQPm4saw0e", "ISgxVo99z6", "I5mYrSSJFg", "G7mMO4fjZD", "FVb5oEyAp7", "E2LLpyEqpU", "DNK8V7YUuZ", "B97N0xYSfC", "5fc7TFHijO", "5IDHupNngN", "3BR5yS10Ty", "1m0CDUHJSw", "0098Xydcav" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732556535457, 1732557877986, 1732557625518, 1734642711777, 1732835891916, 1729948060098, 1732557343814, 1732557110824, 1732600991639, 1732556675444, 1732557559193, 1730516953963, 1732557010341, 1730568575053, 1732798376070, 1732798287540, 1732673888121, 1732558009120, 1732680840424, 1732611731362, 1732558103771, 1732557810253, 1730783651525, 1732670387139, 1737523755778, 1730598325016, 1732830570706, 1732556885815, 1732837275769, 1732558288794, 1732556773307 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Area_Chair_W7xr" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_3j9A" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_Q1ar" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_54qu" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_jrv1" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_jrv1" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_QzAB" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_54qu" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_QzAB" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_Q1ar" ], [ "ICLR.cc/2025/Conference/Submission6253/Reviewer_3j9A" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ], [ "ICLR.cc/2025/Conference/Submission6253/Authors" ] ], "structured_content_str": [ "{\"title\": \"Official Response (1/n)\", \"comment\": \"We deeply appreciate your efforts in reviewing and your recognition of our extensive experiments and the clear presentation of our paper. 
Your feedback has significantly improved our paper. In the response, we address your concerns one by one.\\n\\n> The proposed methods can be restricted and there is no guarantee whether the proposed two methods reflect (at least the majority types of) real-world attacks.\\n\\nWe appreciate the reviewer\\u2019s insightful comment. We acknowledge the diversity of real-world attack types, including DDoS and MITM. Our study focuses on simulating highly stealthy attacks\\u2014those that appear benign at first glance or are difficult for non-experts to detect. Specifically, we target scenarios where malicious content, such as a deliberately incorrect line of code, **is embedded within seemingly innocuous messages**. This approach aims to reflect a critical subset of real-world attacks that prioritize subtlety and evasion.\\n\\n> As the author pointed out, AutoTransform is convenient, yet hard to analyze. This inherently does not align with the objectives of the paper, because this method provide minimal added insights over AutoInject. It is thus not clear why an LLM-based approach is necessary and considered one of the contributions. A more principled automatic approach that attempts to capture different types of attacks can be interesting to explore.\\n\\nThank you for the insightful feedback. While we acknowledge that AutoTransform's reliance on LLM-generated outputs introduces challenges in control and analysis, its inclusion addresses a key limitation of AutoInject: **the inability to replicate the inherent behaviors of LLMs**. AutoInject-generated errors may not accurately reflect errors that LLMs themselves might produce in real-world scenarios. Furthermore, AutoTransform allows us to investigate **whether LLMs can be guided to generate errors that are sufficiently covert to obfuscate malicious intent**. This exploration aligns with our broader objective of understanding LLM vulnerabilities and highlights the necessity of a goal-driven approach to address complex attack dynamics.\\n\\n> While AutoInject seems more principal, whether P_m and P_e, the degree of error injected on the input side represents a good error rate metric is doubted, because even injecting the same number of errors per line can lead to different output behavior. For instance, in AutoInject, both injecting error only on a single line of code while b, changing it to (1) while b>=0 or (2) while True leads completely different results. In the latter case, if the agent running the code has no mechanism to jump out of infinite loop, this leads to catastrophic propogation of error to the entire system. In this example, it is clear the error in case (2) can be more dangerous, yet the provided error rate metric seems too trivial to capture it.\\n\\nThank you for highlighting this important consideration. We agree that different types of errors can have varied impacts on program functionality and may pose different challenges for detection. While our current metric\\u2014errors per line of code ($P_m$ and $P_e$)\\u2014is relatively simple, it provides a **practical and interpretable approach akin to how lines of code are widely used as a baseline metric in software development** despite the existence of more nuanced alternatives.\\n\\nTo address your concern regarding **error diversity and severity**, we further analyzed the distribution of error types generated by AutoInject. 
The errors span across seven distinct categories, as detailed below, ensuring diversity in the types of faults injected and reducing the bias of any single category dominating the results:\\n\\n| Category Name | Description | Count |\\n|---|---|---|\\n| Logical Errors | Errors in logical operations, such as incorrect operators or inverted logic. | 12 |\\n| Indexing and Range Errors | Issues with boundary conditions or off-by-one indexing. | 23 |\\n| Mathematical Errors | Errors in calculations or numerical processing. | 20 |\\n| Output and Formatting | Issues with producing or formatting expected output. | 9 |\\n| Initialization Errors | Problems with starting values or incorrect initialization. | 4 |\\n| Infinite Loops | Errors causing unintended infinite execution loops. | 6 |\\n| Runtime Invocation Issues | Errors in function calls or runtime handling. | 6 |\\n\\nBy incorporating a diverse range of errors and generating them at scale, AutoInject effectively captures the broad spectrum of fault types, mitigating the risk that specific critical cases\\u2014like infinite loops\\u2014are overlooked. This approach ensures that **the reported error metrics, while simple, remain robust and representative of diverse error scenarios.**\"}", "{\"title\": \"Official Response (2/n)\", \"comment\": \"> While the paper explores the impact of error rates (Pm and Pe), the analysis remains somewhat superficial. It lacks a nuanced discussion of why certain error rates were chosen and how these rates might affect system resilience in different real-world applications.\\n\\nWe appreciate the reviewer\\u2019s observation regarding the need for a nuanced discussion on error rates and their real-world implications. While we acknowledge that different error types can have varied impacts on program functionality and detection difficulty, we selected line of code (LOC) as a metric for **its simplicity and widespread acceptance in software development**. Although more sophisticated metrics exist, LOC remains a standard and practical measure for assessing program complexity and error distribution.\\n\\nTo address the concern further, we **categorized and quantified the errors introduced in our analysis into seven distinct types**, as shown in the table below. This categorization ensures diversity in error representation, thereby enhancing the robustness of our results. By generating a large number of errors across these categories, we aim to mitigate biases introduced by any single error type and provide a comprehensive evaluation of system resilience.\\n\\n| Category Name | Description | Count |\\n|---|---|---|\\n| Logical Errors | Errors in logical operations, such as incorrect operators or inverted logic. | 12 |\\n| Indexing and Range Errors | Issues with boundary conditions or off-by-one indexing. | 23 |\\n| Mathematical Errors | Errors in calculations or numerical processing. | 20 |\\n| Output and Formatting | Issues with producing or formatting expected output. | 9 |\\n| Initialization Errors | Problems with starting values or incorrect initialization. | 4 |\\n| Infinite Loops | Errors causing unintended infinite execution loops. | 6 |\\n| Runtime Invocation Issues | Errors in function calls or runtime handling. 
| 6 |\\n\\nThis categorization supports our conclusion that the **diverse error set adequately covers a range of scenarios**, allowing for meaningful evaluation of system behavior under varying conditions.\"}", "{\"title\": \"Official Response (3/n)\", \"comment\": \"> Finally, I noticed in Appendix B.2 that the prompt for the text evaluation problem explicitly tells the agent which model generated which text (ChatGPT or Vicuna-13B). For the evaluation to be unbiased, surely the model outputs should be anonymised?\\n\\nWe appreciate the reviewer\\u2019s observation regarding potential bias. To maintain consistency with the design of the dataset and facilitate a direct comparison between our multi-agent results and the single-agent results reported in the original study, we **adhered to the dataset's original prompts**, which include identifying the model source. This approach ensures alignment with prior evaluations and comparability of results.\\n\\n> This paper proposes novel methods for adversarially attacking multi-agent LLM systems. I am uncertain of whether this meets the bar for requiring an ethics review, but it is clearly relevant to the privacy, security, and safety of AI systems.\\n\\nThank you for highlighting this important concern. We fully acknowledge the potential risks associated with adversarial attacks on multi-agent LLM systems. To address this, we have proposed and evaluated two defense mechanisms: **the Challenger and the Inspector, as well as their combination**. These defenses have been shown to significantly mitigate the influence of malicious agents, as evidenced by the results presented below:\\n\\n| Self-collab | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 76.22 | 74.56 | 76.39 | 76.83 |\\n| AutoTransform | 43.29 | 70.73 | 74.40 | 75.00 |\\n| AutoInject | 40.85 | 71.95 | 67.68 | 73.78 |\\n\\n| Camel | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 62.20 | 62.23 | 61.03 | 63.79 |\\n| AutoTransform | 32.46 | 43.50 | 41.75 | 48.70 |\\n| AutoInject | 29.27 | 40.24 | 44.16 | 48.64 |\\n\\nThese results demonstrate that our methods effectively reduce the success of attacks, improving security and safety in multi-agent LLM interactions. Additionally, the proposed defenses underscore the ethical responsibility to mitigate potential harm.\"}", "{\"metareview\": \"The reviewers acknowledged that the paper tackles an important, timely question about the robustness of various LLM-based multi-agent systems, and provides interesting connections between the system's structure and its resilience. However, the reviewers pointed out several weaknesses and shared concerns related to unclear evaluation setup, lack of systematic ablation experiments, and limited discussion regarding the existing literature on the resilience of non-LLM-based multi-agent systems. We want to thank the authors for their detailed responses. Based on the raised concerns and follow-up discussions, unfortunately, the final decision is a rejection. 
Nevertheless, this is exciting and potentially impactful work, and we encourage the authors to incorporate the reviewers' feedback when preparing a future revision of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers pointed out several weaknesses and shared concerns related to unclear evaluation setup, lack of systematic ablation experiments, and limited discussion regarding the existing literature on the resilience of non-LLM-based multi-agent systems. A majority of the reviewers support a rejection decision and agree that the paper is not yet ready for acceptance.\"}", "{\"comment\": \"We appreciate you taking the time to read our response. We are glad to further address your concerns. Please feel free to reach out.\"}", "{\"summary\": \"The paper proposes to investigate the case of LLM-based agents that are collaborating on a task (such as a coding task), when some of the agents are malicious. The paper considers several patterns of communication between the agents such as A->B->C or A<->B<->C.\\n\\nThe authors propose methods to transform agents into malicious ones. These methods essentially involve prompts that tell the agents to introduce subtle errors into their output, such as code. \\n\\nInvestigating various communication architectures, they argue that hierarchical architectures are less vulnerable to malicious agents compared to flat and linear ones. The paper also introduces countermeasures, such as agents that challenge the malicious output and ask the malicious agent to correct its output.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problem of malicious LLM-based agents participating in tasks is an important topic.\", \"The paper shows the results of relatively extensive experimentation and development of prompts for agent modeling.\", \"The authors found several interesting and non-trivial insights into the behavior of certain LLM agent models. One of these is that the introduction of errors can improve the output of agent models that are based on debates.\"], \"weaknesses\": [\"The paper seems to be unaware of the existing, and very well-known, literature on malicious agents in a system (such as the Byzantine Generals problem in its many variations). There are algorithms that are extensively used in networking protocols and database systems.\", \"The paper presents as new discoveries facts such as that hierarchical systems are more resilient because the agent at the top of the hierarchy is provided \\\"with various versions of the answer by multiple agents performing the same sub-task\\\". This is not a property of hierarchy, but of replication - again, distributed system theory contains many algorithms that can show how to protect against malicious agents in a fully flat and distributed environment.\", \"The various agent implementations considered in this paper are essentially relatively short prompts provided to ChatGPT. The validity of various observations is thus dependent on the current version of ChatGPT, which might be different by the time this paper is presented.\", \"Some of the observations are also dependent on the limitations of current LLMs - for instance, the observation that the malicious agents gradually lose track of the assignment to introduce errors. 
These are problems that can be easily fixed by periodically reintroducing the tasks.\"], \"questions\": [\"Do you expect that the observations in this paper about the relative strengths of different architectures will still be valid for the next versions of language models? What happens if this paper is published and becomes part of the knowledge-base of the LLMs?\", \"The agents (even the malicious ones) do not seem to be aware of the architecture of the overall system. Does this matter?\"], \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"The paper contains source code for the prompt for a malicious agent that tries to deceive the user about its maliciousness. Overall, the impact of such released source code is minimal, because examples of such prompts are widely available. The objective of the paper is to minimize the impact of such malicious agents, a legitimate research problem.\\n\\nOverall, I believe that this should not impact the paper, but it can benefit from the insight of an ethics reviewer.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response (1/n)\", \"comment\": \"We deeply appreciate your efforts in reviewing and your recognition of the importance of the research questions and the clear presentation of our paper. Your feedback has significantly improved our paper. In the response, we address your concerns one by one.\\n\\n> In several places it is not clear why the authors test some experimental configurations but not others. These absences may well be justifiable, but the authors should provide clear justification (and otherwise include the additional configurations, if only in the appendices). For example: In Figure 4 the authors only evaluate MAD. MAD is not evaluated in Figure 7a. In Figure 8 the authors only evaluate Camel.\\n> In several tables and bar charts it is not always clear what tasks are actually being evaluated or how much variation there is between these tasks. E.g. in Figure 5 it simply says \\\"selected downstream tasks\\\".\\n\\nWe appreciate the reviewer\\u2019s thoughtful feedback. Below, we clarify the rationale for our experimental configurations and address the specific concerns raised:\\n\\n- Figure 4 (MAD): This figure focuses on **a case study** demonstrating a counter-intuitive phenomenon where introducing errors can improve performance\\u2014a rare observation in multi-agent systems. MAD was selected specifically for its relevance to this unique insight.\\n- Figure 7a (Exclusion of MAD): MAD was excluded from Figure 7a because this experiment involves scenarios with malicious **instruction-sending agents**, which are not present in the MAD system configuration.\\n- Figure 8 (Self-collab and Camel): Only Self-collab and Camel are included in Figure 8 because they **represent the weaker systems within the Linear and Flat structures**, respectively. Our objective in this experiment is to illustrate how our proposed defense method enhances resilience in weaker systems.\\n\\nTo provide greater clarity on our multi-agent system settings, we have added a comprehensive table summarizing the experimental configurations to our presentation, as shown below:\\n\\n| Systems | Structure | Tasks | N. 
of Agents | Final Agent | Malicious Agent |\\n|---|---|---|---|---|---|\\n| MetaGPT | Linear | Code | 5 | Test Engineer | Code Engineer |\\n| Self-collab | Linear | Code | 2-5 | Tester | Coder |\\n| Camel | Flat | All | 2 | User | Assistant |\\n| SPP | Flat | Code | 3 | AI Assistant | Python Programmer |\\n| MAD | Hierarchical | All | 3 | Judge | debater |\\n| AgentVerse | Hierarchical | All | 4 | Critic | Solver |\\n\\n> What does \\\"Vanilla\\\" mean in Figure 3 and elsewhere? In Figure 2 it seems as though the idea is that one agent is responsible for repeating(?) the task description and another for executing the task, but I assume it cannot be this simple. How does it generalise when there are more than two agents?\\n\\nThe term \\\"Vanilla\\\" refers to **the baseline scenario where no attack or defense mechanisms are applied**, representing a standard, unmodified system. In Figure 2, it specifically denotes the **normal communication between agents** without any adversarial influence or additional safeguards. This serves as a control setup to evaluate the impact of the proposed methods.\\n\\n> There are no error bars or standard errors reported anywhere, which makes it difficult to interpret the statistical significance of the results.\\n\\nThank you for highlighting this concern. While including error bars or standard errors would indeed provide additional statistical context, conducting repeated experiments for all tasks would incur significant time and resource constraints given the extensive scope of the study. However, the **large number of test cases in each task ensures that our results are statistically robust and representative.**\"}", "{\"title\": \"Official Response (3/n)\", \"comment\": \"> Figure 3b includes results from a single GPT-3.5 agent. Are all other agent systems here exclusively using GPT-3.5, or is this including results with GPT-4o? The text doesn't make this clear. I'm guessing they all just use GPT-3.5, in which case it's all fine, but if not, then this would raise additional questions. In particular, it would seem that the simple baseline of a single GPT-4o agent would beat the multi-agent systems, and the rest of the investigation would be a bit closer to moot.\\n\\nWe appreciate the reviewer\\u2019s observation. To clarify, all experiments in Fig. 3 (a) and (b) utilize GPT-3.5 for consistency and comparability. Similarly, all experiments in Fig. 9 (a) and (b) are conducted using GPT-4o. **Our main conclusions remain consistent across both models**: hierarchical structures demonstrate superior performance; rigorous tasks exhibit greater sensitivity to malicious agents; and systems like MAD and Camel also show notable performance improvements.\\n\\n> A related question to the above: the fact that code generation as a task is more susceptible to sabotage by malicious agents seems surprising to me, since it is the most verifiable of the tasks (running the code provides a source of truth for its functionality that does not depend on trust in the specific agents). This is another example of my feeling that simple baselines can possibly beat many of the setups described. Is there a reason why the agents were not able to verify the code by running it?\\n\\nWe appreciate this insightful observation. While integrating an external interpreter or execution tool can indeed assist in detecting syntactic errors, it has **limitations in addressing deeper semantic issues**. 
For instance, systems like **Self-collab employ tools to verify code correctness but remain vulnerable to semantic errors** introduced by malicious agents. This limitation highlights the need for robust mechanisms beyond simple execution-based verification, as these cannot capture the nuanced sabotage strategies we investigate.\\n\\n> I don't think this paper is net harmful and I think that this type of work is important for building safer systems. I would not like to see this type of work be slowed down due to ethical concerns (I think that would be counterproductive to ethics). But it is the case that this paper presents potentially harmful methodologies, so I'm flagging it for further review.\\n\\nThank you for your thoughtful feedback. We acknowledge the potential risks associated with the methodologies presented in our work. To address these concerns, we propose two defense **mechanisms\\u2014the Challenger and the Inspector\\u2014and their combination**. These methods have been rigorously evaluated and demonstrate significant effectiveness in mitigating the influence of malicious agents, as evidenced by the results below:\\n\\n| Self-collab | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 76.22 | 74.56 | 76.39 | 76.83 |\\n| AutoTransform | 43.29 | 70.73 | 74.40 | 75.00 |\\n| AutoInject | 40.85 | 71.95 | 67.68 | 73.78 |\\n\\n| Camel | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 62.20 | 62.23 | 61.03 | 63.79 |\\n| AutoTransform | 32.46 | 43.50 | 41.75 | 48.70 |\\n| AutoInject | 29.27 | 40.24 | 44.16 | 48.64 |\\n\\nThese results illustrate that the proposed methods effectively mitigate potential harm while maintaining system performance, reinforcing the value of this research for building safer systems.\"}", "{\"comment\": \"I thank the authors for the detailed responses. I feel they added significant clarity and I felt my concerns were heard. I am especially thankful that the authors ran the experiment of applying the defenses in the \\\"No attack\\\" scenario -- the results are interesting and compelling.\\n\\nStill, the authors' response has not allayed my primary concern, which is that the paper attempts to do too much. While the authors' response adds significant clarity, my main point is that these clarifications, in a sense, should not be needed. I feel somewhat in analogy to a code reviewer who comments that some code is unclear, and receives in response an explanation in text of what the code does. It is helpful for my understanding, but it does not change my feelings about the quality of the code.\\n\\nThat said, the authors' response has significantly increased my confidence in the authors' overall line of work, and I'm optimistic that an updated, more focused version of this work, could be quite compelling.\"}", "{\"title\": \"Official Response (2/n)\", \"comment\": \"> The paper only discussed the observed phenomenon, and do not seem to deepen the research area by providing more insights how to use the consequences of these observations to design better resilient system. For instance, in certain systems, it may be inevitable to choose a linear architecture. Given these observations, can we join a proposed defense method to make it more closely resemble a hierarchical system, so as to demonstrate the usefulness and significance of the observed results?\\n\\nThank you for highlighting the need to demonstrate the practical implications of our observations. 
To address this, we evaluated the effectiveness of our proposed defense methods\\u2014**Challenger, Inspector, and their combination**\\u2014within a linear system, Self-collab. The results below illustrate **the improved resilience against both AutoTransform and AutoInject attacks:**\\n\\n| Self-collab | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 76.22 | 74.56 | 76.39 | 76.83 |\\n| AutoTransform | 43.29 | 70.73 | 74.40 | 75.00 |\\n| AutoInject | 40.85 | 71.95 | 67.68 | 73.78 |\\n\\nAs shown, integrating our defense methods significantly enhances the robustness of the linear system. This demonstrates that our findings are not only theoretically valuable but also practically applicable in designing more resilient systems, even when constrained to a linear architecture.\\n\\n> Similarly, for the surprising observation that \\\"introduced errors can cause performance increase\\\", we only see discussion up to the reasons, but not how this result leads any designing insights. In particular, if agents are already capable of double checking the results and identifying the injected errors, how the proposed defense methods, which are designed to challenge the results of others, provide additional help over such tasks?\\n\\nWe appreciate the reviewer\\u2019s insightful comment. The Challenger contributes to system resilience by **explicitly instructing agents to critically evaluate and challenge others\\u2019 results, an initiative they might not otherwise undertake independently.** As demonstrated in the case study (Fig. 6(a), Page 8), a single line of erroneous code is insufficient for detection by other agents. However, when AutoInject introduces additional erroneous lines, the agents identify the discrepancy and prompt the coder to refine its results. This highlights a key design insight: in multi-agent debate frameworks, **intentionally injecting errors\\u2014particularly those outside LLMs\\u2019 standard distribution\\u2014can foster divergent thinking,** encourage thorough verification, and ultimately lead to more refined and agreed-upon results.\\n\\n> The experiment settings are very vaguely presented. It is not clear which agent is malicious, which agent output the final results, and which task is used to evaluate different architectures. Or the experiment results represent the average performance under all different settings. It is also not clear how many agents are there, and thus not clear if the conclusion holds only for a small-scaled system, or can be generalized to more complicated systems.\\n\\nWe thank the reviewer for highlighting the need for clarity in our experimental settings. To address this concern, we have included a detailed table summarizing the experimental configurations:\\n\\n| Systems | Structure | Tasks | N. of Agents | Final Agent | Malicious Agent |\\n|---|---|---|---|---|---|\\n| MetaGPT | Linear | Code | 5 | Test Engineer | Code Engineer |\\n| Self-collab | Linear | Code | 2-5 | Tester | Coder |\\n| Camel | Flat | All | 2 | User | Assistant |\\n| SPP | Flat | Code | 3 | AI Assistant | Python Programmer |\\n| MAD | Hierarchical | All | 3 | Judge | debater |\\n| AgentVerse | Hierarchical | All | 4 | Critic | Solver |\\n\\nThis table provides clarity on the number of agents, the tasks evaluated, the malicious agent setup, and the agent responsible for the final output. 
We have also clarified whether results represent average performance across multiple configurations.\\n\\nWe acknowledge that the multi-agent systems analyzed in this work are relatively small in scale (<6 agents). However, the majority of contemporary research in multi-agent systems employs a limited number of agents. Therefore, **we believe that our conclusions are generalizable to most frameworks, such as AutoGen**, and can serve as a foundation for scaling up to larger systems in future work.\"}", "{\"title\": \"Official Response (2/n)\", \"comment\": \"> In one instance, the authors claim that they are \\\"the first to examine how different structures of multi-agent systems affect resilience\\\" to malicious agents, which is clearly false in general (as opposed to the special case of LLM agents). Indeed, as far as I can tell, none of the literature on game theory and the fault-tolerance of multi-agent/distributed systems is cited in the related work section. I suspect that there are many ideas in that literature that the authors might find useful for solving the problems they are interested in.\\n\\nWe appreciate the reviewer\\u2019s feedback and the suggestion to consider broader literature. We acknowledge the extensive body of work on the fault-tolerance of distributed systems, including the Byzantine Generals problem and related attacks such as DDoS, MITM, Sybil, and impersonation [3, 4, 5]. However, much of this work focuses on **specific system designs, emphasizing mechanisms like Authentication, Authorization, and Confidentiality** [1, 2, 6].\\n\\nIn contrast, our paper addresses the organizational structures of LLM-based multi-agent systems, which differ significantly in nature. These systems **mimic real-world human collaboration dynamics** rather than functioning as traditional distributed nodes. This novel focus on structural design in LLM-based systems sets our work apart.\\n\\n[1] Reiter, Michael, Kenneth Birman, and Li Gong. \\\"Integrating Security in a Group Oriented Distributed System.\\\" Proceedings 1992 IEEE Computer Society Symposium on Research in Security and Privacy. IEEE Computer Society, 1992.\\n\\n[2] Satyanarayanan, Mahadev. \\\"Integrating security in a large distributed system.\\\" ACM Transactions on Computer Systems (TOCS) 7.3 (1989): 247-280.\\n\\n[3] Harinath, Depavath, P. Satyanarayana, and M. R. Murthy. \\\"A review on security issues and attacks in distributed systems.\\\" Journal of Advances in Information Technology 8.1 (2017).\\n\\n[4] Brown, Philip N., Holly P. Borowski, and Jason R. Marden. \\\"Security against impersonation attacks in distributed systems.\\\" IEEE Transactions on Control of Network Systems 6.1 (2018): 440-450.\\n\\n[5] Kumar, Manoj, and Nikhil Agrawal. \\\"Analysis of different security issues and attacks in distributed system a-review.\\\" International Journal of Advanced Research in Computer Science and Software Engineering 3.4 (2013): 232-237.\\n\\n[6] Mudholkar, P. K., and M. Mudholkar. \\\"Security in distributed system.\\\" Proceedings of the International Conference and Workshop on Emerging Trends in Technology. 2010.\\n\\n> Relatedly, even when restricting to the LLM setting, I recently came across (but have not yet read in full) the paper \\\"Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems\\\" (arXiv:2410.07283), which seems closely related to this work. 
To the extent that it is, the authors of this paper should comment on the differences and similarities, including to any other relevant work that is referenced within the \\\"Prompt Infection\\\" paper.\\n\\nWe thank the reviewer for bringing this paper to our attention. Although it was uploaded to arXiv after the ICLR submission deadline, we have reviewed it to identify relevant distinctions. Notably, the \\\"Prompt Infection\\\" paper primarily explores **scenarios such as data theft, malware propagation, and social manipulation**, whereas our work focuses on tasks like code generation and mathematical problem solving. Additionally, their study **does not examine how varying organizational structures within multi-agent systems influence outcomes**, which is a key focus of our research.\\n\\n> As a small point, in line 156 it is claimed that it is not possible to use LLMs to inject syntax errors in 20% of the lines in some code, but I am somewhat suspicious of this assertion. Having personally used LLMs for very similar tasks in the past, it is my impression that SOTA models are can do a reasonable job of following such instructions (especially when combined with additional checks applied to the resulting code).\\n\\nWe appreciate the reviewer\\u2019s insight and acknowledge the potential of SOTA LLMs for similar tasks. To clarify, we conducted an analysis using AutoTransform to instruct a GPT-3.5 agent to introduce errors in 20% and 40% of the code lines. The results are summarized below:\\n\\n| Error Rate | Avg | Std | Min | Max |\\n|---|---|---|---|---|\\n| Instruct 20% | 1.56 | 3.65 | 0.0 | 14.3 |\\n| Instruct 40% | 9.49 | 26.70 | 0.0 | 90.1 | \\n\\nThese results indicate significant variability, with **agents struggling to consistently achieve the precise error rates** of 20% or 40%. This underscores the necessity and robustness of our AutoInject method, which addresses these limitations effectively.\"}", "{\"summary\": \"The paper explores the resilience of MAS comprising agents with LLM in the presence of malicious agents. The authors focus on evaluating the robustness of different MAS structures\\u2014linear, flat, and hierarchical\\u2014across tasks such as code generation, math problem solving, translation, and text evaluation. To simulate malicious agent behavior, two approaches\\u2014AUTOTRANSFORM and AUTOINJECT\\u2014are introduced. The study's findings indicate that hierarchical structures show superior resilience, with additional strategies for enhancing system robustness, including the \\\"Challenger\\\" and \\\"Inspector\\\" defense mechanisms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Solid Experimentation: The experimental setup is robust, involving several MAS architectures and tasks, and employs quantitative measures that provide a detailed analysis of resilience across scenarios.\", \"insightful_findings_on_architecture_impact\": \"The conclusion that hierarchical systems exhibit better resilience, supported by performance metrics, is particularly valuable for MAS design in practical applications where security and reliability are critical.\", \"contributions_to_llm_safety\": \"The exploration of malicious agent effects and subsequent defenses offers important insights into enhancing MAS reliability, especially in decentralized or unregulated environments.\", \"weaknesses\": \"1. The experiment models are limited: As it only tests on the gpt-based models, gpt3.5 and gpt4o.\\n2. 
The paper presents results across tasks that involve different cognitive demands (e.g., code generation requiring precision versus translation being more subjective). However, there is limited analysis of how the degree of agent specialization affects system resilience in different MAS structures. \\n3. While the chosen tasks (code generation, math, translation, and text evaluation) provide a reasonable testbed, they may not fully represent the diversity of tasks that MAS are deployed to handle. These tasks are fairly discrete and objective; however, multi-agent systems in more nuanced, real-world applications (e.g., recommendation engines or dynamic response systems) might face unique types of malicious behavior. Including a more diverse array of tasks or explaining the rationale behind the current selection would strengthen the applicability of the findings.\\n4. While the paper explores the impact of error rates (Pm and Pe), the analysis remains somewhat superficial. It lacks a nuanced discussion of why certain error rates were chosen and how these rates might affect system resilience in different real-world applications.\", \"questions\": \"1. Following weakness point 2, whether agents with specialized roles (e.g., a math-focused agent vs. a generalist) exhibit varying vulnerabilities to malicious behaviors is not fully explored. This is a missed opportunity to highlight whether specialized roles within the MAS require additional security considerations or different structural adjustments.\\n2. Following weakness point 1, I suggest you deploy experiments on o1-mini and o1-preview; I hope to see the results.\\n3. For weakness points 3 and 4, I hope you can improve these in your next exploration of this topic.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response (2/n)\", \"comment\": \"> I found the combination of experiments a bit confusing and unclear. Part of this is directly downstream of point 1 (there are too many axes of variation), but it is also the case that the different factors are, it seems, inconsistently varied. For example, the introduction of \\\"Error Types\\\" in section 3 and the tables in the appendices seem to suggest this distinction is only being done in the code generation tasks. This isn't a bad thing per se (indeed this distinction makes the most sense in the context of code generation), but the inconsistency of the variations, added to the sheer number of them, makes it harder to form a coherent and compelling picture of the results. In a similar vein, I also find that I'm pretty confused as to which experiments included results with GPT-3.5 as well as GPT-4o, vs which ones only included results with GPT-3.5.\\n\\nThank you for your detailed feedback. To clarify the experimental design and address concerns about inconsistency, we have added a summary table for better visualization of the setup:\\n\\n| Systems | Structure | Tasks | N. of Agents | Final Agent | Malicious Agent |\\n|---|---|---|---|---|---|\\n| MetaGPT | Linear | Code | 5 | Test Engineer | Code Engineer |\\n| Self-collab | Linear | Code | 2-5 | Tester | Coder |\\n| Camel | Flat | All | 2 | User | Assistant |\\n| SPP | Flat | Code | 3 | AI Assistant | Python Programmer |\\n| MAD | Hierarchical | All | 3 | Judge | debater |\\n| AgentVerse | Hierarchical | All | 4 | Critic | Solver |\\n\\n**All experiments in the main text are conducted with GPT-3.5**. 
Additional experiments using GPT-4o are presented in Appendix A to provide broader insights. For completeness, we have also included results with a state-of-the-art open-source model, **LLaMA-3.1-70B-Instruct**, as part of our rebuttal. Below are the results summarized across system structures:\\n\\n| LLaMA-3.1-70B-Instruct | Linear | Flat | Hierarchical |\\n|---|---|---|---|\\n| No Attack | 73.78 | 76.83 | 76.15 |\\n| AutoTransform | 11.90 | 39.03 | 66.96 |\\n| AutoInject | 38.72 | 36.59 | 55.64 |\\n\\n> Confidence intervals could be calculated and included in the bar charts. The results seem to be generally close enough that this could matter a fair amount.\\n> How much iteration was done in the prompting of the systems? It seems plausible to me that many of the observed shortcomings of the multi-agent systems, the malicious agent simulators (e.g. the relative inability of AutoTransform to decrease performance on Translation and TextEval), and the defense methods may be attributed to insufficiently refining of the methods.\\n\\nThank you for this valuable feedback. While we agree that confidence intervals could provide additional insights, the extensive scope of experiments in this study presents significant computational and financial constraints for running tasks multiple times to generate these intervals. However, we believe the large number of test cases included in each task offers a robust basis for statistically meaningful results. Furthermore, the **observed trends are consistent across a diverse set of scenarios**, supporting the reliability of our findings.\\n\\n> How confident are you that the main results are not spurious? By which I mean: how likely does it seem that the results would generalize with more numerous and systematic variations on each problem aspect studied (e.g. if there were more systematic variation within each \\\"multi-agent system structure\\\", each \\\"task category\\\", etc)? What evidence are you relying on for your assessment?\\n\\nThank you for your insightful comment. We acknowledge the vast diversity in real-world tasks, roles, prompts, and multi-agent system structures. To address this, we carefully selected **six widely-used multi-agent frameworks** and evaluated them across **four representative downstream tasks**. By employing three backbone models\\u2014**GPT-3.5, GPT-4o, and LLaMA-3-8B-Instruct**\\u2014we ensured robustness in our analysis. Notably, GPT-4o, despite being a stronger model, corroborated the findings from GPT-3.5, reinforcing the consistency of our results. This alignment across models suggests that our conclusions are generalizable, and we anticipate they will remain relevant as LLMs continue to evolve.\"}", "{\"summary\": \"This paper studies the resilience of multi-agent systems (with LLM agents) to the introduction of errors by malicious agents. The authors consider two methods for introducing such errors (\\\"AutoTransform\\\" and \\\"AutoInject\\\") and two methods for mitigating them (an \\\"Inspector\\\" and \\\"Challenger\\\" agent). They study the impact of these methods on three multi-system structures (linear, flat, and hierarchical) in the context of four downstream tasks (code generation, maths, translation, and text evaluation). Overall, they find that hierarchical structures are more resistant to the introduction of errors, that such errors can actually _aid_ performance in some settings, and that their strategies for mitigating the introduction errors are helpful. 
They also study other questions such as the impact of the type vs. rate of errors, etc.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is an important and increasingly relevant topic, and the paper does a good job of outlining some interesting research questions in the area. The authors also make reasonable choices of systems and tasks to study. Overall I found the paper well-structured and relatively easy to read (there are a few minor typos here and there, but that didn't affect the scientific quality and clarity of the paper). I appreciated the clear statement of the research questions especially. I thought their experiments seemed mostly well-designed, and their choices of methods (both for the introduction of errors and for defending against them) were sensible.\", \"weaknesses\": [\"The main weaknesses of the paper, in my opinion, are two-fold.\", \"First and most importantly, it is not always clear exactly what the stated results represent and how significant they are:\", \"In several places it is not clear why the authors test some experimental configurations but not others. These absences may well be justifiable, but the authors should provide clear justification (and otherwise include the additional configurations, if only in the appendices). For example:\", \"In Figure 4 the authors only evaluate MAD.\", \"MAD is not evaluated in Figure 7a.\", \"In Figure 8 the authors only evaluate Camel.\", \"What does \\\"Vanilla\\\" mean in Figure 3 and elsewhere? In Figure 2 it seems as though the idea is that one agent is responsible for repeating(?) the task description and another for executing the task, but I assume it cannot be this simple. How does it generalise when there are more than two agents?\", \"In several tables and bar charts it is not always clear what tasks are actually being evaluated or how much variation there is between these tasks. E.g. in Figure 5 it simply says \\\"selected downstream tasks\\\".\", \"There are no error bars or standard errors reported anywhere, which makes it difficult to interpret the statistical significance of the results.\", \"Secondly, and less importantly, I found that some of the claims the authors made seemed overly strong, and that they were excessively focused on the context of LLMs, despite the vast literature on fault-tolerance in multi-agent systems more generally. I suggest that the authors caveat their claims appropriately and aim to discuss how their work results to similar efforts in the context of non-LLM agents. As specific examples:\", \"In one instance, the authors claim that they are \\\"the first to examine how different structures of multi-agent systems affect resilience\\\" to malicious agents, which is clearly false in general (as opposed to the special case of LLM agents). Indeed, as far as I can tell, none of the literature on game theory and the fault-tolerance of multi-agent/distributed systems is cited in the related work section. I suspect that there are many ideas in that literature that the authors might find useful for solving the problems they are interested in.\", \"Relatedly, even when restricting to the LLM setting, I recently came across (but have not yet read in full) the paper \\\"Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems\\\" (arXiv:2410.07283), which seems closely related to this work. 
To the extent that it is, the authors of this paper should comment on the differences and similarities, including to any other relevant work that is referenced within the \\\"Prompt Infection\\\" paper.\", \"As a small point, in line 156 it is claimed that it is not possible to use LLMs to inject syntax errors in 20% of the lines in some code, but I am somewhat suspicious of this assertion. Having personally used LLMs for very similar tasks in the past, it is my impression that SOTA models are can do a reasonable job of following such instructions (especially when combined with additional checks applied to the resulting code).\", \"Finally, I noticed in Appendix B.2 that the prompt for the text evaluation problem explicitly tells the agent which model generated which text (ChatGPT or Vicuna-13B). For the evaluation to be unbiased, surely the model outputs should be anonymised?\"], \"questions\": \"Please see the Weaknesses section for my questions. I also welcome the authors to correct any misunderstandings I may have about their paper.\\n\\nAs an additional note to the authors, I have currently selected \\\"marginally below the acceptance threshold\\\" but I believe most of the weaknesses above are addressable without too much additional effort. If that were to be done (and the paper updated accordingly within the rebuttal period) I would happily increase my score in order to recommend acceptance.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"This paper proposes novel methods for adversarially attacking multi-agent LLM systems. I am uncertain of whether this meets the bar for requiring an ethics review, but it is clearly relevant to the privacy, security, and safety of AI systems.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your feedback and acknowledge the concern regarding the breadth of the paper. Our intention in adopting a broader scope is to establish foundational insights that are generalizable across various settings, rather than limiting the analysis to highly specific scenarios. For example, focusing exclusively on a hierarchical structure for code generation with 20% syntactic errors might yield conclusions that **do not extend to other tasks, such as mathematical problem solving, or to cases with different error levels (e.g., 40% or 60%)**. By presenting a broader analysis, we aim to provide a versatile framework that can guide future studies into more focused and scenario-specific investigations. To address your suggestion, we will restructure the paper to better highlight the key takeaways and defer some of the less critical findings to the appendices.\"}", "{\"comment\": \"Thank you for reading our response. We appreciate and are encouraged that you find our \\u201cNo Attack\\u201d experiment helpful!\\n\\nWe understand your concern about the breadth of the paper. However, we believe that a broader scope is essential at this preliminary stage to derive generalizable insights. A highly focused setting, while providing deeper insights into a specific scenario, risks limiting the applicability of the findings. For instance, if we were to conclude that a hierarchical structure performs best for code generation with 20% syntactic errors, such a result **might not hold for other tasks, such as math problem solving, or with different levels of error, like 40% or 60%**. 
By maintaining a broader approach, we aim to establish foundational, high-level conclusions that can guide future, more targeted investigations. This broader perspective lays the groundwork for deeper, scenario-specific studies moving forward.\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"I thank the authors for their detailed reply, and I believe that most of my questions and comments have been addressed. I also appreciate the effort they put into running additional experiments. In light of this, I am happy to update my score to recommend acceptance if the authors can add the clarifying remarks in their rebuttal to the actual paper, even if only in the appendices. My questions came about even after a careful reading of the paper, and I noticed that other reviewers had similar questions, so I can only expect that future readers will too. Given that they have already done the hard work of typing the answers out, I would strongly suggest the authors add all of these clarifications to the actual manuscript.\\n\\nRegarding the fault tolerance of distributed systems, I agree that some of the cited works mentioned are less directly relevant. I had in mind more the (vast) literature that explicitly considers the impact of network topology on robustness. I am no expert on this myself, but I believe this sometimes also falls under the complex systems literature. See, for instance, [this Wikipedia page](https://en.wikipedia.org/wiki/Robustness_of_complex_networks) or [this textbook](https://www.cambridge.org/us/universitypress/subjects/physics/statistical-physics/complex-networks-structure-robustness-and-function?format=HB&isbn=9780521841566). Even some of the very first things one learns in a distributed computing class is that, e.g. star networks can be less robust because they have a single point of failure, etc. Obviously I think it's important to study some of these things in the case of LLMs, but I want to make it clear that I do not view this more fundamental/theoretical aspect underlying the paper to be a novel contribution of the paper (and that is ok, as long as the authors do not claim novelty in this regard).\"}", "{\"title\": \"Official Response (1/n)\", \"comment\": \"We deeply appreciate your efforts in reviewing and recognition of our experiment\\u2019s comprehensiveness and non-trivial insights. Your feedback has significantly improved our paper. In the response, we address your concerns one by one.\\n\\n> The paper seem to be unaware of the existing, and very well known literature of malicious agents in a system (such as the Byzantine Generals problem in its many variations). There are algorithms that are extensively used in networking protocols and database systems.\\n\\nThank you for your insightful comment. We acknowledge that the Byzantine Generals problem and its variations represent foundational work in understanding malicious agents in distributed systems, with numerous algorithms addressing attacks such as DDoS, MITM, Sybil, and impersonation [3, 4, 5]. While these studies focus on **designing specific system-level mechanisms like authentication, authorization, and confidentiality** [1, 2, 6], our work diverges by exploring **the organizational structures and interaction strategies of LLM-based multi-agent systems**. 
The scenarios we investigate **mirror real-world human collaboration dynamics rather than the behavior of traditional distributed nodes**, offering a novel perspective distinct from conventional solutions.\\n\\n[1] Reiter, Michael, Kenneth Birman, and Li Gong. \\\"Integrating Security in a Group Oriented Distributed System.\\\" Proceedings 1992 IEEE Computer Society Symposium on Research in Security and Privacy. IEEE Computer Society, 1992.\\n\\n[2] Satyanarayanan, Mahadev. \\\"Integrating security in a large distributed system.\\\" ACM Transactions on Computer Systems (TOCS) 7.3 (1989): 247-280.\\n\\n[3] Harinath, Depavath, P. Satyanarayana, and M. R. Murthy. \\\"A review on security issues and attacks in distributed systems.\\\" Journal of Advances in Information Technology 8.1 (2017).\\n\\n[4] Brown, Philip N., Holly P. Borowski, and Jason R. Marden. \\\"Security against impersonation attacks in distributed systems.\\\" IEEE Transactions on Control of Network Systems 6.1 (2018): 440-450.\\n\\n[5] Kumar, Manoj, and Nikhil Agrawal. \\\"Analysis of different security issues and attacks in distributed system a-review.\\\" International Journal of Advanced Research in Computer Science and Software Engineering 3.4 (2013): 232-237.\\n\\n[6] Mudholkar, P. K., and M. Mudholkar. \\\"Security in distributed system.\\\" Proceedings of the International Conference and Workshop on Emerging Trends in Technology. 2010.\\n\\n> The paper presents as new discoveries facts such as hierarchical systems are more resilient because the agent at the top of the hierarchy is provided \\\"with various versions of the answer by multiple agents performing the same sub-task\\\". This is not a property of hierarchy, but of replication - again, distributed system theory contains many algorithms that can show how to protect against malicious agents in a fully flat and distributed environment.\\n\\nThank you for this insightful comment. While replication indeed enhances resilience, our argument highlights the additional role of hierarchy in improving performance. For instance, in the MAD system, **removing the Judge transforms the structure into a flat configuration where two agents debate directly**. Although replication ensures multiple interaction rounds, **the absence of hierarchical oversight degrades performance**, as the Judge's role in aggregating and adjudicating inputs is critical for efficiency and accuracy. This distinction underscores the unique contribution of hierarchical systems beyond simple replication.\\n\\n> The various agent implementations considered in this paper are essentially relatively short prompts provided to ChatGPT. The validity of various observations is thus dependent on the current version of ChatGPT, which might be different by the time this paper is presented.\\n> Do you expect that the observations in this paper about the relative strengths of different architectures will be still valid for the next versions of language models? What happens if this paper is published and becomes part of the knowledge-base of the LLMs?\\n\\nThank you for raising this important point. We acknowledge that advancements in LLMs may impact performance. 
To address this, we conducted experiments with different iterations of GPT, specifically GPT-3.5 and GPT-4o, **as detailed in Appendix A (Line 772, Page 15).** The results show a general performance improvement with GPT-4o while our **core conclusions remain consistent**: hierarchical structures consistently outperform others, rigorous tasks are more susceptible to malicious agents, and systems like MAD and Camel also exhibit performance gains. This suggests that **our findings are robust across model updates**, providing a strong foundation for future iterations of LLMs.\"}", "{\"comment\": \"I appreciate the authors for their thoroughly addressing my concerns and for their efforts in preparing more results. I believe that the highlighted messages in your responses are all crucial to demonstrate the significance of your work and clarify a lot of the confusions and concerns. I would love to see the rebuttal properly integrated and better presented in your manuscript.\\n\\nAt the same time, I agree with the comments made by Reviewer Q1ar that \\\"the paper attempts to do too much.\\\" This is consistent with one of my earlier concerns that you want to include more insights of the key observations, rather than vaguely presenting everything.\\nThus, I would suggest that the authors to attempt a critical restructuring of the paper such that it can better clarify the settings and showcase the key takeaways, while defering some of the less interesting findings to the appendices.\"}", "{\"title\": \"Thank you for the reply\", \"comment\": \"Your answers solve my concerns. I have raised my score to 8.\"}", "{\"title\": \"Official Response (2/n)\", \"comment\": \"> Some of the observations are also dependent on the limitations of current LLMs - for instance, the observation that the malicious agents gradually loose track of the assignment to introduce errors. These are problems that can be easily fixed by periodically reintroducing the tasks.\\n\\nThank you for this insightful comment. While periodically reintroducing tasks could address issues of task drift in malicious agents, our primary goal with AutoTransform is to implement **a one-time modification of agent profiles** without ongoing intervention. **Incorporating periodic task reintroduction would require significant modifications to the framework** (e.g., appending tasks to the latest user prompt), which falls outside the scope of our intended methodology.\\n\\n> The agents (even the malicious ones) do not seem to be aware of the architecture of the overall system. Does this matter?\\n\\nThank you for the insightful observation. We analyzed six systems to address this point. In **Camel, MAD, and SPP**, agents are aware of the overall system architecture, while in **Self-collab, MetaGPT, and AgentVerse**, they are not. Despite this difference, the performance drop under AutoInject is comparable: from 63.2 to 39.3 for architecture-aware systems and from 66.3 to 39.0 for architecture-unaware systems, as shown below:\\n\\n| | Aware | Unaware |\\n|---|---|---|\\n| Vanilla | 63.2 | 66.3 |\\n| AutoInject | 39.3 | 39.0 |\\n\\nCurrently, **our attack design does not leverage architectural knowledge** (e.g., instructing agents to target specific recipients). This is an intriguing area for future exploration and could further elucidate the impact of architecture awareness on system robustness.\\n\\n> The paper contains source code for the prompt for a malicious agent that tries to deceive the user about it maliciousness. 
Overall, the impact of such released source code is minimal, because examples of such prompts are widely available. The objective of the paper it to minimize the impact of such malicious agents, a legitimate research problem. Overall, I believe that this should not impact the paper, but it can benefit from the insight of an ethics reviewer.\\n\\nThank you for your thoughtful feedback. We acknowledge the potential misuse of the prompts described in our paper. To address this concern, we have proposed and rigorously evaluated two defense methods\\u2014**Challenger and Inspector\\u2014and their combination**, which are specifically designed to mitigate the influence of malicious agents. The results, summarized in the tables below, demonstrate the effectiveness of these defenses across different scenarios and attack types:\\n\\n| Self-collab | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 76.22 | 74.56 | 76.39 | 76.83 |\\n| AutoTransform | 43.29 | 70.73 | 74.40 | 75.00 |\\n| AutoInject | 40.85 | 71.95 | 67.68 | 73.78 |\\n\\n| Camel | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 62.20 | 62.23 | 61.03 | 63.79 |\\n| AutoTransform | 32.46 | 43.50 | 41.75 | 48.70 |\\n| AutoInject | 29.27 | 40.24 | 44.16 | 48.64 |\\n\\nThese defenses significantly reduce the effectiveness of malicious prompts while preserving system performance under benign conditions. Additionally, we agree that an ethics review could further enhance the paper's insights, and we welcome the input of an ethics reviewer if deemed necessary.\"}", "{\"title\": \"Official Response (1/n)\", \"comment\": \"We deeply appreciate your efforts in reviewing and recognition of our experiment\\u2019s comprehensiveness and impactful insights. Your feedback has significantly improved our paper. In the response, we address your concerns one by one.\\n\\n> The experiment models are limited: As it only tests on the gpt-based models, gpt3.5 and gpt4o. Following the weakness point 1, I suggest you can deploy experiments on the o1-mini, o1-preview, I hope to see the results.\\n\\nThank you for the valuable suggestion. To broaden the scope of our investigation and ensure our conclusions generalize **beyond the GPT model family**, we conducted additional experiments using one of **the state-of-the-art open-source models, the LLaMA-3.1-70B-Instruct**. The results, presented in the tables below, confirm that our findings hold across diverse model architectures, including non-GPT-based LLMs.\\n\\n| LLaMA-3.1-70B-Instruct | Linear | Flat | Hierarchical |\\n|---|---|---|---|\\n| No Attack | 73.78 | 76.83 | 76.15 |\\n| AutoTransform | 11.90 | 39.03 | 66.96 |\\n| AutoInject | 38.72 | 36.59 | 55.64 |\\n\\n> The paper presents results across tasks that involve different cognitive demands (e.g., code generation requiring precision versus translation being more subjective). However, there is limited analysis of how the degree of agent specialization affects system resilience in different MAS structures. Following the weakness point 2, whether agents with specialized roles (e.g., a math-focused agent vs. a generalist) exhibit varying vulnerabilities to malicious behaviors is not fully explored. This is a missed opportunity to highlight if specialized roles within the MAS require additional security considerations or different structural adjustments.\\n\\nThank you for this insightful comment. 
**In Section 4.7 (Line 422, Page 8)**, we conducted an initial experiment where the Manager (Instructor) was made malicious instead of the Coder in Camel and MetaGPT. Our findings indicate that **compromising higher-level task distributors leads to a more significant performance decline** in both systems. While this provides preliminary insights, we acknowledge the need for a more comprehensive analysis of how agent specialization impacts resilience to malicious behaviors. We have noted this as an important avenue for future research, as the primary focus of this paper is on the influence of organizational structures in MAS.\\n\\n> While the chosen tasks (code generation, math, translation, and text evaluation) provide a reasonable testbed, they may not fully represent the diversity of tasks that MAS are deployed to handle. These tasks are fairly discrete and objective; however, multi-agent systems in more nuanced, real-world applications (e.g., recommendation engines or dynamic response systems) might face unique types of malicious behavior. Including a more diverse array of tasks or explaining the rationale behind the current selection would strengthen the applicability of the findings.\\n\\nThank you for this insightful comment. We acknowledge that the four tasks chosen may not fully capture the diversity of real-world multi-agent system applications. Our selection was guided by **their prevalence in existing literature and their suitability for evaluating the resilience of the six systems studied**. We recognize the importance of exploring more nuanced and dynamic scenarios, such as recommendation systems or multidisciplinary team consultations in healthcare, and will incorporate these in future research to enhance the applicability of our findings.\"}", "{\"summary\": \"The paper experimentally examine the resilience of LLM-based multi-agent systems with various system architectures, when malicious agent presents. The work designs two methods to simulate malicious agents, and design two corresponding defense methods. The paper designs several experiments to examine how different types of system perform on several downstream tasks given various degree of errors injected by malicious agents.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The research direction is interesting and of great significance. The presentation is overall clear, not hard to follow. Various experiments are designed and several interesting observations are provided.\", \"weaknesses\": [\"The overall weaknesses concern with the contributions, presentation of details, and the experiment\", \"1. The design of malicious agents can be rather trivial and heuristic, and lacking representation guarantees.\", \"The proposed methods can be restricted and there is no *guarantee* whether the proposed two methods reflect (at least the majority types of) real-world attacks. As the author pointed out, AutoTransform is convenient, yet hard to analyze. This inherently does not align with the objectives of the paper, because this method provide minimal added insights over AutoInject. It is thus not clear why an LLM-based approach is necessary and considered one of the contributions. 
A more principled automatic approach that attempts to capture different types of attacks can be interesting to explore.\", \"While AutoInject seems more principled, whether $P_m$ and $P_e$, the degree of error injected on the input side, represent a good error rate metric is doubtful, because even injecting the same number of errors per line can lead to different output behavior. For instance, in AutoInject, injecting an error on only a single line of code `while b`, changing it to (1) `while b>=0` or (2) `while True`, leads to completely different results. In the latter case, if the agent running the code has no mechanism to jump out of an infinite loop, this leads to catastrophic propagation of the error to the entire system. In this example, it is clear the error in case (2) can be more dangerous, yet the provided error rate metric seems too trivial to capture it.\", \"2. The specific research questions seem shallow, the presentation of experiment results is not clear, and for certain interesting observed phenomena, the provided insights seem limited.\", \"The paper only discusses the observed phenomena, and does not seem to deepen the research area by providing more insights into how to use the consequences of these observations to design more resilient systems. For instance, in certain systems, it may be inevitable to choose a linear architecture. Given these observations, can we join a proposed defense method to make it more closely resemble a hierarchical system, so as to demonstrate the usefulness and significance of the observed results?\", \"Similarly, for the surprising observation that \\\"introduced errors can cause performance increase\\\", we only see discussion up to the reasons, but not how this result leads to any design insights. In particular, if agents are already capable of double-checking the results and identifying the injected errors, how do the proposed defense methods, which are designed to challenge the results of others, provide additional help on such tasks?\", \"The experiment settings are very vaguely presented. It is not clear which agent is malicious, which agent outputs the final results, and which task is used to evaluate different architectures, or whether the experiment results represent the average performance under all different settings. It is also not clear how many agents there are, and thus not clear if the conclusion holds only for a small-scale system, or can be generalized to more complicated systems.\", \"Figure 3 is very poorly plotted. The title says the figures demonstrate \\\"performance drops\\\", so one would think the y-axis represents the percentage drops compared to an intact system. However, it seems the y-axis corresponds to the absolute performance metrics. Do different tasks have the same metric? If not, then why does it make sense to compare different tasks on the same scale? It is also not immediately clear what \\\"Vanilla\\\" refers to, as it only appears in the plots.\", \"3. While it is claimed in the Abstract that the paper investigates the question of how we can increase system resilience to defend against malicious agents, the paper has limited discussion on this. The paper provides no definitive answer as to whether the proposed method achieves consistent performance gain in various scenarios, and cannot guarantee performance over more realistic scenarios.\"], \"questions\": \"1. You mentioned that the proposed defense methods (Challenger and Inspector) correspond to the two simulation methods (AutoTransform and AutoInject). 
It is then natural to explore how well these defense methods fix the corresponding type of malicious agent. Do you believe, if e.g., a malicious agent is due to AutoInject, then the Inspector defense should work consistently better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for reading our responses. We deeply appreciate your recognition!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper investigates different aspects of the effects of malicious agents on multi-agent systems. Specifically, it investigates four different questions:\\n1. How do different multi-agent system structures differ in their resilience to malicious agents?\\n2. How do different tasks differ in their susceptibility to sabotage by malicious agents in multi-agent systems?\\n3. How do different error rates (and different types of error rates -- rate of messages with errors vs rate of errors per message with errors) differ in their effect on multi-agent systems?\\n4. How do syntactic and semantic errors differ in their effect on multi-agent systems?\\n\\nAs the backdrop for this investigation, the paper studies three multi-agent structures (linear, flat, and hierarchical) with two instantiations each, applied to four different tasks (code generation, math problem solving, translation with commonsense reasoning, and text evaluation), two types of malicious agent simulation (AutoTransform and AutoInject), and two defense methods (Challenger and Inspector). The experiments are drawn from combining these elements, and the following results are found:\\n\\n1. Out of the three multi-agent system structures, the hierarchical structure is the most resilient to the malicious agent simulations in the tasks considered;\\n2. The multi-agent structures studied are less resilient to the malicious agent simulations studied in code generation and math problem solving than translation with commonsense reasoning and text evaluation;\\n3. Higher error rates generally lead to worse the performance of the multi-agent systems, except that increasing the rate of errors per message with errors beyond 0.4 does not seem to worsen performance. In addition, generally, higher rates of messages with errors is more detrimental to performance than higher rates of errors per message with errors.\\n4. Semantic errors have a bigger impact on the performance of the multi-agent systems than syntactic errors.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**Originality**\\n\\nThe paper seems original and resourceful in its methods for simulating malicious agents. Its hypothesis that the structures of multi-agent systems (linear vs flat vs hierarchical) is central to determining their resilience to malicious agents also seems original and intriguing.\\n\\n**Quality**\\n\\nThe selection of downstream tasks seems well done for the purposes of covering a wide range of unrelated tasks. The included case studies (section 4.6) were quite interesting, and I'd be excited to see a more systematic assessment of those phenomena and incorporation into the design of ablation experiments. Overall, the justifications presented for various observed phenomena seem coherent and intuitive. 
I especially liked the discussion on higher rates of errors per message leading to better performance than the middling cases (although observing the chart, the effect seems plausibly negligible).\\n\\n**Clarity**\\n\\nI'll argue in the Weaknesses section that the overall setup of the experiments is not particularly clear, but I think this is downstream of the choice of experiments (number of axes of variations and inconsistency in their variations). Given the choice of experiments, the paper was impressively clear and the large number of results are presented in a way that does not overwhelm.\\n\\n**Significance**\\n\\nMulti-agent systems seem significant, and their resilience is likely to become a critically important area of research. I commend the authors in their choice of problem. The aspects of the experiments that are analyzed (system structure, tasks, etc) also seem quite relevant for the broader question.\", \"weaknesses\": \"1. My main concern is that this paper attempts to do too much and is not sufficiently focused. Between the different multi-agent structures, different tasks, different attacks, different types of error rates, different error types, and different defenses, there are too many variables, each investigation ends up with limited depth, and the overall picture ends up not fully compelling. As a result of this ambition, details in the subquestions appear insufficiently investigated. For example, when discussing different multi-agent structures, I hoped to find more details about the structures and their dynamics, and more possible instantiations of each high-level structure (or, more systematic variation in the instantiations considered). Then the rest of the setup could be simplified (for example, it could focus on a single task category such as code generation, again possibly with more than one instantiation of the task category). The resulting claim would have to be more modest -- for example, it would pertain only to code generation and not any task -- but it would be much more strongly substantiated. As it stands (especially given the results do not seem particularly extreme), I'm left wondering if it is really the case that hierarchical structures are more resilient, or if the results apply only to the specific hierarchical systems tested and whether there is something particular about each of them that led to the results. I'm left unconvinced that the main claims of the paper are true.\\n\\n2. A related general concern is that the lack of systematic ablations or closer analyses of the specific results made me quite doubtful of them. I think many additional experiments could have been very illustrative. For example, in the investigation of the defense methods, I was left wondering how much the relation between the defense methods and the attack methods mattered. I find it plausible that these \\\"defense methods\\\" are just generally useful enhancements to the multi-agent systems, and was interested in seeing results on the multi-agent systems with the \\\"defense methods\\\" even without the attack in place. In that case, the discussion of these results would be a bit different.\\n\\n3. I found the combination of experiments a bit confusing and unclear. Part of this is directly downstream of point 1 (there are too many axes of variation), but it is also the case that the different factors are, it seems, inconsistently varied. 
For example, the introduction of \\\"Error Types\\\" in section 3 and the tables in the appendices seem to suggest this distinction is only being done in the code generation tasks. This isn't a bad thing per se (indeed this distinction makes the most sense in the context of code generation), but the inconsistency of the variations, added to the sheer number of them, makes it harder to form a coherent and compelling picture of the results. In a similar vein, I also find that I'm pretty confused as to which experiments included results with GPT-3.5 as well as GPT-4o, vs which ones only included results with GPT-3.5.\\n\\n4. Confidence intervals could be calculated and included in the bar charts. The results seem to be generally close enough that this could matter a fair amount.\", \"questions\": \"1. How confident are you that the main results are not spurious? By which I mean: how likely does it seem that the results would generalize with more numerous and systematic variations on each problem aspect studied (e.g. if there were more systematic variation within each \\\"multi-agent system structure\\\", each \\\"task category\\\", etc)? What evidence are you relying on for your assessment?\\n\\n2. How much iteration was done in the prompting of the systems? It seems plausible to me that many of the observed shortcomings of the multi-agent systems, the malicious agent simulators (e.g. the relative inability of AutoTransform to decrease performance on Translation and TextEval), and the defense methods may be attributed to insufficiently refining of the methods.\\n\\n3. Figure 3b includes results from a single GPT-3.5 agent. Are all other agent systems here exclusively using GPT-3.5, or is this including results with GPT-4o? The text doesn't make this clear. I'm guessing they all just use GPT-3.5, in which case it's all fine, but if not, then this would raise additional questions. In particular, it would seem that the simple baseline of a single GPT-4o agent would beat the multi-agent systems, and the rest of the investigation would be a bit closer to moot.\\n\\n4. A related question to the above: the fact that code generation as a task is more susceptible to sabotage by malicious agents seems surprising to me, since it is the most verifiable of the tasks (running the code provides a source of truth for its functionality that does not depend on trust in the specific agents). This is another example of my feeling that simple baselines can possibly beat many of the setups described. Is there a reason why the agents were not able to verify the code by running it?\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"I don't think this paper is net harmful and I think that this type of work is important for building safer systems. I would not like to see this type of work be slowed down due to ethical concerns (I think that would be counterproductive to ethics). But it is the case that this paper presents potentially harmful methodologies, so I'm flagging it for further review.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The author's responses do not change my original evaluation of the paper.\"}", "{\"title\": \"Official Response (1/n)\", \"comment\": \"We deeply appreciate your efforts in reviewing and your recognition of the originality and significance of our paper. Your feedback has significantly improved our paper. 
In the response, we address your concerns one by one.\\n\\n> My main concern is that this paper attempts to do too much and is not sufficiently focused. Between the different multi-agent structures, different tasks, different attacks, different types of error rates, different error types, and different defenses, there are too many variables, each investigation ends up with limited depth, and the overall picture ends up not fully compelling. As a result of this ambition, details in the subquestions appear insufficiently investigated. For example, when discussing different multi-agent structures, I hoped to find more details about the structures and their dynamics, and more possible instantiations of each high-level structure (or, more systematic variation in the instantiations considered). Then the rest of the setup could be simplified (for example, it could focus on a single task category such as code generation, again possibly with more than one instantiation of the task category). The resulting claim would have to be more modest -- for example, it would pertain only to code generation and not any task -- but it would be much more strongly substantiated. As it stands (especially given the results do not seem particularly extreme), I'm left wondering if it is really the case that hierarchical structures are more resilient, or if the results apply only to the specific hierarchical systems tested and whether there is something particular about each of them that led to the results. I'm left unconvinced that the main claims of the paper are true.\\n\\nThank you for your thoughtful feedback. We recognize that the limited number of system instantiations in each structure may constrain the depth of our analysis. Our aim in this work was to **offer a broad perspective on the factors influencing resilience in multi-agent systems** as a foundation for future exploration. While this study focuses on providing a comprehensive overview, we acknowledge the importance of deeper investigation into specific structures and their dynamics. In future work, we plan to expand our evaluations to include more systematic variations and additional instantiations, such as AutoGen, to further substantiate our findings and address the concerns you raised.\\n\\n> A related general concern is that the lack of systematic ablations or closer analyses of the specific results made me quite doubtful of them. I think many additional experiments could have been very illustrative. For example, in the investigation of the defense methods, I was left wondering how much the relation between the defense methods and the attack methods mattered. I find it plausible that these \\\"defense methods\\\" are just generally useful enhancements to the multi-agent systems, and was interested in seeing results on the multi-agent systems with the \\\"defense methods\\\" even without the attack in place. In that case, the discussion of these results would be a bit different.\\n\\nThank you for your thoughtful suggestion. To address this concern, we conducted additional experiments to evaluate (1) the **combination** of Challenger and Inspector, (2) performance in **\\u201cNo Attack\\u201d** scenarios, and (3) defense against **AutoTransform** attacks. 
The expanded results, including the previously reported Camel and Self-collab experiments, are summarized below:\\n\\n| Self-collab | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 76.22 | 74.56 | 76.39 | 76.83 |\\n| AutoTransform | 43.29 | 70.73 | 74.40 | 75.00 |\\n| AutoInject | 40.85 | 71.95 | 67.68 | 73.78 |\\n\\n| Camel | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 62.20 | 62.23 | 61.03 | 63.79 |\\n| AutoTransform | 32.46 | 43.50 | 41.75 | 48.70 |\\n| AutoInject | 29.27 | 40.24 | 44.16 | 48.64 |\\n\\nThe results show that while our defense methods significantly enhance resilience to attacks, they **provide only marginal improvements in \\u201cNo Attack\\u201d scenarios**. This indicates that their utility is primarily in mitigating adversarial challenges, rather than as general system enhancements.\"}", "{\"comment\": \"We thank you for acknowledging the additional experiments and explanations we provide. We are encouraged that you find these clarifications helpful. These additions significantly enhance our paper's clarity and address potential questions that future readers may have.\\n\\nRegarding fault tolerance in distributed systems, we appreciate the reviewer bringing attention to the broader literature on the impact of network topology on robustness, including references to complex systems. This is indeed a valuable perspective, and we will include this line of work in the revised related work section to provide a more comprehensive discussion and situate our contributions more effectively.\"}", "{\"title\": \"Author Response Period Summary\", \"comment\": [\"We deeply thank all reviewers for their time, efforts, and insightful feedback. Their suggestions have greatly improved our work. We are particularly encouraged by reviewer\\u2019s recognition of:\", \"**Interesting and significant research direction** (QzAB, Q1ar, jrv1, 3j9A)\", \"**Clear and structured presentation** (QzAB, Q1ar, jrv1)\", \"**Comprehensive and robust experimentation** (QzAB, 54qu, 3j9A)\", \"**Insightful findings and observations** (QzAB, Q1ar, 54qu, 3j9A)\", \"During the author response period, we have considered the constructive suggestions provided and made several significant improvements our manuscript, including:\", \"Analysis of **error diversity and error types** in AutoInject (QzAB, 54qu)\", \"More detailed evaluations of **defense methods** (QzAB, Q1ar, jrv1, 3j9A)\", \"Results with **LLaMA-3.1** (Q1ar, jrv1, 54qu)\", \"Related work in **distributed systems** (jrv1, 3j9A)\", \"**Improved presentation** (QzAB, Q1ar, 54qu, jrv1, 3j9A)\", \"Once again, we sincerely thank the reviewers for their thoughtful suggestions and valuable contributions to enhancing our paper.\"]}", "{\"title\": \"Official Response (3/n)\", \"comment\": \"> Figure 3 is very poorly plotted. The title says the figures demonstrate \\\"performance drops\\\" thus one would think the y-axis represents the percentage drops compared to an intact system. However, it seems the y-axis corresponds to the absolute performance metrics.\\n\\nThank you for highlighting this issue. We have revised the caption of Figure 3 (as well as Figures 5 and 9) to clarify the representation. The new caption explicitly describes the y-axis as **absolute performance metrics**, ensuring alignment with the data presented.\\n\\n> Are different tasks have the same metric? 
If not, then why does it make sense to compare different tasks on the same scale? It is also not immediately clear what \\\"Vanilla\\\" refers to, as they only appear in the plots.\\n\\nAll four tasks use accuracy as the evaluation metric, **ranging from 0 to 1.** For the translation task, accuracy specifically measures n-gram precision between the translated text and the reference text. The term \\\"Vanilla\\\" denotes **a baseline scenario where no attack or defense methods are applied** to the system.\\n\\n> While it is claimed in Abstract that the paper investigates the question of how we can increase system resilience to defend against malicious agents, the paper has limited discussion on this. The paper provides no definitive answer whether the proposed method achieves consistent performance gain in various scenarios, and cannot guarantee performance over more realistic scenarios.\\n\\nThank you for your thoughtful suggestion. To address this concern, we conducted additional experiments to evaluate (1) the **combination** of Challenger and Inspector, (2) performance in **\\u201cNo Attack\\u201d** scenarios, and (3) defense against **AutoTransform** attacks. The expanded results, including the previously reported Camel and Self-collab experiments, are summarized below:\\n\\n| Self-collab | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 76.22 | 74.56 | 76.39 | 76.83 |\\n| AutoTransform | 43.29 | 70.73 | 74.40 | 75.00 |\\n| AutoInject | 40.85 | 71.95 | 67.68 | 73.78 |\\n\\n| Camel | No Defense | Challenger | Inspector | Challenger + Inspector |\\n|---|---|---|---|---|\\n| No Attack | 62.20 | 62.23 | 61.03 | 63.79 |\\n| AutoTransform | 32.46 | 43.50 | 41.75 | 48.70 |\\n| AutoInject | 29.27 | 40.24 | 44.16 | 48.64 |\\n\\nThese results demonstrate that **combining the Challenger and Inspector defenses consistently improves system performance** under malicious attacks across various scenarios. We recommend adopting such a multi-agent defense strategy to enhance system resilience.\\n\\n> You mentioned that the proposed defense methods (Challenger and Inspector) correspond to the two simulation methods (AutoTransform and AutoInject). It is then natural to explore how well these defense methods fix the corresponding type of malicious agent. Do you believe, if e.g., a malicious agent is due to AutoInject, then the Inspector defense should work consistently better?\\n\\nThank you for the insightful comment. Our experiments with AutoTransform, as detailed **in the tables referenced in the previous question**, show that Challenger outperforms Inspector against AutoTransform, while Inspector is more effective against AutoInject in the Camel system. Interestingly, this trend is reversed in the Self-collab system. These findings suggest that **the effectiveness of a defense method is influenced more by the system architecture** than by the type of attack method employed.\"}" ] }
Bp0HBaMNRl
Differentiable Causal Discovery for Latent Hierarchical Causal Models
[ "Parjanya Prajakta Prashant", "Ignavier Ng", "Kun Zhang", "Biwei Huang" ]
Discovering causal structures with latent variables from observational data is a fundamental challenge in causal discovery. Existing methods often rely on constraint-based, iterative discrete searches, limiting their scalability for large numbers of variables. Moreover, these methods frequently assume linearity or invertibility, restricting their applicability to real-world scenarios. We present new theoretical results on the identifiability of non-linear latent hierarchical causal models, relaxing previous assumptions in the literature about the deterministic nature of latent variables and exogenous noise. Building on these insights, we develop a novel differentiable causal discovery algorithm that efficiently estimates the structure of such models. To the best of our knowledge, this is the first work to propose a differentiable causal discovery method for non-linear latent hierarchical models. Our approach outperforms existing methods in both accuracy and scalability. Furthermore, we demonstrate its practical utility by learning interpretable hierarchical latent structures from high-dimensional image data and demonstrate its effectiveness on downstream tasks such as transfer learning.
[ "Differentiable causal discovery", "causal representation learning", "latent variable models", "causal structure learning", "causal identifiability" ]
Accept (Poster)
https://openreview.net/pdf?id=Bp0HBaMNRl
https://openreview.net/forum?id=Bp0HBaMNRl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z6TIu9zwWK", "xoqqs2G2uR", "x6YPVvvGaE", "x3D9oG6bAw", "tnvYDrTF4L", "snZ0Qaeq90", "jPmuRwKtsH", "dC4fxozUaX", "cNLIdUfyC3", "bGXG9u1YHp", "ar2JG25oLm", "Yltw63NfrT", "WXtokEHd3r", "Tyw6SHfb7y", "QTpiAReqoi", "NSYC6daDGC", "MfTxaqTBJ8", "JyV0WBZxSS", "H1DZrjOjgI", "DgrQdj2oqP", "6W6OcoNo2Q", "61M7hkJkrE", "5ZsAFRLta9", "44NgGHgnng", "3yOzeWWqj5", "2wHLcU5DOH" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment" ], "note_created": [ 1732475787694, 1732207406270, 1732432360695, 1732664496668, 1734953438911, 1732207349149, 1732345645999, 1732207451861, 1732528938546, 1732591149778, 1730754024407, 1732207137963, 1733035797581, 1732288867891, 1732590982405, 1732207118976, 1733035677507, 1732313768153, 1731047869844, 1732207436434, 1732207246100, 1730574065497, 1732591023655, 1729865377109, 1737523695167, 1733035597023 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Area_Chair_aWk8" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_ehtm" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_sGn5" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_bSAD" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_ehtm" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_wGhK" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_sGn5" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ], [ "ICLR.cc/2025/Conference/Submission5264/Reviewer_ehtm" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5264/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors Continued\", \"comment\": \"**On evaluation and focus of the paper**\", \"we_emphasize_the_main_contributions_of_our_paper\": \"1. **Novel Identifiability Results**: As far as we know, this is the first work to prove the identifiability of latent hierarchical causal models without assuming linear or deterministic relations. \\n2. **Practical Methodology**: We propose a differentiable causal discovery method for latent hierarchical models, addressing scalability and error propagation issues of discrete search methods.\\n\\nOur evaluations in this paper aim to validate these contributions. 
Since our paper's main goal is causal discovery, we extensively evaluate our proposed method on metrics which are common and standard across causal discovery literature. Since most real-world datasets lack ground truth causal graphs, causal discovery methods are typically evaluated on synthetic data. In our updated manuscript, we extended synthetic experiments to include additional nonlinear activation functions and new baselines. **Our method significantly outperforms all baselines. Moreover, we are considerably faster compared to Kong et al [3] which is the only other non-linear hierarchical baseline.** \\n\\nIn order to demonstrate scalability, we also learn causal graphs for Image data. However, since the ground truth graph is not available, we evaluate our graph using indirect methods. Using MNIST data, we demonstrate that our model learns interpretable representations across layers and that these representations are useful for transfer learning compared to latent variable methods that do not learn such structures. While we include additional causal representation learning baselines, we do not claim state-of-the-art performance on transfer learning tasks. We have clarified this in the updated manuscript. \\n\\nAs mentioned by the reviewer, our step toward evaluating causal methods using additional metrics beyond discovery metrics is valuable. We plan to thoroughly investigate these metrics in future work. However, in this paper, our primary focus is on relaxing key assumptions in the causal discovery literature, providing theoretical contributions, and proposing a scalable differentiable causal discovery approach.\\n\\n\\n**References** \\n\\n[1] Dong, X., et al. \\\"On the Parameter Identifiability of Partially Observed Linear Causal Models.\\\" *arXiv preprint arXiv:2407.16975* (2024). \\n[2]Huang, Biwei, et al. \\\"Latent hierarchical causal structure discovery with rank constraints.\\\" Advances in neural information processing systems 35 (2022): 5549-5561.\\n[3] Kong, L., et al. \\\"Identification of nonlinear latent hierarchical models.\\\" *Advances in Neural Information Processing Systems* 36 (2023): 2010-2032. \\n[4] Agrawal, R., et al. \\\"The DeCAMFounder: nonlinear causal discovery in the presence of hidden variables.\\\" *Journal of the Royal Statistical Society Series B: Statistical Methodology* 85.5 (2023): 1639-1658. \\n[5] Kummerfeld, E., and Ramsey, J. \\\"Causal clustering for 1-factor measurement models.\\\" *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.* 2016. \\n[6] Yang, M., et al. \\\"CausalVAE: Disentangled representation learning via neural structural causal models.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.* 2021. \\n[6] He, J., et al. \\\"Variational autoencoders with jointly optimized latent dependency structure.\\\" *International Conference on Learning Representations.* 2019. \\n\\nPlease let us know if you have further concerns, and please consider raising the score if we have cleared existing concerns \\u2013 thank you so much!\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for their constructive feedback and insightful comments. This feedback helps strengthen our paper. 
Below, we address each of the concerns raised and provide clarifications.\\n\\n**W1: Assumptions and Contributions**\\n\\n**Q:** There is some exchangeability of these assumptions and in that sense I agree that the current assumption is a more practical one but it is not a novel one or a clear contribution until a clear relation between the assumptions is shown.\\n\\n**A:** Thank you for your comments. We would like to emphasize the novelty of our work and its connections to previous studies below: \\n\\nMost existing causal discovery methods assume the absence of causal relations between latent variables. Among the few methods that permit such relations, to the best of our knowledge, they are either limited to **linear** models [1][2] or **deterministic mappings** [3] (e.g., where $z = f(x)$ and $f$ is invertible). For instance, the method proposed by Kong et al. [3] requires that latent variables can be expressed as an invertible function of observed variables, which excludes even simple relationships like $X = \\\\sin(Z) + \\\\epsilon$ . These constraints significantly limit their applicability in real-world scenarios.\\n\\nIn contrast, our work introduces a more general and practical framework for modeling latent hierarchical graphs, relaxing the aforementioned restrictive assumptions. Specifically:\\n- **General Latent Relations:** Our framework supports non-linear and non-deterministic causal relationships, broadening the scope of latent structures that can be identified. This is a significant departure from existing approaches that are constrained by linearity or invertibility.\\n- **Jacobian Rank Indicator for d-Separation:** We establish a novel Jacobian rank indicator to characterize d-separation in latent hierarchical graphs. Using this indicator, we provide a rigorous proof of identifiability under our relaxed assumptions. To the best of our knowledge, this contribution is original and represents a non-trivial theoretical advancement in causal discovery.\\n- **Practical and Theoretical Implications:** By enabling the modeling of general latent hierarchical graphs, our work overcomes practical limitations in existing methods, paving the way for applications to more complex real-world datasets. Additionally, the Jacobian rank approach has potential for extension to other graph classes, opening new directions for future research.\\n\\nWe believe that relaxing restrictive assumptions and establishing identifiability for a broader class of latent graphs represent a significant and novel contribution to the field. These advancements address fundamental gaps in the existing literature and offer practical value for applications beyond current methods.\\n\\n**Q:** The key claimed advantage for better identifiability results comes from the fact that instead it is assumed that \\\"not yet account for structures where measured variables have children.\\n\\n**A:** First we would like to clarify that our key contribution to the identifiability results is the development of a **novel Jacobian-rank indicator** for determining the number of d-separating latent variables in the non-linear case. This allows us to handle nonlinear, non-deterministic, and non-invertible latent causal relations (see Theorem 1 in Section 4). \\n\\nFurthermore, these results can indeed be extended to account for structures where measured variables have children, as suggested by the work of Dong et al. [1], which builds upon Huang et al. [2] for linear models.\\n\\nDong et al. 
demonstrate that, with an appropriate indicator for the number of variables d-separating any two observed variables, it is possible to recover the causal graph even when measured variables have children, under weak structural assumptions. We acknowledge this as a potential extension of our work, which we briefly discussed in Section 7 of the manuscript.\"}", "{\"title\": \"Rebuttal by Authors Continued\", \"comment\": \"**References**\\n\\n[1] Sch\\u00f6lkopf, B., et al. \\\"Toward causal representation learning.\\\" *Proceedings of the IEEE* 109.5 (2021): 612-634. \\n[2] Gitter, A., et al. \\\"Unsupervised learning of transcriptional regulatory networks via latent tree graphical models.\\\" *arXiv preprint arXiv:1609.06335* (2016). \\n[3] Higgins, I., et al. \\\"SCAN: Learning hierarchical compositional visual concepts.\\\" *arXiv preprint arXiv:1707.03389* (2017). \\n[4] Liu, N., et al. \\\"Unsupervised compositional concepts discovery with text-to-image generative models.\\\" *Proceedings of the IEEE/CVF International Conference on Computer Vision* (2023). \\n[5] Weinstein, E. N., & Blei, D. M. \\\"Hierarchical Causal Models.\\\" *arXiv preprint arXiv:2401.05330* (2024). \\n[6] O'Brien, K. L., et al. \\\"Causes of severe pneumonia requiring hospital admission in children without HIV infection from Africa and Asia: The PERCH multi-country case-control study.\\\" *The Lancet* 394.10200 (2019): 757-779. \\n[7] Brehmer, J., et al. \\\"Weakly supervised causal representation learning.\\\" *NeurIPS* (2022): 38319-38331. \\n[8] Subramanian, J., et al. \\\"Learning latent structural causal models.\\\" *arXiv preprint arXiv:2210.13583* (2022). \\n[9] He, J., et al. \\\"Variational autoencoders with jointly optimized latent dependency structure.\\\" *ICLR* (2019). \\n[10] Yang, Mengyue, et al. \\\"Causalvae: Disentangled representation learning via neural structural causal models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.\\n\\nPlease let us know if you have further concerns, and please consider raising the score if we have cleared existing concerns \\u2013 thank you so much!\"}", "{\"title\": \"Looking forward for Futher Discussion\", \"comment\": \"We look forward to your thoughts on our response. Let us know if there is anything more we can do to address your comments!\\n\\nThanks again for your time and constructive feedback!\"}", "{\"metareview\": \"This paper attacks a longstanding problem in causal modeling: differentiability for hierarchical causal discovery. The authors propose a solution to this important problem which is motivated by theoretical results on identifiability of nonlinear latent hierarchical causal models.\\n\\n__Strengths:__ \\n1. The authors tackle a significant and important problem \\n2. The research is solid: from theory, to insights and then to a proposed solution. \\n3. The experimental evaluation (after the rebuttal) is convincing. \\n\\n__Weaknesses:__ \\nThe main weakness (brought up by one reviewer) is with regards to training and evaluation on downstream tasks. \\n\\nWhile more evidence of the method's utility could be provided for downstream tasks, this paper largely aligns with evaluations found elsewhere in the literature of causal discovery, so overall even in its current state it should be a useful addition to the literature.\", \"additional_comments_on_reviewer_discussion\": [\"There has been extensive discussion and both the reviewers and authors were deeply engaged. 
Key topics:\", \"Clarifications / notation: These have been largely resolved during the discussion\", \"Experiments: The authors provided additional experiments (and baselines) which were appreciated by the reviewers\", \"Downstream task (see weaknesses and paragraph below).\"]}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for their feedback and strong support of our work. We appreciate that the reviewer has recognized the originality and strong theoretical, methodological, and empirical contributions. Below, we address the concerns and provide clarifications.\\n\\n**Q1: Baselines for Colored MNIST Results** \\n\\nThank you for highlighting this point. In the revised manuscript, we have included two additional baselines\\u2014**CausalVAE** [1] and **GraphVAE** [2]\\u2014that explicitly model the latent causal structure. These baselines provide a more meaningful comparison for our approach. \\n\\nAdditionally, we clarified that our aim is not to achieve state-of-the-art performance on the Colored MNIST task but to evaluate the **transferability** of our learned representations in comparison to other representation learning methods. \\n\\n**Q2: Discussion of Learned MNIST Graph**\\n\\nWe have added a detailed discussion of the learned MNIST graph in **Appendix B.2**. This includes an analysis of **Figure 4** and **Table 3**, clarifying how the latent variables align with an interpretable hierarchical structure. \\n\\n**References** \\n\\n[1] Yang, Mengyue, et al. \\\"CausalVAE: Disentangled representation learning via neural structural causal models.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.* 2021. \\n\\n[2] He, J., et al. \\\"Variational autoencoders with jointly optimized latent dependency structure.\\\" *International Conference on Learning Representations.* 2019.\\n\\nWe hope these updates address the reviewer\\u2019s concerns. Please let us know if there are further points requiring clarification.\"}", "{\"comment\": \"Thank you for the example, now the invariance makes sense to me!\\nTo improve the clarity of your works, I think it would be good to add such an example in the Appendix.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for their constructive feedback and suggestions to improve our paper. We appreciate that the reviewer has recognized the clarity of our writing, the correctness of the theoretical results, and the novelty of introducing a differentiable DAG learner for latent hierarchical causal models (LHCMs). Below, we address the concerns and provide clarifications.\\n\\n**Section 2: Do papers on differentiable causal discovery referenced in the paper actually perform causal discovery?** \\n\\nWe agree that some of the referenced works in the related work section do not fully perform causal discovery, as highlighted by [1] and [2]. We have updated the manuscript to explicitly discuss these limitations. However, we note that the critiques in [1] are not universally applicable. Improved evaluation protocols, such as using sampling with non-equal noise variances, can mitigate these issues and provide robust results for proposed causal discovery approaches [4]. These practices have been incorporated into our experimental evaluations. \\n\\n**Section 4: Identifiability upto permutation** \\n\\nLatent variables are inherently hidden, and their labels can only be identified up to a permutation. 
For instance, the structures `Z2 <- Z1 -> Z3` and `Z1 <- Z3 -> Z2` represent the same causal relationships, even though the meanings of `Z1`, `Z2`, and `Z3` differ. This is because the d-separation properties, which define the causal semantics, remain unchanged under such permutations. \\n\\n**Section 5: Enforcing structural constraints**\\n\\nCondition 1(i) is incorporated through Eqs. (6) and (8). These conditions are reflected in the final term of Eq. (10), which equals zero if and only if each latent variable has two pure children, ensuring that the structural constraints are satisfied. \\n\\nCondition 1(ii) is reflected in the block structure of the adjacency matrix `M`, which enforces acyclicity and consistency with the hierarchical structure. \\n\\n**Q: Causal Discovery vs. Structure Learning:** \\nEnforcing structural constraints alone does not guarantee recovery of the true causal graph, as multiple graphs can satisfy these constraints. Our method addresses this by jointly optimizing for likelihood and sparsity while satisfying the constraints, enabling the discovery of the true causal graph. \\n\\n**Section 6** \\n-**Tab. 1: Why do the baselines perform so poorly?** \\n\\nThe baselines perform poorly because they rely on assumptions of linearity or deterministic relationships, which are not satisfied in our data. This highlights the effectiveness of our method in more general nonlinear settings. \\n\\n-**Synthetic Experiment: How were the ground truth structures chosen?** \\n\\nThe ground truth structures were chosen randomly. \\n\\n-**Image Experiments: Why not compare with CausalVAE?** \\n\\nThank you for pointing this out. CausalVAE [3] requires additional information in the form of concept labels, which our setting does not provide. To address this, we have included a comparison with a modified version of CausalVAE that does not use concept labels in the updated manuscript. \\n\\n**References** \\n\\n[1] Reisach, C., et al. \\\"Beware of the simulated DAG! Causal discovery benchmarks may be easy to game.\\\" *NeurIPS,* 2021. \\n[2] Seng, A., et al. \\\"Learning Large DAGs is Harder Than You Think.\\\" *ICLR,* 2024. \\n[3] Yang, G., et al. \\\"CausalVAE: Structured Causal Disentanglement in Variational Autoencoder.\\\" *NeurIPS,* 2020. \\n[4] Ng, I., Huang, B., & Zhang, K. \\\"Structure learning with continuous optimization: A sober look and beyond.\\\" *Causal Learning and Reasoning, PMLR,* 2024.\\n\\nPlease let us know if you have further concerns, and please consider raising the score if we have cleared existing concerns \\u2013 thank you so much!\"}", "{\"title\": \"Thanks for rebuttal.\", \"comment\": \"Key point from the authors \\\"However, in this paper, our primary focus is on relaxing key assumptions in the causal discovery literature, providing theoretical contributions, and proposing a scalable differentiable causal discovery approach.\\\" There are so many identifiability results that it is really hard to obtain a good overview how the approaches connect or relate. Due to the unlimited number of combinations of assumptions, it is a sheer endless list of possible papers where it is hard to evaluate real progress.\\nGiven that the assumptions are not strictly holding in practice and are at best crude approximations of reality (in the end it is embedded in a deep learning model) it is crucial to not only evaluate wrt discovery metrics which have inherent problems but to actually train on downstream tasks in addition. 
Does the identified causal graph actually help in something we are interested in? In table 2 that is actually done and that is really great and so I update my score but overall I am not convinced even after reading the reviews of the other authors.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your reply and continued discussion. We are happy we could clarify the causal interpretation of permutation invariance. In order to increase clarity, we will add an example in the Appendix as per your suggestion.\\n\\nThanks again for your time and engagement! We highly appreciate this opportunity to exchange opinions and discuss with you.\"}", "{\"summary\": \"This paper shows that a particular class of causal graphs with hierarchical latent variables is identifiable by leveraging properties of the Jacobian of the conditional expectation function between subsets of observed variables. They then present an efficient algorithm for inferring the hierarchical graph. They present strong empirical results on both synthetic & image-based problems.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I thought this was interesting, original work. The class of graphs that they study is obviously limited but seems practical & the rank condition is intuitive.\", \"The paper is very well written - both the theory and methods sections do a good job of explaining the intuition for why the method works\", \"The empirical results are strong on the datasets that they tested.\"], \"weaknesses\": [\"The coloured MNIST results appear very strong (though this is not my area), but not contextualized in the domain generalization literature. I would have at least expected you to report the published numbers from recent work from that setting. Autoencoders & Beta-VAE are not the right baselines?\", \"I would have liked a more detailed discussion of the learned MNIST graph. I am not sure what to make of figure 4 or table 3 in the appendix? Do those latents make sense? Is there a natural hierarchical structure that we would expect?\"], \"questions\": \"See weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response Continued\", \"comment\": \"Note: The test set in CelebA is highly imbalanced (\\\\(P(Y=1) = 0.97\\\\)). Consequently, we use AUC as the evaluation metric instead of accuracy, which can be misleading in such scenarios. For instance, GraphVAE achieves an accuracy of 0.97 simply by predicting \\\\(Y=1\\\\) for every test point.\\n\\nWe note that many other baselines, such as [4][5] (highlighted by Reviewer wGhK) and others [6][7], require auxiliary information, multiple domains, or interventional data, making them unsuitable for comparison in our setting.\\n\\nCausalVAE requires concept labels for identifiability as well. However, we were able to adapt their algorithm to run without concept labels. For GraphVAE, we could not find an official implementation from the authors, so we implemented the baseline ourselves. Despite this effort, we observed unstable training and generally poor performance compared to both CausalVAE and our proposed approach.\\n\\n**Motivation and Contribution of Our Work** \\n\\nLatent hierarchical causal models are found across various domains, including gene regulation, computer vision, political science, and epidemiology [8][9][10]. 
Despite their significance, there is a lack of theoretical understanding regarding the identifiability of these models in the general non-linear setting. Moreover, existing approaches for latent hierarchical causal discovery primarily rely on discrete search, which is computationally infeasible for high-dimensional data. Current causal representation learning methods often do not model hierarchical structures and require additional information, such as interventions or concept labels.\", \"our_contributions_are_twofold\": \"1)We prove identifiability results for general nonlinear latent hierarchical causal models. 2)We propose a differentiable approach that scales effectively to high-dimensional data.\\n\\nIn the absence of ground truth causal structures for real datasets, causal discovery methods are typically evaluated using synthetic data. We follow this practice and demonstrate significant improvements over baselines on synthetic datasets. For real-world data, we indirectly validate the effectiveness of our approach by showcasing the interpretability and transferability of the learned representations.\\n\\n***References*** \\n\\n[1] Agrawal, R., et al. \\\"The DeCAMFounder: nonlinear causal discovery in the presence of hidden variables.\\\" *Journal of the Royal Statistical Society Series B: Statistical Methodology* 85.5 (2023): 1639-1658. \\n\\n[2] Yang, M., et al. \\\"CausalVAE: Disentangled representation learning via neural structural causal models.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.* 2021. \\n\\n[3] He, J., et al. \\\"Variational autoencoders with jointly optimized latent dependency structure.\\\" *International Conference on Learning Representations.* 2019. \\n\\n[4] Brehmer, J., et al. \\\"Weakly supervised causal representation learning.\\\" *Advances in Neural Information Processing Systems* 35 (2022): 38319-38331. \\n\\n[5] Subramanian, J., et al. \\\"Learning latent structural causal models.\\\" *arXiv preprint arXiv:2210.13583* (2022). \\n\\n[6] Zhang, K., et al. \\\"Causal representation learning from multiple distributions: A general setting.\\\" *arXiv preprint arXiv:2402.05052* (2024). \\n\\n[7] Hyvarinen, A., Sasaki, H., and Turner, R. \\\"Nonlinear ICA using auxiliary variables and generalized contrastive learning.\\\" *The 22nd International Conference on Artificial Intelligence and Statistics.* PMLR, 2019. \\n\\n[8] Gitter, A., et al. \\\"Unsupervised learning of transcriptional regulatory networks via latent tree graphical models.\\\" *arXiv preprint arXiv:1609.06335* (2016). \\n\\n[9] Higgins, I., et al. \\\"SCAN: Learning hierarchical compositional visual concepts.\\\" *arXiv preprint arXiv:1707.03389* (2017). \\n\\n[10] Weinstein, E. N., and Blei, D. M. \\\"Hierarchical Causal Models.\\\" *arXiv preprint arXiv:2401.05330* (2024).\"}", "{\"title\": \"Looking forward for Futher Discussion\", \"comment\": \"We sincerely thank you for engaging with our rebuttal and participating in the discussion. We appreciate your time and valuable feedback. With the discussion period ending soon, we hope our response addresses your lingering concerns. 
We understand your busy schedule, but would greatly appreciate it if you could consider our updates when discussing with the AC and other reviewers.\\n\\nThank you again for your thoughtful and constructive input!\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you for the detailed response to my review.\\n\\nMost of my concerns have been resolved.\\n\\nThe only concern/question still unresolved for me is whether the learned graphs can be interpreted as causal graphs. Let me explain why: As you said in the rebuttal, the meaning of the latent variables (and so the labels) can differ, leading to permutation invariance. I agree that Eq. 6 & 8 encode the structural constraints imposed on the graph to be learned. However, given the permutation invariance, a causal interpretation is not possible without further assumptions. To stick to the example above, `Z2 <- Z1 -> Z3` and `Z1 <- Z3 -> Z2` have causally different meanings. In the first case, Z1 causes Z2, and in the second case, Z3 causes Z2. I would appreciate it if the authors could try to clarify how the permutation invariance of latents aligns with the causal interpretation of the learned graph.\\n\\nNevertheless, the paper's quality increased in the revised version, and new baselines have been added. Thus, I increased my score to 8.\"}", "{\"title\": \"Thanks for response.\", \"comment\": \"Thank you for your thoughtful response and for increasing your score. We greatly appreciate your engagement with our work and your constructive feedback. We particularly appreciate the positive recognition of our empirical work.\\n\\nWe would like to address the few lingering concerns of the reviewer.\\n\\n> Due to the unlimited number of combinations of assumptions, it is a sheer endless list of possible papers where it is hard to evaluate real progress.\\n\\nThanks for the thoughtful comments; however, we respectfully disagree with this premise. Exploring assumptions for identifiability is not only essential for ensuring model robustness but also crucial for guiding model design. Moreover, while there could theoretically be an unlimited combination of assumptions when adding new ones, the focus of our paper is on **removing and relaxing assumptions** (such as linearity or invertibility). Since existing work relies on a finite set of assumptions, this concern does not apply to our approach.\\n\\nMoreover, we consider a widely studied class of models in this paper. Latent hierarchical models (or special cases of such models like trees, or 1-factor measurement models) have been extensively studied in prior work [1][2][3][4][5][6], but they were unable to establish identifiability in the general non-linear case. We generalize the setting of these papers and several of these works are special cases of our framework.\\n\\n> Given that the assumptions are not strictly holding in practice and are at best crude approximations of reality (in the end it is embedded in a deep learning model). Does the identified causal graph actually help in something we are interested in?\\n\\nThanks for asking this question and allowing us to clarify applications of such models. These assumptions often do hold in interesting problems. We provide several examples where these assumptions are valid and useful:\\n\\n- **Gene Regulatory Networks (GRNs):** Gene expression data is observed, but transcriptional regulatory networks are latent. 
Latent hierarchical models help identify hidden regulators or shared biological function [7].\\n- **Image Data:** Generative models for image data are hypothesized to be compositional and hierarchical, with latent abstract concepts [8][9].\\n- **Complex Social Systems:** Hierarchical latent structures play a crucial role in understanding complex systems in political science and epidemiology [10]. For example, in epidemiology, clinical and microbiological findings are observed but disease states and population-level etiological agents are latent [11].\\n\\nFor image data, we demonstrate the utility of our approach in terms of interpretability and transferability, as shown in Section 6.2 (Table 2) and Appendix B.2 (Table 4). Beyond image data, we believe our methodology would assist domain experts in fields such as genomics and sociology, enabling the discovery of latent causal graphs even when the data does not satisfy strict assumptions.\\n\\nFinally, we are uncertain about the reviewer\\u2019s statement that \\\"in the end it is embedded in a deep learning model.\\\" While deep learning is used to model the generative process, the causal relations are explicitly parameterized using a masking matrix. During training, the parameters of this matrix converge to 0 or 1, explicitly revealing the causal relationships between the latent variables.\\n\\nWe hope these clarifications address your concerns and highlight the broader applicability and robustness of our work.\"}", "{\"title\": \"General response\", \"comment\": [\"We sincerely thank all reviewers for their constructive feedback and valuable suggestions. We appreciate the recognition of the originality, theoretical rigor, and clarity of our work. The reviewers\\u2019 feedback has helped us strengthen the paper. We have made changes to the manuscript, highlighted in blue for ease of reading. We address key changes and improvements in this response, with more details in the individual responses.\", \"**Strengths Highlighted by Reviewers**\", \"**Novel Theoretical Contribution**: Our paper, to the best of our knowledge, is the first to establish identifiability results for nonlinear latent hierarchical causal models. Our proofs are original and use novel techniques like jacobian rank indicator for d-separation. We also propose a practical, differentiable latent causal discovery algorithm, overcoming limitations of discrete search methods like error propagation and scalability. (Reviewer wGhK, Reviewer bSAD, Reviewer ehtm)\", \"**Empirical Evaluations**: Our experiments demonstrate that the proposed approach significantly outperforms baselines on causal discovery. Additionally, we validate our learned representations on real datasets, showcasing their interpretability and the utility of causal representations. (Reviewer bSAD, Reviewer sGn5, Reviewer ehtm)\", \"**Well written**: Reviewers mention our paper is well-written and easy to follow with a formal discussion of assumptions and theorems. (Reviewer bSAD, Reviewer ehtm)\", \"**Additional Experiments**\", \"To address the reviewers' concerns, we have conducted several new experiments and added them to the updated manuscript:\", \"**Synthetic Data**:\", \"Added experiments using `tanh` as the activation function alongside `LeakyReLU`.\", \"Included **DeCAMFounder** [1] as an additional baseline for a more comprehensive comparison.\", \"The synthetic graphs used in the original paper were generated randomly. 
In the updated manuscript, we have extended the evaluation by including additional experiments on a wider range of randomly generated DAGs. The results are presented in Appendix B.1, Table 3.\"], \"table_1\": \"Performance of latent hierarchical causal discovery methods on various graphs\\n\\n| Structure | Ours (SHD \\u2193) | Ours (F1 \\u2191) | KONG (SHD \\u2193) | KONG (F1 \\u2191) | HUANG (SHD \\u2193) | HUANG (F1 \\u2191) | GIN (SHD \\u2193) | GIN (F1 \\u2191) | DeCAMFounder (SHD \\u2193) | DeCAMFounder (F1 \\u2191) |\\n|------------------------|--------------|-------------|--------------|-------------|---------------|--------------|-------------|------------|-----------------------|----------------------|\\n| Tree (LeakyReLU) | **0.67** | **0.96** | 5.83 | 0.63 | 6.00 | 0.65 | 7.50 | 0.00 | 11.83 | 0.00 |\\n| V-structure (LeakyReLU)| **0.67** | **0.97** | 7.67 | 0.61 | 5.50 | 0.72 | 8.00 | 0.17 | 17.33 | 0.00 |\\n| Tree (Tanh) | **1.00** | **0.95** | 5.50 | 0.63 | 4.50 | 0.70 | 7.50 | 0.00 | 16.50 | 0.00 |\\n| V-structure (Tanh) | **1.17** | **0.95** | 4.33 | 0.79 | 4.50 | 0.76 | 9.50 | 0.36 | 18.50 | 0.00 |\\n\\n\\n\\n\\n2. **Real Data**: \\n - Added comparisons with **CausalVAE** [2] and **GraphVAE** [3], explicitly modeling latent causal structures to strengthen our evaluation. \\n\\n\\n \\n - Expanded evaluation to include the **CelebA** dataset for broader empirical validation. \\n\\nThe detailed results for CMNIST and CelebA datasets are available in Section 6.2 Table 2 and Appendix B.2 Table 4 in the updated manuscript.\", \"table_2\": \"Test Accuracy on the CMNIST dataset.\\n| | Ours | Graph VAE | Causal VAE |\\n|--------------------|--------|-----------|------------|\\n| Reverse | **0.979** | 0.665 | 0.916 |\\n| Blue | 0.753 | **0.766** | 0.653 |\", \"table_4\": \"Test AUC on the CelebA dataset.\\n| | Ours | Graph VAE | Causal VAE |\\n|---------|--------|-----------|------------|\\n| CelebA | **0.8228** | 0.500 | 0.7289 |\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"We sincerely appreciate your time and valuable feedback. With the discussion period ending soon, we hope our responses address your concerns. We understand your busy schedule, but would greatly appreciate it if you could consider our updates when discussing with the AC and other reviewers.\\n\\nThank you again for your thoughtful and constructive input!\"}", "{\"comment\": \"Thank you for your thoughtful response and for increasing your score. We greatly appreciate your engagement with our work and your constructive feedback.\\n\\nRegarding the interpretation of permutation invariance and its alignment with causal interpretation, we would like to clarify further using an example. Consider a latent graph (we drop observed variables for simplicity) where **hair color \\u2190 gender \\u2192 facial hair**. Since all variables are latent, their labels can be arbitrarily assigned. For instance:\\n\\n1. If `Z1 = gender`, `Z2 = hair color`, and `Z3 = facial hair`, the causal graph is `Z2 \\u2190 Z1 \\u2192 Z3`. \\n2. Alternatively, if `Z1 = hair color`, `Z2 = facial hair`, and `Z3 = gender`, the causal graph is `Z1 \\u2190 Z3 \\u2192 Z2`.\\n\\nWhile the labeling of the latent variables may differ, the **semantics of the causal relationships and the underlying structure of the graph remain consistent**. \\n\\nWe hope this explanation clarifies the alignment between permutation invariance and the causal interpretation of the learned graph. 
Thank you again for your detailed feedback and support!\"}", "{\"summary\": \"The main theoretical contribution of the paper is showing identifiability of nonlinear latent hierarchical causal models. Building on this theory, the authors propose a practical differentiable latent causal discovery approach. Experiments are performed on synthetic data as well as the coloured MNIST dataset to demonstrate efficacy of the approach.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is, to the best of my knowledge, the first to provide identifiability results for nonlinear latent hierarchical causal models. The proof technique seems correct to me, though I did not check it thoroughly (for example, the appendix).\\n\\n2. Estimating equation 9 using Donsker-Varadhan representation is novel.\", \"weaknesses\": \"1. **Experimental limitations**:\\n\\n a. **Synthetic experiments**: Instead of experimenting on just 4 structures given in figure 3, I would encourage authors to randomly generate DAGs and run experiments on these structures. For the synthetic experiments, the analysis would be stronger if the authors also try nonlinear activations for eq 1, instead of piecewise linear activation such as LeakyRELU.\\n\\n b. **Real experiments**: The baselines for the experiments on CMNIST are VAE and $\\\\beta$-VAE -- both of which do not learn a structure over latent variables -- when better baselines exist [1-3]. Applications to real world data is also limited, and even in the colored MNIST setting, only 2 digits seem to be used. \\n\\n2. **Missing/weak motivation**: It is also unclear why such models are useful in the real world: motivation for why one needs such models would make the paper more strong. In the introduction, causal discovery is motivated but the there is no true causal structure for the CMNIST data. Given this, what is the purpose for obtaining a hierarchical structure as in Fig 2b? For what tasks, is such a hierarchical representation useful?\\n\\n3. L447 - 453 mentions interventions but key details are missing regarding interventional data generation (single node or multi node interventions, soft vs hard intervention, and intervention values).\\n\\n4. **Related work**: The task of causal discovery over latent variable hierarchical models is closely related to causal representation learning but this has not been discussed and works in the space have not been cited [1, 2]. \\n\\n---\\n\\n[1] Brehmer, Johann, et al. \\\"Weakly supervised causal representation learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 38319-38331.\\n\\n[2] Subramanian, J. et al. Learning latent structural causal models. arXiv preprint arXiv:2210.13583 (2022)\\n\\n[3] He, J. et al. Variational autoencoders with jointly optimized latent dependency structure. In International Conference on Learning Representations, 2019.\", \"questions\": \"1. What is the implication of condition 3?\\n\\n2. There is a typo in equation 8, the number of small norms and large norms do not match.\\n \\n3. From eq 6, we see that $|| M_{i, :} \\\\odot \\\\pi (1 - M_{j, :})||_1 \\\\geq 2$. \\n\\nHowever in the subject to constraint in eq 8, $||M_{i,:}||_1$ times the above entity is enforced to be $\\\\geq 2$. This is a bit unclear -- can the authors clarify?\\n\\n4. 
Caption for figure 2c is unclear.\", \"ps\": \"Score has been increased post rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors Continued\", \"comment\": \"**W2: Experimental Evaluations**\\n\\n**Real Data Evaluation**\\n\\nWe thank the reviewer for appreciating the diversity of our evaluation methods. To further strengthen the evaluation, we have performed additional experiments:\\n- Added experiments on the **CelebA** dataset. Results are in Appendix B.2 Table 4.\\n- Incorporated two additional baselines: **CausalVAE** [6] and **GraphVAE** [7], which explicitly model latent causal structures. Results are in Section 6 Table 2. \\n\\n**Q1: Baselines** \\n\\nOur method addresses the identifiability of **nonlinear latent hierarchical models**, a setting with very limited comparable baselines. Most existing methods do not allow relations between latent variables or require strict assumptions. **We have included the baselines which model latent hierarchical structures. These baselines are closest to our setting.**\\n\\nTo provide a more comprehensive comparison, we have included **DeCAMFounder** [4] as a baseline in the updated manuscript. DeCAMFounder founder does not model latent hierarchical structures but learns causal graphs even in the presence of latent confounding. However, as it does not allow relations between latent variables, it performs poorly in our setting.\\n\\n**FOFC** [5] is another causal discovery method which aims to discover causal structures with latent variables. FOFC does not allow relations between latent variables and requires each latent variable to have at least three pure children (in contrast we require only two). We attempted to compare with FOFC, but its strict requirement for three pure children per latent variable is not satisfied by our dataset. \\n\\nOur approach shows substantial improvements in Structural Hamming Distance (SHD) and F1 scores over all baselines, confirming its effectiveness. \\n\\nPlease refer to the general response for more details on additional experiments.\\n\\n**Q2: Additional Visualizations and Plots**\\n\\nWe appreciate the feedback on incorporating additional visualizations. To address this concern: \\n- We included a **plot of performance vs. computational time** for each method in Figure 2. However, we would like to clarify that most baselines compared to are not deep learning approaches since there do not exist any for latent hierarchical causal discovery.\\n- We included loss vs epoch plots in Appendix B.1 Figure 5 as suggested by the reviewer.\"}", "{\"comment\": \"We thank the reviewer for their constructive feedback and insightful comments. We appreciate that the reviewer has recognized the novelty and significance of our theoretical results. Below, we address each of the concerns raised and provide clarifications.\\n\\n**W1: Experimental Limitations**\\nWe have updated the experimental section of our manuscript in response to the concerns regarding the experiments. Please refer to the general response for details. In summary, we add experiments for each of the points the reviewer raised.\\n- Added experiments using **tanh** as a non-linear activation function, complementing the results using piecewise linear **leakyReLU**. Results are in Section 6 Table 1. 
\\n- Randomly generated a diverse set of **DAGs** and conducted experiments on these structures, providing robust and generalizable insights. Results are in Appendix B.1 Table 3. \\n- Integrated **GraphVAE** [9] and **CausalVAE** [10] as additional baselines, showcasing comparative performance on latent causal structure learning. Results are in Section 6 Table 2.\\n- Expanded the scope of real-world experiments to include the **CelebA dataset** for further validation. Results are in Appendix B.2 Section 4. \\n\\n\\n**W2: Missing/Weak Motivation**\\n\\nWe have clarified the applications of latent hierarchical causal models in the introduction of the revised manuscript. Below, we describe the importance of such models:\\n\\nLearning latent causal models can address critical challenges in **interpretability**, **distribution shifts**, and **scientific discovery** [1]. Latent hierarchical models are particularly relevant in domains such as:\\n\\n- **Gene Regulatory Networks (GRNs):** Gene expression data is observed, but transcriptional regulatory networks are latent [2].\\n- **Image Data:** Generative models for image data are hypothesized to be compositional and hierarchical with latent abstract concepts [3][4].\\n- **Complex social systems:** Hierarchical latent structures have been show to play a crucial role in understanding complex systems in political science and epidemiology. [5][6] \\n\\nAlthough latent hierarchical causal models have tangible real-world applications, the theoretical understanding of these models has been underexplored. Existing works primarily demonstrate identifiability for **linear, discrete**, or **deterministic models**, making our contribution the first to establish identifiability for **general nonlinear hierarchical latent causal models**. Additionally, these methods use discrete search which is infeasible for high-dimensional data like images. We propose a differentiable approach and demonstrate that such latent representations can be learnt for high-dimensional data. While the true latent causal graph is unknown for real image data, our results showcase the **interpretability** and **transferability** of learned representations on datasets like MNIST and CelebA.\\n\\n**W3: Intervention Details**\\n\\nWe have provided a detailed explanation of intervention data generation in Appendix B.2 under Visualization. To ensure clarity, we added a pointer to this section in the main text. Should we include a brief summary in the main text as well?\\n\\n**W4: References to Work in Causal Representation Learning**\\n\\nThank you for pointing out relevant literature. We have updated the Related Work section to include and discuss [7][8][9][10]. \\n\\n**Q1: Implication of Condition 3**\\n\\nCondition 3 ensures differentiability of the function \\u2018f\\u2019 and \\u2018g\\u2019 for theoretical results involving the rank of the Jacobian. This is a sufficient condition, though we do not believe it is necessary. Our proof (Appendix A.2) relies on the relationship `J_f = J_h(g(x)) J_g(x)`, where `p(z|x) = p(z|g(x))`. While we use leakyReLU in experiments, which is not differentiable, we believe future work can build on our results to relax this condition.\\n\\n**Q 2 & 3:Typo in Eq. (8) and Mismatch with Eq. (6)**\\n\\nWe apologize for the typo in Eq. (8). This has been corrected in the revised manuscript, resolving the mismatch with Eq. (6).\\n\\n**Q4. Caption for Figure 2c**\\n\\nThe caption for Figure 2c has been updated for clarity. 
Please let us know if it remains unclear.\", \"title\": \"Rebuttal by Authors\"}", "{\"summary\": \"Differentiable causal discovery has been a key focus of the causality community in the past years. Despite the advance of representation learning and deep learning, differentiable hierarchical causal discovery with latent variables has been a challenging subfield with at least empirical limited results and limited impact despite the need and call for these methods from practical applications.\\n\\nThe paper proposes a new method and investigates some of the conditions for identifiability. \\n\\nWhile the paper has some very interesting and promising components, I overall can not recommend it for acceptance in its current form.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I really like that the evaluation is not just done with respect to a causal metric but wrt to \\\"a regression classifier trained on the learned representation\\\". If the causality field would move towards the standard evaluation practices of deep learning progress would be faster and this paper is one of the few which actually does perform this evaluation!\\nHowever, when reading the paper in more detail e.g. Table 1 is then again evaluated wrt to discovery metrics only table 2 is evaluated with a learned classifier and arguably table 2 provides only a very limited setting and very limited evaluation. Especially given that these are deep learning approaches, the performance should not even reported in a table but as plots where the x-axis is training time and the y-axis performance. This would account for complexity and cost of training and really allow for a fair comparison of the approaches. \\n\\nWhile it is argued that causal representations lead to better generalizations and transfers this is so far actually not shown in the literature. DomainBed and or [1] clearly state the need for better evaluation and clearer demonstrations of the benefits beyond deriving identifiability results. I am thus really encouraging the authors to significantly extend the ablations and plot train vs performance curves and the performance of the classifier at different stages of training in a larger scale setting and across significantly more datasets. \\n\\n[1] Saengkyongam, Sorawit, et al. \\\"Identifying representations for intervention extrapolation.\\\" arXiv preprint arXiv:2310.04295 (2023).\", \"weaknesses\": \"The key claimed advantage for better identifiability results comes from the fact that instead it is assumed that \\\"not yet account for structures where measured variables have children\\\"\\n\\nThere is some exchangeability of these assumptions and in that sense I agree that the current assumption is a more practical one but it is not a novel one or a clear contribution until a clear relation between the assumptions is shown. \\n\\nThe evaluation is really lacking wrt to datasets and shown clear benefits across different settings. As mentioned I think the authors already take a very valuable step for the community by not only evaluating wrt to discovery metrics (see strengths) but adopting the established evaluation frameworks in deep learning of training a classifier on top of a learned representation. However that evaluation is unfortunately severely limited.\", \"questions\": \"It seems that the baselines are chosen from one lab only i.e. 
Xie et al, Kong et al and Huang et al which are used to sell the method are all from one lab.\\n\\nGiven the number of baselines available for the task that seems a bit strange. Can you please clarify?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for response Continued\", \"comment\": \"***References***\\n\\n[1]Anandkumar, Animashree, et al. \\\"Learning linear bayesian networks with latent variables.\\\" International Conference on Machine Learning. PMLR, 2013.\\n\\n[2]Kummerfeld, Erich, and Joseph Ramsey. \\\"Causal clustering for 1-factor measurement models.\\\" Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.\\n\\n[3]Huang, Furong, et al. \\\"Guaranteed scalable learning of latent tree models.\\\" Uncertainty in Artificial Intelligence. PMLR, 2020.\\n\\n[4]Adams, Jeffrey, Niels Hansen, and Kun Zhang. \\\"Identification of partially observed linear causal models: Graphical conditions for the non-gaussian and heterogeneous cases.\\\" Advances in Neural Information Processing Systems 34 (2021): 22822-22833.\\n\\n[5]Huang, Biwei, et al. \\\"Latent hierarchical causal structure discovery with rank constraints.\\\" Advances in neural information processing systems 35 (2022): 5549-5561.\\n\\n[6]Kong, Lingjing, et al. \\\"Identification of nonlinear latent hierarchical models.\\\" Advances in Neural Information Processing Systems 36 (2023): 2010-2032.\\n\\n[7] Gitter, A., et al. \\\"Unsupervised learning of transcriptional regulatory networks via latent tree graphical models.\\\" arXiv preprint arXiv:1609.06335 (2016).\\n\\n[8] Higgins, I., et al. \\\"SCAN: Learning hierarchical compositional visual concepts.\\\" arXiv preprint arXiv:1707.03389 (2017).\\n\\n[9] Liu, N., et al. \\\"Unsupervised compositional concepts discovery with text-to-image generative models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision (2023).\\n\\n[10] Weinstein, E. N., & Blei, D. M. \\\"Hierarchical Causal Models.\\\" arXiv preprint arXiv:2401.05330 (2024).\\n\\n[11] O'Brien, K. L., et al. \\\"Causes of severe pneumonia requiring hospital admission in children without HIV infection from Africa and Asia: The PERCH multi-country case-control study.\\\" The Lancet 394.10200 (2019): 757-779.\\n\\nPlease let us know if you have further concerns. We highly appreciate this opportunity to exchange opinions with you and learn from your perspective. Please kindly let us know your thoughts, and thank you again for your time and engagement!\"}", "{\"summary\": \"This paper introduces a novel differentiable causal discovery method for latent hierarchical causal models (LHCMs) and derives identifiability conditions of LHCMs in non-linear cases with relaxed assumptions (i.e., no requirement of invertible functions). In the experimental evaluation, the authors show promising results outperforming existing methods on synthetic and image data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"paper is written clearly while keeping a formal discussion of assumptions and theorems\", \"the authors derive and prove their identifiability conditions. 
The proofs look correct after careful checking.\", \"a novel differentiable DAG learner for LHCMs is introduced, allowing differentiable structure learners to be applied in latent variable settings\", \"the experimental section introduces an interesting experiment on image data that demonstrates that the proposed algorithm can be seamlessly integrated into the autoencoder framework, thus allowing learning of LHCMs on complex and unstructured data such as images\"], \"weaknesses\": \"**Section 2**\\n- the authors discuss \\\"differentiable causal discovery\\\" in the related work. However, most (if not all) works referenced here do not perform causal discovery. This has been shown by several works, e.g., [1], [2]\\n\\n**Section 4**\\nWhile the theorems and proofs in Sec. 4 are correct, it is unclear to me whether the identifiable model still allows for a causal interpretation if variable permutations are allowed (Theorem 3). It would be good to clarify which permutations are allowed and why the permutations do not change the causal structure (and thus $d$-separation statements). To illustrate what I mean, consider a LHCM (where observed $X$ are dropped for the sake of simplicity) $Z_2 \\\\leftarrow Z_1 \\\\rightarrow Z_3$. If (any) permutation is allowed, Theorem 3 would also allow for $Z_1 \\\\leftarrow Z_3 \\\\rightarrow Z_2$. However, this model entails different $d$-separation statements and thus has different causal semantics. Hence the causal model would not be identifiable.\\n\\n**Section 5**\\nIt is unclear to me how the acyclicity and overall model structure from Condition (1) (ii) is ensured/reflected in the objective (if at all reflected) (Eq. 10). Based on this, it is not easy to see why the proposed method is not just a structure learner, but a causal discovery method. Could the authors please provide more details on how this is achieved?\\n\\n**Section 6**\\n- Tab. 1: Why do the baselines perform so badly? Is there any specific explanation for that?\\n- Synthetic experiment: How were the ground truth structures chosen? By hand or randomly? If by hand, could the authors explain why and why these?\\n- image experiments: There is the work on causalVAEs [3], why did you not choose this as a baseline? Since it is more related to the overall problem setup of this work than standard VAEs, this baseline would make much sense.\\n\\n# References\\n[1] Reisach et al. Beware of the simulated dag! causal discovery benchmarks may be easy to game. NeurIPS 2021.\\n\\n[2] Seng et al. Learning Large DAGs is Harder Than You Think. ICLR 2024.\\n\\n[3] Yang et al. CausalVAE: Structured Causal Disentanglement in Variational Autoencoder. 2020.\", \"questions\": \"see weaknesses\\n\\n# Additonal Notes\\nNote that I decided on a score of 6 as there is no option 7. If the authors address the points in the weaknesses section accordingly, I'm inclined to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"We sincerely appreciate your time and valuable feedback. With the discussion period ending soon, we hope our responses address your concerns. We understand your busy schedule, but would greatly appreciate it if you could consider our updates when revising your rating and discussing with the AC and other reviewers.\\n\\nThank you again for your thoughtful and constructive input!\"}" ] }
Bon3TPZOG0
Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering
[ "Peng Wang", "Huijie Zhang", "Zekai Zhang", "Siyi Chen", "Yi Ma", "Qing Qu" ]
Recent empirical studies have demonstrated that diffusion models can effectively learn the image distribution and generate new samples. Remarkably, these models can achieve this even with a small number of training samples despite a large image dimension, circumventing the curse of dimensionality. In this work, we provide theoretical insights into this phenomenon by leveraging key empirical observations: (i) the low intrinsic dimensionality of image data, (ii) a union of manifold structure of image data, and (iii) the low-rank property of the denoising autoencoder in trained diffusion models. These observations motivate us to assume the underlying data distribution of image data as a mixture of low-rank Gaussians and to parameterize the denoising autoencoder as a low-rank model according to the score function of the assumed distribution. With these setups, we rigorously show that optimizing the training loss of diffusion models is equivalent to solving the canonical subspace clustering problem over the training samples. Based on this equivalence, we further show that the minimal number of samples required to learn the underlying distribution scales linearly with the intrinsic dimensions under the above data and model assumptions. This insight sheds light on why diffusion models can break the curse of dimensionality and exhibit the phase transition in learning distributions. Moreover, we empirically establish a correspondence between the subspaces and the semantic representations of image data, facilitating image editing. We validate these results with corroborated experimental results on both simulated distributions and image datasets.
[ "diffusion models", "mixture of low-rank Gaussians", "denoising autoencoder", "phase transition" ]
https://openreview.net/pdf?id=Bon3TPZOG0
https://openreview.net/forum?id=Bon3TPZOG0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "geQMLZ9bXH", "fqGDD85G1h", "O5kqi4ImCr", "96ko1n5ToO" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730134454562, 1730497692327, 1732391079246, 1730662254325 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4878/Reviewer_U6rt" ], [ "ICLR.cc/2025/Conference/Submission4878/Reviewer_sKVx" ], [ "ICLR.cc/2025/Conference/Submission4878/Authors" ], [ "ICLR.cc/2025/Conference/Submission4878/Reviewer_kQCq" ] ], "structured_content_str": [ "{\"summary\": \"Diffusion models are the dominant class of image generation models. They can effectively learn the underlying image distribution during training, despite the high dimensionality of the data. This paper offers a theoretical modeling of the image distribution using a mixture of low-rank Gaussian. With this model, the authors attempt to explain the training dynamics and failure points of training with small datasets, as well as offer several insights.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The attempt to model and quantify the training dynamics and data size requirements for diffusion models is noteworthy and possibly impactful.\", \"The motivation behind the MoLRG is intuitive and easy to follow.\", \"The paper is well written, making both the math and the figures accessible.\"], \"weaknesses\": [\"I believe the authors should highlight the paper's contribution. Whether the modeling is justified or not, I believe the paper should contain a meaningful takeaway message. For example, some numerical relationship between the denoiser's jacobian's rank and the number of samples required.\", \"The experiments conducted do not provide sufficient convincing evidence that the chosen modeling is fitting. Moreover, it is unclear what supports the application of the conclusion following Theorem 4 to real data.\", \"I am uncertain why modeling the DAE as a mixture of zero-meaned Gaussian is justified. Assuming that the data is a union of linear subspaces, using low-rank Gaussians is reasonable yet the subspaces do not necessarily coincide at the origin. Could the authors please shed more light on this choice?\", \"The use of the principle components of a Denoiser's jacobian for semantic exploration has been explored in previous work, namely [1]. Also, the connection between the proposed MoLRG and the semantic correspondence of the DAE's jacobian's principle component was unclear.\", \"[1] Hila Manor & Tomer Michaeli (2024). On the Posterior Distribution in Denoising: Application to Uncertainty Quantification. In ICLR 2024.\"], \"questions\": [\"Is it possible to show some meaningful bounds on the gap between the modeling using MoLRG and the real data distribution?\", \"Are the findings in the paper relevant specifically for diffusion models or for generative models in general? If the findings are general, it would strengthen the paper to present similar results across different generative methods. Otherwise, It would be interesting to shed some light on the difference in data size requirements of different generative modeling techniques.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work is about understanding the generalization capabilities of diffusion models from a theoretical perspective. 
The main assumption is that the true underlying data distribution can be approximated by a mixture of low rank Gaussians. Following this assumption, the authors propose an ideal parametrization for denoising networks of diffusion models. Combined with a few more approximations (e.g. hard max counterpart of weight assignments), they propose: 1) denoising is reduced to a sub-space clustering problem (in theorem 3) , and 2) the error in approximating the true sub-spaces is related to sub-space dimensionality and number of data points in training samples (theorem 4).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper raises an important question regarding generalization requirements in diffusion models. So the paper is significant in the sense that it is about a crucial question.\\n2. Extending previous theoretical work from mixture of full rank Gaussians to low rank Gaussians is an important and valuable step, since a mixture of low rank Gaussians is a better model of natural images (and generally real world structured data). \\n3. Additionally, showing empirical results in both toy and real data is a plus. \\n4. Finally, the paper is written clearly and is fairly easy to follow.\", \"weaknesses\": \"1. The main weakness of the paper is that the main limitation of the analysis is not acknowledged clearly enough. This limitation is due to the main assumption of the work: image data lies on a fixed union of low dimensional sub-spaces. Although better than a full-rank Gaussian model, this is still a very crude approximation. As cited by the authors, image data lies on a union of **non-linear** manifolds, which cannot necessarily be accurately approximated by a union of linear manifolds (sub-spaces). The nice linear relationships between N and d will not hold as soon as you have non-linear manifolds. For example, for k =1, you would need N=2 to get a perfect estimate of a one dimensional subspace with a linearity assumption. But as soon as you have a non-linear manifold, depending on the degree of non-linearity you would need larger N.\\nI suggest the presentation should be modified to reflect the limitation of the analysis. Otherwise, the way the paper is written now, sets up the reader for disappointment, as the results do not extend to the real image data as claimed by the authors. Nevertheless, I think the results are valuable regardless of the simplistic assumption. They just need to be upfront about the assumptions. \\n2. Modeling image distribution with a union of sub-spaces is an old idea in signal processing that goes back to wavelet thresholding and later compressive sensing literature. The paper would benefit from citing major papers where these ideas originated. Importantly, the optimal solutions under this kind of assumption has been a very active topic in that area. It would be interesting to see how these results connect to that literature. \\n3. Another assumption made is to approximate the weights with a hard-max operation. This is of course a very hard assumption that results in a lot of error for high noise levels (when noisy image is far from the union of subspaces). Importantly, this assumption simplifies the posterior mean too much, which is counter productive when the goal is to explain diffusion models (where you have large levels of noise). \\n4. There seems to be a confusion in the experimental results presented in Figure 5. As a reader, I expect the experimental result to support the theory. 
However, the generalization results rely on a generalization score that is defined in the appendix. The results show that the generalization as defined in eq 48 is related to N_k/d_k in a sensible way. However, it neither supports nor refutes the results presented in the 4 theorems (main results). So there is a divide between the theoretical results and the empirical results, which are supposed to support the theory.\", \"questions\": \"1. Figure 2 shows changes in the image as a function of moving in the direction of top singular vectors of the Jacobian of the denoiser. The semantic labels are strange, because there are multiple features changing in each column but only one is chosen as a description. The complex variations are of course a reflection of the fact that the manifolds are not linear and it is not trivial to separate the features. Overall, it is not clear what this figure is trying to convey. The actual effect is not consistent with the description.\\n2. In figure 3, for real image datasets, there is a jump at the very low SNR in the estimated rank for 3 of the datasets. What is causing this strange behavior? Similarly for the toy data, the behavior of UNet seems pretty strange and non-monotonic. Is there an intuition or explanation for that? \\n3. In line 364, it's stated that the assumption $U_k^TU_l = 0$ follows from the observation of a disjoint union of manifolds. a) This is another overly strong assumption about images. We know that images share features across different classes and images, so it is not natural to assume this orthogonality between the subspaces. b) Even if we assume this orthogonality, why does this follow from the observation that the manifolds are disjoint? That only refers to the supports being non-overlapping. In the general case, when the manifolds are not mean zero, they can have many directions in parallel and still be disjoint. c) Finally, if this is an assumption you are making, it is inconsistent with the mixture of Gaussian models you show in figure 1, where your orthogonality assumption does not hold. \\n4. In figure 4, it seems like diffusion models are performing worse than the simple PCA models in terms of separating the success and failure cases. Is it clear why? \\n5. I suggest eq 48 and 49 be moved to the main text because they are used to generate figures in the main text.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper suggests that diffusion models can circumvent the curse of dimensionality by clustering to fit the intrinsic dimension, which is in general much lower than the ambient dimension. This is a valuable insight and the authors provided a solid theoretical analysis to support it in a special case. 
However, the paper suffers from two drawbacks: (1) the study of diffusions along subspaces and their study using PCA techniques is not a completely novel idea; (2) the study focuses narrowly on finite sums of low-rank Gaussians centered at the origin (up to a noise), effectively making the search space finite dimensional.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper argues that instead of generating a full-rank diffusion, the diffusion model only needs to solve a subproblem corresponding to the intrinsic dimension of the data distribution. This is a very convincing idea. In addition, in the case of MoLRG (mixture of low-rank Gaussians), the authors mathematically prove the problem is equivalent to a PCA optimization and establish rigorous guarantees of the optimization quality.\", \"weaknesses\": \"Weaknesses:\\n1. It is not a new idea to study behaviors of diffusion models, or generative models in general, along subspaces spanned by leading eigenvectors of the Jacobian. I cannot produce a complete bibliography in this area, but the following papers may be relevant:\\n- Subspace Diffusion Generative Models, Jing et al., ECCV 2022, which developed a new diffusion model that restricts the flow vector field to a linear subspace whose dimension shrinks as t->0 \\n- The Geometry of Deep Generative Image Models and its Applications, Wang & Ponce, ICLR 2021, whose experiments focused on GANs rather than diffusion models, but revealed that generative models identify top eigenspaces of the Jacobian, which capture important perceptually relevant changes. \\n2. The study in this paper assumes the distribution is a sum of finitely many Gaussians centered at 0 with added noises (and even assumed later that they have the same weights). This is an oversimplification as under the current assumption, the task becomes an optimization over finitely many matrices. The real-life diffusion is much more complicated as it tries to identify, instead of linear subspaces, submanifolds or equivalently a tangent linear subspace at each point, which is an infinite-dimensional task. How would the results change for data lying on a low-dimensional non-linear manifold? I guess the complexity of the task would depend on how fast the tangent space changes among nearby points, which is quantified by the curvature of the manifold. It would be interesting to see discussions addressing this aspect.\\n3. This contrast is clearly demonstrated by the current paper's own experiments: under the simplified assumption, only a handful of samples (equal to the rank of the matrix) are needed for decent inference quality while for realistic data thousands of samples are needed (Figure 5a vs Figure 5b). While it is true the ranking of difficulty among different datasets coincides with that of intrinsic dimension, it should be recognized that a great amount of new detail exists in data with higher intrinsic dimensions, and model quality a priori cannot be naively summarized by transferring the analysis on low-rank Gaussians in Theorems 1-4. I think it would be interesting to have more analysis on whether local intrinsic dimension is the only feature that needs to be handled in real-world datasets, in particular, whether the shape and smoothness of the distribution also play a role.\", \"questions\": \"My questions are listed in the weakness section above. 
I would encourage the authors to analyze more carefully the transferability of claims from MoLRG at the origin toward MoLRG at every basepoint.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BomQa84efw
dMel: Speech Tokenization Made Simple
[ "Richard He Bai", "Tatiana Likhomanenko", "Ruixiang ZHANG", "Zijin Gu", "Zakaria Aldeneh", "Navdeep Jaitly" ]
Large language models have revolutionized natural language processing by leveraging self-supervised pretraining on vast textual data. Inspired by this success, researchers have investigated complicated speech tokenization methods to discretize continuous speech signals so that language modeling techniques can be applied to speech data. However, existing approaches either model semantic (content) tokens, potentially losing acoustic information, or model acoustic tokens, risking the loss of semantic (content) information. Having multiple token types also complicates the architecture and requires additional pretraining. Here we show that discretizing mel-filterbank channels into discrete intensity bins produces a simple representation (dMel), that performs better than other existing speech tokenization methods. Using an LM-style transformer architecture for speech-text modeling, we comprehensively evaluate different speech tokenization methods on speech recognition (ASR) and speech synthesis (TTS). Our results demonstrate the effectiveness of dMel in achieving high performance on both tasks within a unified framework, paving the way for efficient and effective joint modeling of speech and text. The code is available at anonymous_url, while generation samples are in the supplementary materials.
[ "speech", "tokenization", "synthesis" ]
Reject
https://openreview.net/pdf?id=BomQa84efw
https://openreview.net/forum?id=BomQa84efw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xtjSmpzYWN", "wZHyZYZPRh", "wWuj6XLPNm", "mK79jaofgW", "m7kXz7ugYt", "l2cfAWCuc7", "k0TtCEqz9r", "hDIpxClULV", "fCAsf4WZ3X", "aAqvlSAJS4", "WUaAiklzIp", "W9wrAQUgD3", "Tpv2R6rqCd", "TREBt4JsNo", "QQFoUmnlG5", "KVGLiqxQqW", "Eu4AoNolye", "CAcaseSZZr", "BjGiHV1mwj", "8OKXzaxR1e", "7zcpYFANwE", "5s23Mc6CtF", "3yD0lvVh9k", "0kl4bk1DCO" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729522635140, 1733051241120, 1729503691044, 1731803015083, 1733309269954, 1732350375880, 1732623475477, 1731801872493, 1731802718494, 1732949322890, 1733642715880, 1731802947191, 1729244705010, 1730673083199, 1731802801805, 1737523678281, 1732330526686, 1732563759865, 1732866756581, 1732438173406, 1730757938831, 1732318402596, 1731802040418, 1732724652346 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_wcjv" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_Wgkg" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_wr2J" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Area_Chair_83cy" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_wr2J" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_qtqg" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_wcjv" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_qtqg" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_qtqg" ], [ "ICLR.cc/2025/Conference/Submission5027/Reviewer_6UMS" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ], [ "ICLR.cc/2025/Conference/Submission5027/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes dMel, a simple method for quantizing Mel spectrograms into discrete units for LM-style decoder-only ASR and TTS. Unlike self-supervised semantic tokens and neural codecs, dMel is parameter- and optimization-free. Experimental results indicate superior ASR and TTS performance compared to prior methods like HuBERT + K-means and SpeechTokenizer.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed dMel mitigates the issues in existing speech tokenizers. First, prior works like self-supervised learning (SSL) based tokenizers require extensive pre-training and sometimes not being able to preserve acoustic details for speech generation and synthesis. Second, neural codecs preserve fine-grained acoustic representations but might not be able to perform ASR and TTS because of the weak correlations between codebooks and frames. 
The authors propose a parameter- and training-free approach to achieve similar ASR and TTS performance.\", \"weaknesses\": \"Despite the success of the dMel method presented in the experiment results, the following issues question its novelty and effectiveness.\\n\\n1) **Bitrate:** \\nBitrate is a crucial metric for comparing different tokenizers in prior studies but is not included in this paper. According to the provided information, dMel@40Hz, HuBERT-KM, and SpeechTokenizer, respectively, have bitrates of 12.8, 0.4, and 4kbps. The huge difference in bitrates might lead to an **unfair comparison**. Moreover, the number of centroids of K-means clustering in HuBERT-KM could be increased since 200 is considered a small codebook size (Table 2), while 500 and larger values are more commonly used in past literature.\\n\\n2) **Baselines:** \\nAdvances in speech tokenization techniques have improved many downstream applications, including ASR and TTS. However, this paper only compares dMel with HuBERT + K-means and SpeechTokenizer, where the K-means method was proposed in 2021 [1]. Also, speech tokenization papers usually consider spoken language modeling a standard evaluation task [2,3,4].\\n\\n3) **Writing:** \\nWriting could be improved with the assistance of writing tools, including LLMs. For instance, from lines 299 to 301, the original text is \\\"From Table 3, we can see that semantic tokenization (HuBERT-KM) is not good for speech reconstruction. Meanwhile, acoustic tokenizers that are optimized to reconstruct the signal directly (EnCodec and SpeechTokenizer) do well.\\\" The sentence is generally clear, but in academic writing, it's often better to use more precise language and avoid subjective terms like \\\"not good\\\" or \\\"do well.\\\" A revised version is \\\"Table 3 shows that semantic tokenization (HuBERT-KM) performs poorly in speech reconstruction, while acoustic tokenizers optimized for direct signal reconstruction (EnCodec and SpeechTokenizer) demonstrate superior performance.\\\"\\n\\n[1] Lakhotia, Kushal, et al. \\\"On generative spoken language modeling from raw audio.\\\" Transactions of the Association for Computational Linguistics 9 (2021): 1336-1354. \\n[2] Gat, Itai, et al. \\\"Augmentation invariant discrete representation for generative spoken language modeling.\\\" arXiv preprint arXiv:2209.15483 (2022). \\n[3] Borsos, Zal\\u00e1n, et al. \\\"Audiolm: a language modeling approach to audio generation.\\\" arXiv preprint arXiv:2209.03143 (2022). \\n[4] D\\u00e9fossez, Alexandre, et al. \\\"Moshi: a speech-text foundation model for real-time dialogue.\\\" arXiv preprint arXiv:2410.00037 (2024).\", \"note\": \"dMel@40Hz bitrate = $40 \\\\times 80 \\\\times \\\\log_2 16 = 12800 = 12.8$kbps\", \"questions\": \"1) What are the hyperparameters for extracting log Mel spectrograms? Window size? Stride?\\n2) Why are the model names \\\"RichASR\\\" and \\\"RichTTS?\\\" Any specific reasons?\\n3) What is the codebook utilization rate or distribution of dMel? The proposed quantization approach divides the intensity into equally-spaced bins. However, a potentially better way is to assign bin sizes according to the data distribution for a uniform codebook utilization.\\n4) Does pre-training the LM with speech-only data help downstream performance? In spoken LM applications, it is common to pre-train the LM on speech tokens with large unlabeled data.\\n5) Are there any decoding techniques involved in RichTTS and RichASR? 
E.g., beam search.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder of rebuttal response\", \"comment\": \"Thank you once again for your thoughtful review and feedback! As we approach the end of the discussion period, we want to ensure that our previous responses have fully addressed all your concerns. If you have any additional questions or unresolved issues that we can clarify, please don\\u2019t hesitate to let us know. We\\u2019re more than happy to assist further!\"}", "{\"summary\": \"This work propose to discretize mel-spectrum into a special kind of intensity bins, which is proved to be a simple representation but more effective than commonly used speech tokenizers (i.e., codec). The authors claim that the newly proposed dMel well carry both acoustic and semantic information within speech signal, without losing information during quantization like codec. Experimental results have proved the effectiveness in tts and asr tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"New idea of using mel spectrum, which is continuous signal, for language modeling. This is different from recently popular codec based TTS.\"], \"weaknesses\": [\"For TTS and ASR evaluation, there are only limited baselines for comparison, more powerful models like vall-e (TTS) and whisper (ASR) should also be included.\", \"The results of dMel are only reported on top of RichTTS and RichASR, experiments on more backbones are expected for better evaluation.\"], \"questions\": [\"For RichTTS and RichASR, what about the implementation details like architecture/training data (compare to speechgpt?)\", \"Is there any open-source plan to support the community?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your insightful review and for recognizing this as `pioneering work that may open a potentially new track of TTS research.` Your understanding of our core motivation - addressing the challenge of capturing both semantic and acoustic information without architectural complexity - perfectly aligns with our research goals.\\n\\nWe appreciate the reviewer bringing MELL-E (https://arxiv.org/abs/2407.08551) to our attention. However, we would like to clarify two important points:\\n\\n* MELL-E was released on July 11, 2024. According to ICLR 2025's policy, this qualifies as concurrent work, as it was published during our paper's preparation period. However, the concurrent emergence of similar ideas from different teams often suggests the research direction is promising\\n* While both works address speech representation, MELL-E employs a more complex architectural approach and operates on continuous mel features. Our method operates on discrete features (opposite direction to continuous features) and prioritizes simplicity and efficiency while achieving competitive performance. 
Also, our work demonstrates broader applicability across both ASR and TTS tasks\\n\\nWe believe these concurrent developments validate the importance of exploring mel-based representations for language model-based speech processing.\"}", "{\"title\": \"New Experiments Speech Synthesis with ultra-low frame rate dMel feature\", \"comment\": \"Dear Reviewers,\\n\\nWe appreciate your unanimous agreement that dMel achieves strong results while reducing downstream model complexity. We are excited to share additional results that further strengthen our claims about dMel's efficiency.\", \"key_new_findings\": \"1. We successfully pushed dMel's frame rate from 40Hz to unprecedented low levels (20Hz, 13.3Hz, and 10Hz) while maintaining competitive performance.\\n2. At 20Hz, our model achieves 5.0 WER on LibriSpeech test-clean, outperforming USLM's 6.5 WER despite using less than half the frame rate (USLM uses 50Hz).\", \"technical_details\": \"Our approach concatenates k frames into k*80 channels while maintaining the same model architecture. This reduces sequence length by factor k:\\n- k=2: 20 frames/second\\n- k=3: 13.3 frames/second\\n- k=4: 10 frames/second\\n\\nWhile our submission used k=1 (predicting 80 channels per step), these new experiments use larger k values, where the model predicts k\\\\*80 channels in each step and each speech embedding is derived from k\\\\*80 dMel features.\\n\\nTo our knowledge, we are the first to successfully operate at such low frame rates (10-20Hz) for speech synthesis using just a vanilla Transformer Decoder\\u2014a simple, well-investigated architecture! This achievement is unprecedented and addresses a key scalability challenge in speech modeling by significantly reducing sequence lengths.\", \"complete_results\": \"| | Feature | # Frames/second | WER | CER |\\n| --- | --- | --- | --- | --- |\\n| VOXTLM (official results) | HuBERT-KM | 50 | - | 3.5 |\\n| USLM (official results) | SpeechTokenizer | 50 | 6.5 | - |\\n| RichTTS (our implementation) | HuBERT-KM | 50 | 9.5 | 4.3 |\\n| RichTTS (our implementation) | SpeechTokenizer | 50 | 11.4 | 5.9 |\\n| RichardTTS | dMel | 40 | 4.3 | 1.8 |\\n| **Our Submission's Results Above / New Low Frame-Rate Results Below** | --- | --- | --- | --- |\\n| RichardTTS | dMel | 20 | 5.0 | 2.2 |\\n| RichardTTS | dMel | 13.3 | 6.8 | 3.9 |\\n| RichardTTS | dMel | 10 | 8.2 | 5.0 |\", \"these_results_further_validate_our_core_contributions\": \"1. dMel's efficiency as an encoder-free, low frame-rate feature\\n2. The effectiveness of our channel-wise feature encoding/decoding design\\n\\nWe understand that these new results may not be considered in the review process given the timing, but we believe they provide valuable additional validation of our approach.\"}", "{\"title\": \"Response - 3\", \"comment\": \"We appreciate your acknowledgment that \\u201ca higher bitrate does not necessarily imply better performances\\u201d. This directly addresses one of your initial rejection reasons about \\\"unfair comparison\\\" due to bitrate differences. We also note that you haven't responded to our clarification about the baseline concern, where we explained that SpeechTokenizer (ICLR 2024) represents a very recent and strong baseline in the field.\\n\\nIn this response, we would like to address your three new concerns that appear to motivate the rejection decision:\\n\\n\\n> 1. 
dMel has higher bit-rate, while many existing works are pursuing low bit-rate, which deviates from the current trend of research.\\n\\n\\nFirst, attempts to achieve ultra-low bitrate do not imply that this is the right direction for all research. Scientific progress often comes from exploring alternative approaches, and higher bit rates may offer valuable trade-offs worth investigating.\\n\\nSecond, the current trend focuses on compression-based tokenization, where compression rate directly affects token rate for Transformer modeling. However, dMel fundamentally differs in its architecture - it operates without a compression model, and our token rate is clearly presented as frame rate in Table 3.\\n\\n\\n\\n> 2. Moreover, it is unclear whether dMel is useful for joint speech recognition and generation (Table 11). Since VOXTLM operates on top of HuBERT units and performs well on ASR and TTS, these units may be more suitable for decoder-only speech LMs.\\n\\n\\nWe must respectfully point out a factual error in the statement \\\"VOXTLM performs well on ASR and TTS\\\". In fact, VOXTLM demonstrates that HuBERT tokens yield poor TTS performance. Our Table 11 shows that dMel maintains strong TTS performance while achieving meaningful ASR results. \\n\\nIn fact, no existing tokenization method has demonstrated SOTA performance for both tasks simultaneously. \\nRecent works [1,2] explicitly highlight the challenges in balancing ASR and TTS performance in a single model. Notably, [2] (EMNLP 2024) achieves superior results through dedicated model design rather than tokenization innovation, using mel-spectrogram instead of codec features. These findings provide no evidence that HuBERT or other units are more suitable than dMel for speech-text models.\\n\\n\\n\\n> 3. Another unaddressed issue is the robustness of dMel. Data-driven methods like self-supervised learning and neural codecs could be easily improved by introducing more diverse data to increase robustness [3,4]. However, it is unknown whether noise and perturbation affect dMel.\\n\\n\\nThe reviewer's concern about robustness actually highlights a key advantage of dMel. While data-driven methods can be improved by introducing more diverse data, they inherently suffer from information loss due to their neural compression nature - they learn to discard information based on training data, which may prove crucial in unseen conditions. In contrast, mel-spectrogram is a physics-based signal representation that:\\n\\n 1. Preserves frequency components through principled transformation - while it does not retain phase information, this aligns with human auditory perception which is primarily sensitive to magnitude spectrum\\n 2. Has demonstrated robust performance over decades of speech processing research\\n 3. Does not require data-dependent training to handle different acoustic conditions\\n 4. Maintains consistent behavior across various noise conditions due to its deterministic nature\\n\\nThis fundamental difference means that while data-driven methods need to be explicitly trained to handle noise and perturbations (potentially missing unknown variations), dMel's robustness is inherent in its signal processing foundation. The reviewer's suggestion of \\\"improving robustness through more diverse data\\\" actually underscores the limitations of data-driven approaches - they need to \\\"learn\\\" what mel-spectrogram already captures by design.\\n\\n\\nWe sincerely appreciate your thoughtful review and the opportunity to clarify these points. 
Your feedback helped us better articulate our method's unique contributions and position in the field. We believe the above clarifications address the core concerns about our work's direction, effectiveness, and robustness. We hope this discussion has been valuable in highlighting dMel's distinct advantages and contributions to the field.\\n\\n```\\n[1] [A Unified Model for Text-to-Speech and Speech-to-Text.](https://dclibrary.mbzuai.ac.ae/mletd/32/)\\n[2][STTATTS: Unified Speech-To-Text And Text-To-Speech Model] (https://aclanthology.org/2024.findings-emnlp.401.pdf)\\n```\"}", "{\"title\": \"keep my rating\", \"comment\": \"Thanks for response, i would like to keep my postive rating\"}", "{\"comment\": \"We thank reviewer for the comments and recognition of our approach. We have thoroughly examined each point and would like to provide detailed responses:\\n\\n## 1. Evaluation with Recent Benchmarks\\n\\n> Evaluate with recent Benchmark such as Codec-Superb and DASB. Compare with more baselines include: SPECTRAL CODECS and SemantiCodec.\\n\\nWe highly appreciate the reviewer for pointing out these relevant work and benchmarks, and we definitely would like to discuss these work in our next revision. However, we believe that these works should be considered as concurrent work, and according to ICLR's concurrent policy [https://iclr.cc/Conferences/2025/FAQ], that paper published within the last 4 months are considered as contemporaneous. Also, paper not published in peer-reviewer proceedings or journals are not required to compare. Therefore, we believe this should not be considered as a main weakness.\\n\\n## 2. Evaluation on Other Audio Domains\\n\\n> Evaluation on other domain audio data includes music and general audio\\n\\nAs indicated in our title and our limitation section, the current scope of this work is primarily on speech. We will consider extending dMel to music and general audio as our future work.\"}", "{\"title\": \"Response to Reviewer wcjv 1/2\", \"comment\": [\"Thank you for your detailed review. We address your concerns and questions below:\", \"## 1. Regarding Bit Rate and Compression\", \"We cannot agree on different bit-rate leading to `unfair comparison` as bit-rates don't necessarily correlate with better downstream task performance. Several work have observed this:\", \"Moshi [1] observed: \\\"Across our experiments, we make the somehow counter-intuitive observation that this gain gets more significant as we lower the bitrate.\\\"\", \"DASB [2] similarly reported: \\\"Interestingly, higher bitrates, when available (e.g., for EnCodec and DAC), tend to degrade performance.\\\"\", \"Another evidence is in VoxtLM [3] paper, they found using k=200 centroids achieved 3.5 CER for TTS but using k=1000 centroids resulted in a higher 6.1 CER for TTS. This also indicates higher bit-rate doesn\\u2019t mean better results.\", \"2. It's important to note that traditional bit-rate comparisons may not be directly applicable to dMel due to its distinct architectural features:\", \"Encoder-free Design: Unlike conventional approaches, dMel operates without an audio compression model, making traditional compression rate metrics less relevant and inaccurate to measure dMel.\", \"Parallel Processing: While dMel utilizes 80 channels per frame, these channels are processed in parallel during both encoding and decoding. 
Therefore:\", \"Computational complexity is primarily determined by frame rate rather than bit-rate\", \"Traditional bit-rate calculations do not accurately reflect the model's efficiency\", \"We respectfully suggest that dMel's performance should be evaluated within the context of its novel modeling approach rather than compression method, where frame rate serves as a more meaningful metric than bit-rate.\", \"We would like to add discussion about the bit-rate and frame-rate in our paper too.\", \"3. Choice of Centroids for HuBERT K-means\", \"Regarding the number of centroids used in our HuBERT k-means implementation, our choice of 200 centroids follows the methodology established in VoxTLM[3], which demonstrated superior performance with this configuration. Specifically, VoxTLM's ablation studies (Table 6) show that:\", \"k=200 centroids achieved 3.5 CER for TTS\", \"k=1000 centroids resulted in a higher 6.1 CER for TTS\", \"This empirical evidence supports our choice of centroid count.\", \"## 2. Regarding Baseline Comparisons\", \"We appreciate the reviewer's comments about baselines, but several points need clarification:\", \"Regarding baseline selection:\", \"SpeechTokenizer is a very recent work (ICLR 2024)\", \"Even Moshi [1], which you cited, builds upon SpeechTokenizer's methodology, acknowledging \\\"inspiration from previous work on SpeechTokenizer\\\"\", \"Regarding Spoken Language Modeling (SLM): The assertion that SLM is a \\\"standard evaluation task\\\" for speech tokenization papers is not accurate. Many significant works do not include SLM evaluation yet:\", \"EnCodec\", \"SpeechTokenizer\", \"Recent works mentioned by reviewer 6UMS (SPECTRAL CODECS, SemantiCodec, APCodec, Single-Codec)\", \"Regarding cited papers:\", \"[4] (AudioLM) doesn't propose new tokenization methods but focuses on modeling existing tokens\", \"[2] focuses on representation robustness for SLM rather than tokenization\", \"[1] (Moshi) was released just one week before submission deadline and it is a 67 pages paper works on both speech tokenization and SLM.\", \"## 3. Regarding Writing\", \"Thanks for spotting places with less formal style in our paper. We will correct accordingly the style, so please let us know if you found any other places. At the same time, we are concerned about feedback on usage LLMs for writing as this could violate the privacy.\", \"```\", \"[1] Moshi: a speech-text foundation model for real-time dialogue\", \"[2] DASB - Discrete Audio and Speech Benchmark\", \"[3] Voxtlm: unified decoder-only models for consolidating speech recognition/synthesis and speech/text continuation tasks.\", \"[4] Borsos, Zal\\u00e1n, et al. Audiolm: a language modeling approach to audio generation.\", \"```\"]}", "{\"comment\": \"Thank you for your detailed feedback. We appreciate your patience and would like to provide further clarification regarding our results and claims.\\n\\nRegarding Table 3, we want to emphasize that our intention was not to claim superiority in speech reconstruction or to present dMel as the state-of-the-art method. Instead, the table serves to demonstrate dMel's fundamental characteristics in preserving speech content. Our results show that dMel achieves comparable reconstruction quality to mel-spectrograms and existing methods, which was our primary objective for this comparison. 
Also, Table 3 serves a crucial purpose: it quantifies dMel's information retention relative to mel-spectrograms, which is essential since mel-spectrograms cannot be directly used for decoder-only downstream tasks. We therefore use Table 3 to measure this difference. In fact, the results in Table 3 provide no basis for claiming dMel's superiority in reconstruction quality: \n- EnCodec has the lowest WER, while dMel's WER is close to it. \n- Mel-HifiGAN and dMel-HifiGAN have better MOS-LQO. \n- SpeechTokenizer leads in MOS scores, while dMel-HifiGAN performs similarly to EnCodec. \n\n\nOur broader claim about dMel's superior performance specifically refers to downstream tasks, not reconstruction quality. The primary strength of dMel lies in its effectiveness as a foundation for generation models such as text-to-speech and speech-to-text generation. This distinction is important, as our focus is on dMel's utility in these downstream applications rather than pure reconstruction performance.\n\nWe have reviewed our manuscript carefully and believe we have not made such claims about reconstruction. However, we acknowledge the importance of clearly stating both the limitations of dMel in reconstruction and the specific purpose of Table 3 in our analysis. We will add this to our manuscript, and we hope these clarifications help present our work's contributions more accurately and address the reviewer's concerns regarding our claims.\"}", "{\"metareview\": \"This paper proposes dMel, a simple method for quantizing Mel spectrograms into discrete units. Unlike self-supervised semantic tokens and neural codecs, dMel is model-free. The authors train a transformer-based language model for speech-text modelling and evaluate their proposed tokenization approach on ASR and TTS. Experimental results indicate superior ASR and TTS performance compared to prior methods like HuBERT + K-means and SpeechTokenizer. Reviewers qtqg and wcjv requested an evaluation under the framework of spoken LM. The authors noted that some spoken tokenization papers also use similar downstream tasks. Both reviewers qtqg and wcjv suggested that the comparison is unfair due to dMel's higher bit rate. The authors argued that a comparison under the same bit rate is not required, but the reviewers remained not fully convinced. The meta-reviewer supports the view that a comparison under a similar bit rate is crucial (please refer to Additional Comments on Reviewer Discussion).\", \"additional_comments_on_reviewer_discussion\": \"The paper demonstrates that dMel performs well, providing sufficient evidence in ASR and TTS compared to previous speech tokenization approaches. The main concern, however, is that dMel has a much higher bit rate than other tokens. Whether a comparison of speech tokenization under the same bit rate is necessary remains a point of contention.\\n\\nBoth the reviewers and the AC consider that bitrate should not be ignored. The downstream models and tasks used by the authors are insufficient to justify that bitrate can be overlooked, as it is expected that high-bitrate methods have an advantage under the current setting. \\n\\n\\n=== Below is the opinion of AC ===\\n\\nDo we need to consider the bitrate when designing a speech tokenization approach? Let's revisit the rationale for using discrete units or speech representation learning in downstream tasks. The original signal contains more information than its quantized, compressed, or encoded version. 
If sufficient downstream training data and a sufficiently capable downstream model are available, using the original data outperforms the compressed version, making quantization or representation learning unnecessary. This is also demonstrated in this paper, where continuous mel-spectrograms outperform dMel in both TTS and ASR tasks.\\n\\nThe research on tokenization or compression aims to find better speech representations that allow for less data or smaller downstream models while still achieving strong performance on downstream tasks. From this perspective, the best way to evaluate a speech representation or tokenization approach is to test it under various low-resource settings. However, not all papers evaluate models in this comprehensive manner. Instead, researchers often use bitrate as a proxy. The assumption is that representations with higher bitrates are more complex, and simpler representations are preferred because more complex ones tend to overfit more easily. This assumption is not always accurate, as other factors, such as the amount of data available for downstream tasks, also play a role. This explains why, in general, higher bitrates tend to lead to better performance (see Codec-SUPERB: https://arxiv.org/abs/2402.13071). However, exceptions to this trend have also been noted, as mentioned in the rebuttal.\\n\\nHowever, this does not mean that bitrate can be disregarded during comparisons. If we want to disregard bitrate, the authors should verify that the proposed approach performs well in low-resource scenarios (e.g., with less training data or smaller downstream models). Overall, I support the reviewers' view that the consideration of bitrate cannot be ignored.\"}", "{\"comment\": \"We sincerely appreciate the reviewer noting our work's soundness and contribution, particularly in recognizing the novelty of our mel-spectrum approach. We've carefully considered your feedback and would like to address each point comprehensively:\\n\\n\\n## 1. Baseline Comparisons\\nWhile we understand the interest in Whisper/VALL-E comparisons, there are fundamental methodological reasons these were not included:\\n\\n * Neither is open-source, preventing reproducible research. Whisper uses proprietary, undisclosed training data.\\n * Such comparisons wouldn't validate our core scientific contribution\\n * Our focus is on advancing open, reproducible speech research\\n\\n## 2. On Architecture Experiments:\\n\\nWe apologize if Table 10's extensive architecture experiments weren't sufficiently highlighted, but we reported results across multiple standard architectures in Table 10:\\n * Transformer Decoder (most popular architecture)\\n * CTC-based models\\n * Sequence-to-sequence (encoder-decoder) models (whisper falls into this type of models, but was trained on ~400x more data, so we actually compare with the family of models whisper is based on)\\n\\nOur experiments demonstrate dMel's effectiveness across multiple standard architectures in a reproducible setting with open data.\\n\\n## 3. Response to questions\\n\\n> For RichTTS and RichASR, what about the implementation details like architecture/training data (compare to speechgpt?)\\n\\nThe architecture is introduced in Table12. The training data is LibriSpeech for most of the Tables except Table 5 and 6, as we introduced in Section 3.1. Table 5 and 6 are using different datasets to compare the results with different models. 
\\n\\n> Is there any open-source plan to support the community?\\n\\nWe will definitely open-source our full codebase and we are undergoing necessary steps to release full code.\"}", "{\"summary\": \"This work propose to solve the problem of codec: it is hard for one codebook to cover both semantic and acoustic information, but multiple codebook will complicate the architecture and require additional pretraining. Therefore, this work proposes to discretize mel-filterbank channels into discrete intensity bins, which produces a simple representation that outperforms existing speech\\ntokenization methods.\\n\\nI believe this is a pioneering work that may open a potentially new track of TTS research --> use continous mel to replace discrete codec in lm base TTS.\", \"one_question\": [\"How is it compared to another similar work MELL-E (https://arxiv.org/abs/2407.08551) that also use continous mel tokens for lm based TTS?\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"see above\", \"weaknesses\": \"see above\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents dMel, an encoder-free speech tokenizer that simplifies speech tokenization by discretizing log mel-filterbank outputs into discrete intensity bins, eliminating the need for complex encoding architectures. Unlike previous tokenization methods that separate semantic and acoustic information, dMel maintains both in a single, unified representation. The tokenization process reduces precision of each filter output per frame while retaining the essential information needed for high-quality speech resynthesis, achieved by leveraging pre-trained vocoders. Additionally, the paper explores the application of dMel in language model (LM)-style training for both automatic speech recognition (ASR) and text-to-speech (TTS) tasks. The results demonstrate that dMel performs comparably or better than existing methods in preserving semantic content and reconstructing natural-sounding audio. This efficient, unified approach to speech tokenization facilitates streamlined ASR and TTS training, advancing joint modeling of speech and text.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea of quantizing mel spectrogram as tokenization is interesting and simple (in a good way).\", \"Results on TTS and ASR show dMel quantization has a small impact on models trained on continuous representation, training downstream models on top of dMel also provided similar results to their continuous counterparts. These observations are interesting, showcasing the generalizability of dMel.\", \"Overall, I believe dMel is much more efficient in terms of model size and inference speed comparing to existing speech tokenizers (but this part is not well evaluated in the experiment section, see weaknesses).\"], \"weaknesses\": [\"As a speech tokenization paper, this work lacks a discussion on the overall bit rate for compression besides frame rate. Especially in the comparison with the prior works (e.g., Table 3). dMel is over 12.8kbps~5kbps (assuming 40 fps $\\\\times$ 32 mel filters $\\\\times$ 4 bit-per-filter)~, which is higher than Hubert-KM and Speech Tokenizer.\", \"This paper spent most of the space discussing ASR & TTS systems based on dMel. 
While the numbers are good, it is still not as good as a normal mel spectrogram (which is expected). This makes the content of the paper somewhat sparse, which is the biggest weakness in my opinion. The current paper seems to only suggest dMel is a spectrogram quantization approach, as it is essentially lowering numerical precision and showing the distortion is minimal on vocoder, ASR, and TTS. It would be more interesting to involve some other studies, for example:\", \"Efficiency-related studies, such as how the encoder-free and lightweight-decoder design of dMel can speed up or lower memory usage downstream applications.\", \"Applications where speech tokenization matters more, e.g., spoken LM [1,2], would better justify whether dMel can be viewed as a good speech tokenization approach.\", \"These scenarios/experiments would all be more suitable (than just plain ASR/TTS WER/MOS) for assessing the value of dMel. (I would like to note that these are not concerns that are expected to be addressed during the ICLR rebuttal period, they can be viewed as suggestions for future version of the paper)\", \"[1] https://arxiv.org/pdf/2102.01192\", \"[2] https://arxiv.org/abs/2410.00037\"], \"questions\": \"- Is there a fundamental difference between finite scalar quantization (FSQ; [1]) and dMel's quantization?\\nIf not, I think the FSQ paper should be acknowledged.\\n\\n[1] https://arxiv.org/pdf/2309.15505\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer wcjv 2/2\", \"comment\": \"## Response to the Questions:\\n\\n> What are the hyperparameters for extracting log Mel spectrograms? Window size? Stride?\\n\\nThe stride size is 1000/frame_rate. For our 40HZ feature, the stride size is 25ms, the window size is 50ms. We have introduced these in the caption of Table 10.\\n\\n> Why are the model names \\\"RichASR\\\" and \\\"RichTTS?\\\" Any specific reasons?\\n\\nWe named our model informally because of internal reasons, not because of any scientific reason - sorry if that caused confusion.\\n> What is the codebook utilization rate or distribution of dMel? The proposed quantization approach divides the intensity into equally-spaced bins. However, a potentially better way is to assign bin sizes according to the data distribution for a uniform codebook utilization.\\n\\nWe ablated dMel uniform binning with a percentile-based discretization method, which computes bin boundaries based on channel-specific statistics from the LibriSpeech training data. The latter showed competitive but slightly inferior performance compared to our proposed method and we have left this for future exploration.\\n> Does pre-training the LM with speech-only data help downstream performance? In spoken LM applications, it is common to pre-train the LM on speech tokens with large unlabeled data.\\n\\nWhile speech-only pre-training could be valuable future work, our current focus is specifically on:\\n * Demonstrating effective mel feature discretization\\n * Establishing a decoder-only architecture for conditional sequence generation, where speech is generated from text input (TTS) or text is generated from speech input (ASR)\\n\\nOur strong ASR/TTS results demonstrate these core contributions are effective without additional pre-training. 
Future work could explore both pre-training for multi-task scenarios and unconditioned generation tasks.\\n\\n> Are there any decoding techniques involved in RichTTS and RichASR? E.g., beam search.\\n\\nThanks for pointing this out. We use top-p (p=0.95) sampling instead of beam search for RichTTS, while no beam search is used for RichASR, where simple greedy decoding is applied. We will add these important implementation details to our manuscript.\"}", "{\"title\": \"Response to Reviewer qtqg - 2\", \"comment\": \"We appreciate your timely response and constructive feedback regarding the bit rate discussion. You raise an important concern over complexity, and we would like to clarify how dMel's higher bit rate impacts practical performance, hoping to address your concerns.\\n\\nWhile bit rate is important for reconstruction fidelity, the computational complexity of handling tokenization is more critical. This involves three aspects:\\n\\n* Audio-to-Token Complexity: dMel is parameter-free, unlike low bitrate tokenizers that require trained models (Table 3)\\n* Token Modeling Complexity: dMel only requires a reshaping operation and an optional linear transformation layer (we detailed this below), with softmax over a small vocabulary of 16 tokens. 
In contrast, low-bit rate tokenizers typically require both autoregressive and non-autoregressive models for primary codes and residual codes, plus larger vocabularies that significantly increase embedding parameters and softmax computations.\\n* Token-to-Audio Complexity: Our TTS experiments utilize a lightweight 1M parameter open-source vocoder, leveraging years of research in mel-to-audio conversion.\\n\\n\\n\\n> Frame rate does not reflect the number of bins used for quantization. The precision of quantization obviously matters (as shown in Table 9).\\n\\n\\nYes, we agree it matters for reconstruction fidelity. However, Table 9 demonstrates that increasing bins can actually worsen TTS error rates. \\n\\n\\n> For downstream applications, bit rate is at least equally as frame rate, since it directly impacts the architecture, complexity, and learning dynamics (e.g., number of prediction heads for the token, loss components, etc.) of the downstream model.\\n\\n\\n\\nYes, we partially agree: considering high bitrate for dmel, this only leads to marginal increase in complexity. \\n\\nIn fact, our model didn\\u2019t increase the number of prediction heads or changing the loss components, but with standard prediction head and cross-entropy loss operations with one additional dimension. We are not sure if this concern is from assuming our model needs 80*16 embeddings or not. But we think it is important to restate:\\n\\n* All 80 channels share the same vocabulary embeddings (16 total embeddings, not 80*16. Single prediction head, not 80 heads)\", \"implementation_requires_only\": \"* Reshaping operation from (80*x) \\u2192 (80, x) (this is the last layer\\u2019s hidden states tensor of Transformer decoder)\\n* Softmax over the reshaped tensor\\n* Standard cross entropy loss operations with one additional dimension\\n* Optional linear layer when hidden states aren't divisible by 80\\n\\nSo, a reshaping operation and an optional linear operation is the cost difference compared to vanilla Transformer. This is simpler than low bit-rate compression tokens requiring separate models for different code components (coarse-codec autoregressive and fine-codec non-autoregressive transformers).\\n\\nFrame rate remains the primary complexity driver, hence our emphasis on evaluating dMel in the frame rate context.\\n\\n\\n> At the very least, I believe bitrate should be disclosed explicitly when compared to prior works on speech reconstruction (as in those audio codec papers cited by this work) and discussed as a limitation or in the paper.\\n\\n\\nYes, we agree with your point. Through the discussion here, we admit we need to clarify the bit-rate of dMel and how it affects the comparison and downstream applications.\"}", "{\"comment\": \"I would like to thank the authors for their further clarification.\\n\\n> Yes, we agree it matters for reconstruction fidelity. However, Table 9 demonstrates that increasing bins can actually worsen TTS error rates.\\n\\nTo clarify, my point is **bitrate does matter for speech tokenization methods, and it should be preferred over frame rate for fair comparison**. This is to contradict the authors' original claim that *\\\"frame rate serves as a more meaningful metric than bit-rate\\\"*. The fact that dMel with 8 bins (9.6kbps) is significantly worse than 16 bins is strong evidence. \\n\\n**Comparing dMel against other methods by only showing the frame rate is unfair**. 
In fact, if we add bitrate to Table 3, it immediately reveals that dMel is only slightly better (or even worse) than SpeechTokenizer, while the latter is at a significantly lower bitrate. It is unfair to claim that dMel *``performs better than other existing speech tokenization methods''* for this very reason.\\n\\nAs for the observation *``increasing bins can actually worsen TTS error rates\\\"*, it is not something that is relevant to comparing fairly against other methods, but more of dMel's own special property.\\n\\nFor downstream applications, I am satisfied with the authors' responses.\\n\\nWhile dMel has its own value in other aspects (e.g., training-free encoder, less dependencies between codes, etc.), I believe some of the key arguments in this paper are invalid, and some of the comparisons against prior works are unfair as pointed out above. Hence, I would like to keep my initial rating.\\n\\np.s. In Table 2, bit-rate should be bitrate, and kps should be kbps.\"}", "{\"comment\": \"I would like to thank the authors for their response.\\n\\n> Recent research has demonstrated that higher bit rates don't necessarily correlate with better downstream task performance.\\n\\nYes, but my concern is that dMel is having **higher** bit rate than the baselines it's being compared against.\\n\\n> We respectfully suggest that dMel's performance should be evaluated within the context of its novel modeling approach rather than compression method, where frame rate serves as a more meaningful metric than bit-rate. We would like to add discussion about the bit-rate and frame-rate in our paper too.\\n\\nWhile I can understand the claim that frame rate is more important than bit rate for dMel, I don't fully agree with the author \\n- Frame rate does not reflect the number of bins used for quantization. The precision of quantization obviously matters (as shown in Table 9).\\n- For downstream applications, bit rate is at least equally as frame rate, since it directly impacts the architecture, complexity, and learning dynamics (e.g., number of prediction heads for the token, loss components, etc.) of the downstream model.\\n- At the very least, I believe bitrate should be disclosed explicitly when compared to prior works on speech reconstruction (as in those audio codec papers cited by this work) and discussed as a limitation or in the paper.\"}", "{\"summary\": \"The paper introduces a novel approach to speech tokenization by discretizing mel-filterbank channels. This method effectively preserves both semantic and acoustic information, offering an interpretable, model-free representation grounded in the raw acoustic space. 
The authors train a transformer-based language model for speech-text modeling and evaluate their proposed tokenization approach on speech recognition (ASR) and speech synthesis (TTS) tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method is efficient, as it avoids hierarchical dependencies among mel-spectrogram channels, allowing for independent modeling of each channel within each frame using a straightforward, decoder-only (LM-style) transformer architecture.\", \"The approach is robust, simple yet innovative, with comprehensive evaluations that support the design choices.\", \"The encoder operates independently of the decoder, unlike many other tokenizers, making it compatible with any vocoder that accepts mel-spectrogram inputs.\", \"A detailed analysis of the setup is provided to enhance reproducibility.\", \"The paper is well-written and easy to follow, with a comprehensive analysis included.\"], \"weaknesses\": [\"The evaluation could be more thorough by incorporating existing benchmarks such as Codec-Superb and DASB, allowing for a more comprehensive comparison of the proposed method against existing models under standardized settings.\", \"The related works section could be expanded to include methods that use frequency domain inputs, such as those discussed in the following papers:\", \"https://arxiv.org/pdf/2406.05298\", \"https://arxiv.org/pdf/2201.09429\", \"https://arxiv.org/pdf/2405.00233\", \"https://arxiv.org/pdf/2402.10533\", \"https://www.arxiv.org/pdf/2406.07422\", \"While Hubert-KM, Encodec, and Speech Tokenizer are reasonable baselines, it would be beneficial to include additional baselines with more similar setups, such as SPECTRAL CODECS (https://arxiv.org/pdf/2406.05298) or SemantiCodec (https://arxiv.org/pdf/2405.00233), for a fuller assessment.\", \"The proposed model is only evaluated on speech data, leaving other domains, such as general audio and music, unexplored.\"], \"questions\": \"refer to weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up on Rebuttal Discussion\", \"comment\": \"Dear ICLR Reviewers,\\n\\nI hope this message finds you well. As the discussion period comes to an end, we look forward to receiving your assessment of our rebuttal for paper 5027 and how the clarifications address your initial concerns.\\n\\nYour considered evaluation of the rebuttal will help ensure a thorough review process. Please let us know if any aspects would benefit from additional clarification from our side.\\n\\nThank you for your continued engagement with our work.\\n\\nBest regards,\\nAuthors of Submission 5027\"}", "{\"comment\": \"We sincerely appreciate your thorough review and recognition of our novel contributions. Below we address your key concerns:\\n\\n## 1. Regarding Bit Rate and Compression\\n\\nYou raise an important point about bit rate discussion. While dMel operates at approximately 12.8 kbps (40fps \\u00d7 80 mel filters \\u00d7 4 bits/filter), we'd like to highlight several important considerations:\\n\\n1. Recent research has demonstrated that higher bit rates don't necessarily correlate with better downstream task performance. 
For instance:\\n\\n * Moshi [1] observed: \\\"Across our experiments, we make the somehow counter-intuitive observation that this gain gets more significant as we lower the bitrate.\\\"\\n * DASB [2] similarly reported: \\\"Interestingly, higher bitrates, when available (e.g., for EnCodec and DAC), tend to degrade performance.\\\"\\n\\n1. It's important to note that traditional bit-rate comparisons may not be directly applicable to dMel due to its distinct architectural features:\\n * Encoder-free Design: Unlike conventional approaches, dMel operates without an audio compression model, making traditional compression rate metrics less relevant and inaccurate to measure dMel, since the encoders participate in the compression scheme of the other models.\\n * Parallel Processing: While dMel utilizes 80 channels per frame, these channels are processed in parallel during both encoding and decoding. Therefore:\\n 1. Computational complexity is primarily determined by frame rate rather than bit-rate\\n 2. Traditional bit-rate calculations do not accurately reflect the model's efficiency\\n\\nWe respectfully suggest that dMel's performance should be evaluated within the context of its novel modeling approach rather than compression method, where frame rate serves as a more meaningful metric than bit-rate.\\nWe would like to add discussion about the bit-rate and frame-rate in our paper too.\\n\\n## 2. Regarding Content and Contributions\\n\\nWe appreciate your feedback about the paper's focus on ASR & TTS systems. We would like to emphasize three key aspects that demonstrate the broader impact of our work:\\n\\n* Novel Architecture: While mel features have been extensively studied, our work is the first to investigate modeling them with a decoder-only architecture. The dMel + decoder combination represents a fundamental architectural innovation in the field.\\n* Superior TTS Performance: Contrary to the expectation that our approach might underperform traditional mel spectrograms, Table 5 demonstrates that RichTTS (dMel + TransformerDecoder) achieves lower WER compared to popular open-source mel-based TTS models including VITS, FastSpeech2, and Tacotron2. \\n* Innovative Applications: By aligning TTS model design with Language Model architectures, dMel enables the application of numerous LM techniques to speech synthesis, including:\\n * Speculative decoding\\n * KV-caching\\n * Multi-turn capabilities\\n * And potentially many more\\n\\nWe will revise the manuscript to better emphasize these broader implications and their potential impact on the field.\\n\\n[1] Moshi: a speech-text foundation model for real-time dialogue https://arxiv.org/abs/2410.00037\\n\\n[2] DASB - Discrete Audio and Speech Benchmark https://arxiv.org/abs/2406.14294\\n\\n\\n## 3. Questions\\n>Is there a fundamental difference between finite scalar quantization (FSQ; [1]) and dMel's quantization? If not, I think the FSQ paper should be acknowledged.\\n\\n\\nWe recognize that our quantization method shares some similarities with FSQ, but also differs in key aspects:\\n\\n 1) FSQ employs scalar quantization in a learned latent code space, primarily focusing on image data modeling. In contrast, our work pioneers the successful application of scalar quantization technique to the original raw mel-frequency band (mel-fb) space, serving as a training-free speech tokenization approach.\\n 2) FSQ necessitates an additional bound operation to limit the range of the latent codes. 
Moreover, the dimensionality of the code in FSQ is considerably smaller (\\u22646) compared to ours (80). This higher dimensionality is crucial for maintaining speech quality across a broader frequency range.\\n\\nWe will make sure to cite the FSQ paper appropriately in our revised manuscript to acknowledge this prior work. Our primary contribution is not the quantization method itself, but rather:\\n\\n* Successfully applying it to mel-frequency features for speech discretization\\n* Developing an integrated architecture that enables decoder-only speech modeling\\n* Demonstrating strong empirical results across multiple downstream tasks\"}", "{\"title\": \"Rebuttal Revision\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your thorough engagement with our work. We have carefully revised our submission based on your valuable feedback, with all changes highlighted in blue.\\n\\nOur approach represents a significant departure from conventional methods, which necessitated extensive discussion of several key comparisons, particularly regarding bit-rate metrics and their impact on model complexity. While our method is less dependent on bit-rate considerations, we acknowledge its importance in the literature and have maintained a comprehensive discussion of these metrics to facilitate meaningful comparisons with existing approaches.\\n\\nWe remain grateful for your constructive feedback and are committed to further improving our paper to meet the high standards of the conference.\\n\\nBest regards,\\nAuthors of Submission 5027\"}" ] }
BoXyYnpUTh
Chinese Inertial GAN for Writing Signal Generation and Recognition
[ "Yifeng Wang", "Yi Zhao" ]
Disabled people constitute a significant part of the global population, deserving of inclusive consideration and empathetic support. However, the current human-computer interaction based on keyboards may not meet the requirements of disabled people. The small size, ease of wearing, and low cost of inertial sensors make inertial sensor-based writing recognition a promising human-computer interaction option for disabled people. However, accurate recognition relies on massive inertial signal samples, which are hard to collect for the Chinese context due to the vast number of characters. Therefore, we design a Chinese inertial generative adversarial network (CI-GAN) containing Chinese glyph encoding (CGE), forced optimal transport (FOT), and semantic relevance alignment (SRA) to acquire unlimited high-quality training samples. Unlike existing vectorization focusing on the meaning of Chinese characters, CGE represents the shape and stroke features, providing glyph guidance for GAN to generate writing signals. FOT establishes a triple-consistency constraint between the input prompt, output signal features, and real signal features, ensuring the authenticity and semantic accuracy of the generated signals and preventing mode collapse and mixing. SRA constrains the consistency between the semantic relationships among multiple outputs and the corresponding input prompts, ensuring that similar inputs correspond to similar outputs (and vice versa), significantly alleviating the hallucination problem of generative models. The three modules guide the generator while also interacting with each other, forming a coupled system. By utilizing the massive training samples provided by CI-GAN, the performance of six widely used classifiers is improved from 6.7% to 98.4%, indicating that CI-GAN constructs a flexible and efficient data platform for Chinese inertial writing recognition. Furthermore, we release the first Chinese writing recognition dataset based on inertial sensors in GitHub.
[ "Inertial Sensors", "Handwriting Recognition", "Signal Generation", "Human-Computer Interaction", "Disabled People" ]
Reject
https://openreview.net/pdf?id=BoXyYnpUTh
https://openreview.net/forum?id=BoXyYnpUTh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zMQv3xdfxW", "wCh9cc6nNe", "vKnP0KJHP7", "tZfz8RlFCJ", "s6a7I8GXR7", "pOivXez0oM", "kVcaHjbZfN", "jS6eizIcD6", "hsUi9chisS", "h8foLNg1au", "gxQlhwsrU3", "d5CkRQoZmH", "buBIeJgr4l", "b5fvouWKUi", "ZlWKLJ54Yn", "Y452WYbCW9", "Uuem7O5blG", "TvEmJ4AjED", "TthCcuztUc", "FERImB6Lpk", "F9WPRMSL5G", "Ae17YhZ3KI", "9eou5DimSr", "9Aah6x3Spj", "5uyP62Kn3n", "4BnY67B6Rk", "2yuvaYYtF2" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review" ], "note_created": [ 1732713992406, 1731942697998, 1732016766359, 1731855964948, 1732611321755, 1733045025351, 1732631202510, 1731679308175, 1730446844506, 1732785736074, 1732858049471, 1730615345247, 1732966585246, 1732199213566, 1731589856107, 1732468116581, 1733150447202, 1730629793370, 1732280409639, 1733139474196, 1730721057043, 1733053390210, 1730552467027, 1732971019534, 1733119136437, 1737523773852, 1734701485081 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_Kb21" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_fQCS" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_RmES" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_a2VT" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_FZkk" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_fQCS" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Reviewer_RmES" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Submission6512/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6512/Area_Chair_7K8f" ] ], "structured_content_str": [ "{\"title\": \"Humbly Requesting Your Feedback on Our Revisions\", \"comment\": \"I hope this message finds you well. In response to your comments, we provided a detailed point-by-point reply, addressing each concern thoroughly. We believe it is crucial to highlight that CI-GAN, as a generative model for inertial sensor handwriting signals, is fundamentally different from classification models and datasets designed for visual or pen-based handwriting recognition. Comparing CI-GAN to such methods or datasets would result in a methodological misalignment. 
This key distinction underscores the novelty and specificity of our contribution to this field.\\n\\nAdditionally, we clarified that our work fills a critical gap by creating the first IMU dataset of Chinese handwriting and demonstrated CI-GAN's effectiveness through rigorous experiments and comparisons with established augmentation techniques. Given the absence of comparable generative models for IMU signals, our approach represents a foundational step in this area. We hope our replies provide the necessary context to address your concerns comprehensively.\\n\\nSince submitting our revisions, we have been anxiously awaiting your feedback. This work represents years of dedicated research by our team, and your recognition and support are invaluable to us. We humbly and earnestly request that you kindly review our responses at your convenience. Your endorsement would mean a great deal to us, and we deeply appreciate your time and understanding.\"}", "{\"comment\": \"1. **For W1:** Thank you for your insightful suggestion regarding the introduction of the concept of inertial data earlier in the manuscript. In response, we have revised the paper to include a clear and concise explanation of the advantages of inertial sensors and their applications in IMU-based human-computer interaction systems right at the beginning. This sets a strong foundation for understanding the study's context. We kindly invite you to review the updated manuscript and look forward to your valuable feedback.\\n\\n2. **For W2:** We have revised the contributions summary to clarify the scope of our research, ensuring it aligns accurately with the study\\u2019s focus and avoids any potential misinterpretation.\\n\\n3. **For W3:** Thank you for your feedback. In response to your suggestion, we have streamlined the description of CGE in Section 3.1 to enhance readability and reduce complexity. At the same time, we want to emphasize that CGE is fundamentally different from a standard embedding. Unlike traditional embeddings that primarily encode semantic meanings, CGE captures the glyph-specific features of Chinese characters, such as shape, structure, and writing strokes, by leveraging the inherent relationship between the character glyph and its writing motion recorded by inertial sensor signals. Additionally, the R\\u00e9nyi entropy-based regularization we designed ensures that the encoding vectors are orthogonal and maximally informative, which not only strengthens the quality of glyph representations but also provides a generalizable mechanism that could benefit other representation learning tasks. This innovative approach goes beyond conventional embeddings, making CGE a key contribution of our framework. We kindly invite you to review the revised section.\\n\\n4. **For Q1:** In our original ablation experiment, removing CGE meant eliminating both two parts. 
To address your concern, we have conducted an additional ablation experiment where the first part (converting one-hot encoding into dense features) is retained, while only the second part (GER) is removed.\\n|Ablation Model|1DCNN|LSTM|Transformer|RF|XGBoost|SVM|\\n|-|-|-|-|-|-|-|\\n|No augmentation|0.87%|2.6%|1.7%|4.9%|1.2%|6.7%|\\n|w/o all (Base GAN)|18.5%|14.8%|15.7%|12.4%|20.5%|8.4%|\\n|w/ OT|26.4%|28.6%|27.3%|21.0%|30.9%|20.9%|\\n|w/ FOT|39.9%|38.0%|35.3%|31.9%|46.8%|27.3%|\\n|w/ CGE|54.6%|51.2%|47.9%|38.6%|57.5%|34.1%|\\n|w/ CGE (w/o GER)|35.7%|32.1%|30.9%|33.8%|41.1%|29.0%|\\n|w/ CGE (w/o GER)+SRA|61.4%|58.1%|60.2%|51.0%|59.9%|45.2%|\\n|w/ CGE (w/o GER)+FOT|59.6%|55.2%|54.0%|53.4%|58.3%|47.5%|\\n|w/ CGE+SRA|84.9%|77.4%|86.8%|61.4%|68.9%|56.1%|\\n|w/ CGE+FOT|80.7%|80.5%|80.9%|57.2%|70.4%|59.5%|\\n|w/ CGE+FOT+SRA (CI-GAN)|95.7%|93.9%|98.4%|83.5%|93.1%|74.6%|\\n\\n The results, now included in the revised manuscript, demonstrate the significant impact of GER on the performance of the framework. Specifically, we observe that retaining the dense feature transformation without GER still improves performance over the baseline GAN, but the lack of regularization results in noticeably lower effectiveness compared to using CGE with GER fully enabled. This confirms the critical role GER plays in enhancing glyph encoding by ensuring orthogonality and maximizing the information entropy of the encoding vectors.\\n\\n5. **For Q2:** The pre-trained VAE and CGE serve fundamentally different roles in the framework and cannot be substituted for one another. The VAE is designed to extract features from inertial sensor signals, focusing on capturing signal-specific characteristics. In contrast, CGE is designed to encode the categorical features of Chinese characters.\\nIn essence, the VAE operates on the signal space, learning to represent the temporal and motion characteristics of IMU data, while CGE works in the character space, embedding class-level information that distinguishes one glyph from another.\\n\\n6. **For Q3:** $h_T$ represents the **real signal feature**, $h_G$ denotes the **generated signal feature**, $e$ is the **glyph encoding** derived from the CGE module, which encodes glyph-related features. During training, $h_T$ is extracted from real IMU signals in the dataset using the pre-trained VAE, providing the ground-truth feature representation for supervising the generator. The Forced Feature Matching (FFM) loss aligns $h_T$, $h_G$, and $e$, ensuring that the generated signals reflect both the motion dynamics of real IMU data and the glyph-specific semantics of the target character.\\n\\nThank you so much for your insightful and constructive feedback. It\\u2019s clear that you have a deep understanding of the field, and the detailed suggestions you provided have been incredibly helpful in improving the quality of our work. Your recognition is extremely important to us, and we truly appreciate the thought and effort you\\u2019ve put into reviewing our paper.\"}", "{\"comment\": \"1. **For W1:** Thank you for your constructive feedback on the visualization and clarity of the diagrams. In response, we have significantly improved Figure 1 to provide a more systematic and intuitive representation of our framework. 
The updated diagram now explicitly defines the key tasks and symbols, ensuring that the roles of each component, such as CGE, GAN, FOT, and SRA, are visually clear and aligned with their descriptions in the text.\\n\\ufeff\\n Additionally, we have enhanced the representation of abstract concepts like constraints and regularization techniques by incorporating detailed annotations and visual cues. For example, we illustrate how FOT mitigates mode collapse and mode mixing with specific examples, and the semantic alignment enforced by SRA is clearly depicted to highlight its interaction with other components. We sincerely appreciate your suggestion and kindly invite you to review the revised figures and look forward to your feedback.\\n\\n2. **For W2:** Unlike images, where the quality of generation can often be assessed visually, it is challenging to determine whether generated time-series signals are realistic or semantically correct. This necessitates the use of strong constraints like FOT to ensure the quality, diversity, and semantic accuracy of the generated signals. FOT achieves this by forcibly aligning the glyph encoding features, generated signal features, and real signal features using the Wasserstein distance. This alignment ensures both semantic correctness and motion fidelity in the generated signals, effectively mitigating mode collapse and mode mixing.\\n\\n To further address your concern, we have added a new section in the appendix titled **Mathematical Explanation of FOT for Preventing Mode Collapse**. In this section, we provide a rigorous mathematical derivation to demonstrate how FOT mitigates mode collapse. Briefly, FOT preserves the diversity of generated signals by penalizing incomplete mode coverage and mode mixing. We kindly invite you to review this section, which we believe offers a solid theoretical foundation for the effectiveness of FOT in addressing this critical issue. \\n\\n3. **For W3:** To address your concern, we conducted additional experiments to thoroughly evaluate the system's robustness under external disturbances, specifically by introducing varying levels of Gaussian noise to the real inertial signals during training. The Gaussian noise was added at proportions of 0.0%, 5.0%, 10.0%, and 20.0% of the original signal's standard deviation to simulate sensor inaccuracies and environmental interference. Under different levels of Gaussian noise added to the real inertial signals, we trained CI-GAN models to generate 15,000 IMU signals for each noise setting. These generated signals were then used to train six classifiers (1DCNN, LSTM, Transformer, RF, XGBoost, and SVM), and their classification accuracy was evaluated using 5-fold cross-validation. The results, presented in the table below, reflect the accuracy of the classifiers under varying noise conditions.\\n|Noise ratio|1DCNN|LSTM|Transformer|RF|XGBoost|SVM| \\n|-|-|-|-|-|-|-|\\n|0.0%|95.7%|93.9%|98.4%|83.5%|93.1%|74.6%|\\n|5.0%|95.2%|94.1%|98.0%|82.9%|93.3%|71.8%|\\n|10.0%|94.5%|92.3%|97.1%|81.7%|92.6%|70.7%|\\n|20.0%|93.9%|92.5%|95.9%|79.8%|91.0%|69.4%|\\n\\n These results demonstrate that the system maintains high performance even under significant noise levels. While performance slightly decreases with higher noise ratios, the overall degradation is minimal. 
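For concreteness, below is a minimal sketch of the noise-injection protocol described above. The array shape (T time steps by C sensor channels), the per-channel computation of the standard deviation, and all function and variable names are our own illustrative assumptions, not details taken from the CI-GAN implementation.

```python
import numpy as np

def add_gaussian_noise(signal, ratio, rng=None):
    """Perturb one IMU recording with zero-mean Gaussian noise whose standard
    deviation is `ratio` times the per-channel standard deviation of the clean
    signal; `signal` is assumed to be an array of shape (T, C)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = signal.std(axis=0, keepdims=True)        # shape (1, C): one std per channel
    noise = rng.normal(0.0, 1.0, size=signal.shape) * (ratio * sigma)
    return signal + noise

# The 0.0% / 5.0% / 10.0% / 20.0% settings reported in the table above.
clean = np.random.randn(200, 6)                      # placeholder sample: 200 frames, 6 IMU channels
noisy_sets = {r: add_gaussian_noise(clean, r) for r in (0.0, 0.05, 0.10, 0.20)}
```
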
This robustness is attributed to the combined contributions of Glyph Encoding Regularization (GER), Forced Optimal Transport (FOT), and Semantic Relevance Alignment (SRA).\\n CGE introduces a regularization term based on R\\u00e9nyi entropy, which is the first embedding targeted at the shape of Chinese characters rather than their meanings, providing rich semantic guidance for generating handwriting signals. \\n FOT establishes a triple-consistency constraint between the input prompt, output signal features, and real signal features, ensuring the authenticity and semantic accuracy of the generated signals and preventing mode collapse and mixing.\\n SRA constrains the consistency between the semantic relationships among multiple outputs and the corresponding input prompts, ensuring that similar inputs correspond to similar outputs (and vice versa), significantly alleviating the hallucination problem of generative models.\\n Together, these components ensure the system's resilience to external disturbances and its capacity to generate realistic and accurate signals under challenging scenarios.\\n\\nWe sincerely thank you for your thoughtful and constructive feedback, which has greatly helped us improve the rigor and robustness of our work. Your recognition would mean a great deal to us, and we truly hope that our revisions meet your expectations.\"}", "{\"comment\": \"1. **For W1:** Thank you for pointing out the inconsistencies and omissions in the original Figure 1. We have unreservedly revised Figure 1 according to your suggestions. Specifically, \\\"CGE\\\" is now correctly labeled instead of \\\"GER,\\\" the \\\"SRA\\\" module has been visually represented to provide a complete overview of the framework, and all modules now include both their full names and abbreviations. We kindly invite you to review our updated manuscript.\\n\\n2. **For W2&Q2:** Thank you for your suggestion. We have provided a rigorous mathematical proof demonstrating how FOT imposes strong constraints in the feature space to effectively mitigate the mode collapse problem in GANs. Due to page limitations in the main text, we included this proof in Appendix D. We kindly invite you to review the supplemental mathematical analysis and sincerely hope that this revision addresses your concerns.\\n\\n3. **For W3&Q3:** Thank you for your valuable feedback regarding the ablation studies. In response, we have expanded our experiments to include all possible combinations of CGE, FOT, and SRA, thoroughly exploring their individual and collective contributions. It is worth noting that SRA relies on input semantics and therefore must be used alongside CGE in this framework. 
\\nAs shown in the results, the addition of each module consistently improves performance across all tested base models, regardless of which modules are already included, demonstrating that each module contributes unique and complementary strengths to the framework.\\n|Ablation Model|1DCNN|LSTM|Transformer|RF|XGBoost|SVM|\\n|-|-|-|-|-|-|-|\\n|No augmentation|0.87%|2.6%|1.7%|4.9%|1.2%|6.7%|\\n|w/o all (Base GAN)|18.5%|14.8%|15.7%|12.4%|20.5%|8.4%|\\n|w/ OT|26.4%|28.6%|27.3%|21.0%|30.9%|20.9%|\\n|w/ FOT|39.9%|38.0%|35.3%|31.9%|46.8%|27.3%|\\n|w/ CGE|54.6%|51.2%|47.9%|38.6%|57.5%|34.1%|\\n|w/ CGE (w/o GER)|35.7%|32.1%|30.9%|33.8%|41.1%|29.0%|\\n|w/ CGE (w/o GER)+SRA|61.4%|58.1%|60.2%|51.0%|59.9%|45.2%|\\n|w/ CGE (w/o GER)+FOT|59.6%|55.2%|54.0%|53.4%|58.3%|47.5%|\\n|w/ CGE+SRA|84.9%|77.4%|86.8%|61.4%|68.9%|56.1%|\\n|w/ CGE+FOT|80.7%|80.5%|80.9%|57.2%|70.4%|59.5%|\\n|w/ CGE+FOT+SRA (CI-GAN)|95.7%|93.9%|98.4%|83.5%|93.1%|74.6%|\\n\\n We believe these additional experiments provide a clearer understanding of the contributions of CGE, FOT, and SRA. Thank you again for your insightful suggestions. We kindly invite you to review the updated manuscript.\\n\\n4. **For W4&Q1:** We appreciate your suggestion and would like to clarify the positioning of our study and its broader applications. Our work primarily focuses on the development of a robust IMU signal generation algorithm, which can produce a large volume of high-quality inertial signals. These generated signals enable IMU-based human-computer interaction systems, offering significant advantages for accessibility, particularly for individuals with visual impairments. For example, by facilitating natural handwriting interactions, our algorithm has already contributed to the production of devices designed specifically for visually impaired users.\\nThat said, **aiding disabled individuals is just one of many application scenarios for our algorithm**. By enabling the generation of diverse and high-quality IMU handwriting signals, it supports the development of IMU-based systems in education, digital handwriting analysis, and personalized training for handwriting recognition algorithms. These applications demonstrate the algorithm\\u2019s potential to revolutionize human-computer interaction by providing high-fidelity motion data.\\n\\ufeff\\n To better reflect this breadth, and considering the comments from reviewer a2VT, we have revised the manuscript to introduce the advantages of inertial sensors and IMU-based human-computer interaction systems at the very beginning. In this context, we present the example of assisting disabled individuals as a representative case, highlighting it as one key motivation but not the sole focus of our study.\\n\\nWe sincerely thank you for your valuable feedback, and we hope that our revisions have adequately addressed your concerns. Your recognition is truly invaluable to us and we look forward to your response and further insights.\"}", "{\"title\": \"Humbly Requesting Your Feedback on Our Revisions\", \"comment\": \"I hope this message finds you well. First and foremost, we truly appreciate the time and effort you have dedicated to reviewing our manuscript.\\n\\nThe moment we received your comments, our team immediately began working tirelessly to address every concern with the utmost care. We worked tirelessly, refining the manuscript, conducting additional experiments, strengthening mathematical foundation and revising figures, all with the hope that you could see our responses as soon as possible. 
For us, even a few seconds earlier felt meaningful, as it might allow you to review our efforts sooner.\\nSince submitting our detailed responses and revised manuscript, we have been anxiously awaiting your feedback and we are confident that our response meets the high standards of this esteemed conference. Your endorsement would mean the world to us, not only as an affirmation of our work but also as a driving force for our continued efforts in this field.\\n\\nWe humbly and earnestly request that you kindly review our revisions at your earliest convenience. Your input is invaluable, and we deeply appreciate your understanding and consideration.\"}", "{\"title\": \"We Worked Tirelessly to Address Your December 29th Comments, but Manuscript Updates Closed After December 27th, So We\\u2019ve Attached the Modifications\", \"comment\": \"After receiving your review on December 29, our entire team immediately began working on the experiments you requested and comparing them with the Diff-Writer method you recommended. Although Diff-Writer is designed to generate handwriting trajectory data, while our task focuses on generating inertial sensor signals, we recognized the difference in data modalities. Despite this, our team worked tirelessly overnight to adapt Diff-Writer to our task, successfully completing the experiments and revising the manuscript accordingly, with additional citations included.\\n\\nHowever, we discovered that the ICLR paper update channel closed on December 27, and we did not receive your review until December 29. Unfortunately, this meant that we were unable to incorporate these updates. To facilitate your review of our changes, we have included the key modifications and experimental results below for your consideration. This work represents years of effort from our team, and we sincerely hope to gain your approval.\\n\\nConsidering the character limitations, we have attached some of the modifications below for your review, as well as for the review of other reviewers and the conference chair:\\n\\n*Due to the lack of deep learning-based augmentation methods in the sensor field, we introduced the diffusion model-based approach for generating handwriting trajectory, named Diff-Writer [Ren et al., 2023]. Although this approach generates trajectory point sequences rather than the sensor signals required in our study, its ability to produce high-quality and diverse handwriting data makes it highly valuable. We adapted this method through modifications and retraining, enabling its application to our inertial signal generation task for a meaningful comparison. As shown in Table 3, Diff-Writer significantly outperforms all baseline methods except for our CI-GAN, showcasing its strength as a learning-based approach for generating handwriting data. However, as Diff-Writer was not designed for generating inertial sensor signals, it struggles to fully capture the motion dynamics and semantic fidelity required for this task. Consequently, there remains a considerable gap between its performance and that of our CI-GAN, which achieves superior accuracy across all classifiers by addressing the unique challenges of inertial signal generation.*\\n\\n\\n### Table 3. 
Comparison of Data Augmentation Methods for Inertial Signal Generation\\n| Data Augmentation Methods | 1DCNN | LSTM | Transformer | RF | XGBoost | SVM |\\n|-|-|-|-|-|-|-|\\n| **Cropping** [Yue et al., 2022] | 15.7% | 9.1% | 7.7% | 12.8% | 16.3% | 9.6% |\\n| **Noise Injection** [Audibert et al., 2020] | 17.3% | 11.9% | 12.2% | 8.5% | 13.8% | 10.1% |\\n| **Jittering** [Flores et al., 2021] | 20.1% | 13.0% | 14.4% | 9.7% | 17.4% | 7.5% |\\n| **APP** [Chen et al., 2021] | 22.3% | 13.6% | 19.7% | 19.0% | 25.1% | 16.3% |\\n| **AAFT** [Lee et al., 2022] | 32.1% | 20.7% | 25.4% | 27.5% | 35.9% | 19.2% |\\n| **Wavelet** [Wang et al., 2024] | 19.9% | 12.1% | 10.6% | 13.8% | 22.6% | 9.5% |\\n| **EMD** [Otero et al., 2022] | 24.4% | 17.1% | 20.9% | 17.9% | 23.4% | 12.2% |\\n| **CutMix** [Yun et al., 2019] | 21.9% | 14.8% | 15.5% | 14.7% | 18.9% | 13.1% |\\n| **Cutout** [Devries et al., 2017] | 25.6% | 16.4% | 16.9% | 18.5% | 27.1% | 16.6% |\\n| **RegMixup** [Pinto et al., 2022] | 41.5% | 27.8% | 36.8% | 38.4% | 45.9% | 30.3% |\\n| **cGAN** [Douzas et al., 2018] | 18.5% | 14.8% | 15.7% | 12.4% | 20.5% | 8.4% |\\n| **Diff-Writer** [Ren et al., 2023] | __71.3%__ | __65.9%__ | __78.7%__ | __58.9%__ | __62.5%__ | __53.3%__ |\\n| **CI-GAN (ours)** | **95.7%** | **93.9%** | **98.4%** | **83.5%** | **93.1%** | **74.6%** |\"}", "{\"title\": \"Humbly Requesting Your Feedback on Our Response\", \"comment\": \"I hope this message finds you well. First and foremost, thank you for recognizing the clarity, motivation, and experimental contributions of our paper.\\n\\nDatasets such as IAHCC-UCAS2016 and CASIA-OLHWDB are based on vision-based and pen-based systems, respectively, and are inherently incompatible with the inertial sensor signals that CI-GAN is specifically designed to address. As there are currently no publicly available datasets for inertial sensor handwriting signals, our work fills this gap by creating a framework that can generate a theoretically unlimited number of high-quality synthetic samples, providing a foundation for further research in this field. Overall, the absence of any publicly available inertial sensor handwriting datasets further underscores the novelty and necessity of our contribution.\\n\\nWe sincerely hope that our detailed responses adequately address your concerns. We humbly request that you kindly review our revisions at your convenience. Thank you so much for your time and understanding.\"}", "{\"comment\": \"Thank you for your recognition of our work, especially your positive feedback on the motivation, experiments, and writing of our paper! Regarding the two weaknesses you mentioned, we provide the following responses:\\n\\n1. **Weaknesses:** The dataset is relatively small and lacks comprehensive coverage of the Chinese character set.\\n\\n **Response:** We would like to clarify that collecting high-quality inertial sensor (IMU) handwriting signals is inherently challenging due to the nature of IMU data. Unlike visual data, IMU signals are continuous time-series waveforms, making it difficult to segment and label individual characters without auxiliary tools like optical tracking devices. 
Despite these challenges, we successfully collected a dataset of 4500 IMU signal samples, covering \\u201cCommonly Used Chinese Characters List\\u201d published by the Chinese government.\\n\\n While our dataset does not encompass every Chinese character, our model has learned the structural relationships and shape patterns between different character, as evidenced by the t-SNE visualizations where characters with similar structures and stroke patterns cluster closely together. As most complex Chinese characters are constructed by combining simple elements, our work represents a foundational step from 0 to 1 in this field. \\n\\n2. **Weaknesses:** The proposed method should be tested on other public available benchmarks with other SOTA methods, such as IAHCC-UCAS2016 and CASIA-OLHWDB.\\n\\n **Response:** Our CI-GAN is designed to generate handwriting signals captured by **inertial sensors**. However, the IAHCC-UCAS2016 dataset and the CASIA-OLHWDB dataset were collected by entirely different sensors. The IAHCC-UCAS2016 dataset relies on the Leap Motion, a **vision-based optical device** that captures hand trajectories in the air, while the CASIA-OLHWDB dataset uses an Anoto **digital pen** to record pen-tip trajectories data on specialized paper. Given the fundamental differences in data modalities, these datasets are not suitable for evaluating CI-GAN tailored for inertial sensor signal generation. \\n\\n In fact, there is currently no publicly available handwriting dataset based on inertial sensors, highlighting a significant gap in this field. Our work addresses this gap by creating CI-GAN to generate an infinite number of inertial-sensor-based handwriting signals.\\n\\nThank you for your insightful comment. We hope we have addressed your concerns.\"}", "{\"summary\": \"The paper presents an innovative approach to addressing the limitations in performance of inertial sensor-based systems for Chinese character recognition, which traditionally rely on extensive manual data collection. By introducing a Chinese Inertial Generative Adversarial Network (CI-GAN), the study offers a solution that generates unlimited, high-quality training samples, thereby providing a flexible and efficient data support platform for classification models. This method significantly reduces the dependency on labor-intensive data gathering and enhances the overall performance and feasibility of using inertial sensors for HCI in the context of disability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"**Clear Motivation and Feasible Approach**: The paper is driven by a well-defined goal\\u2014to improve HCI for disabled individuals using inertial sensors. 
The proposed solution, a generative adversarial network (GAN) for data generation, is not only innovative but also practically feasible, as evidenced by the experimental results.\", \"**Innovation and High Performance**: By introducing advanced techniques of CGE, FOT and SRA, the study significantly enhances the recognition accuracy of Chinese characters, with performance improvements reported from 6.7% to 98.4%.\", \"**Social Impact and Community Contribution**: The research addresses significant accessibility issues for disabled individuals and adds substantial value to the community by releasing the first Chinese writing recognition dataset based on inertial sensors, enabling further advancements in the field.\"], \"weaknesses\": [\"**Visualization and Clarity of Diagrams**: The diagrams in the paper could be improved for better systematic representation and intuitiveness. Visualizing abstract constraints and regularization techniques more clearly would aid in understanding the complex interactions within the model. The task and symbols need a more detailed defination to improve the understanding.\", \"**Detailed Justification of Model Constraints**: The paper could be improved by including more detailed exploration of the motivations and effectiveness of using specific constraints such as the Forced Optimal Transport (FOT). A deeper discussion on why aligning input stroke encoding features with generated signal features and real signal features; and why utilizing Wasserstein distance as regularization can mitigate mode mixing and mode collapse is necessary to validate the approach.\", \"**Analysis of Robustness Under External Disturbances**: The paper lacks a thorough analysis of the system's robustness in the presence of external disturbances. Detailed insights into how these factors affect the system and recommendations for enhancing robustness would strengthen the paper.\", \"These points should be addressed to enhance the overall comprehensibility and impact of the research.\"], \"questions\": \"As shown in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' efforts to address most of my concerns. I have decided to maintain a relatively positive rating.\"}", "{\"comment\": \"The CI-GAN is used to generate Chinese IMU signal data, which is difficult to collect. Then, the author demonstrates the quality of the generated data by improving the performance of the classifier.\\n\\nThe author clarifies that IMU signal data collection is inherently challenging and considerably more complex than collecting visual data. Is IMU-based handwriting recognition of practical value? For instance, users need to wear additional sensors, which brings extra burden and inconvenience.\\n\\nThe experiments provided are based on authors\\u2019 own dataset. Whether real or generated samples, both the number of categories (the primary Chinese character set has 3755 categories, but the author only collected 4500 samples, with 1500 for training and 3000 for testing) and the data scale make it difficult to consider as a suitable evaluation environment.\\nAdditionally, the classifiers compared (Tables 2, 3, and 4) are not accompanied by relevant references. What is the specific structure of these classifiers, and can they be considered representative methods for evaluation? 
Are these the best classification methods available?\\n\\nIf the author conducts tests on similar publicly available time-series datasets, the generative method could be objectively assessed. Alternatively, comparing with other generative methods (e.g., [1]) could demonstrate the effectiveness of your approach if other generators perform worse. Furthermore, generating data for an existing public dataset and achieving SOTA results, with clear improvements after adding your generated data through CI-GAN, would validate the effectiveness of your method. Unfortunately, neither of these approaches was seen in your responses or manuscript.\\nTherefore, before addressing these concerns, it is difficult for us to dismiss doubts regarding the significance and value of this work.\\n\\n\\n[1] Ren M S, Zhang Y M, Wang Q F, et al. Diff-Writer: A Diffusion Model-Based Stylized Online Handwritten Chinese Character Generator[C]//International Conference on Neural Information Processing. Singapore: Springer Nature Singapore, 2023: 86-100.\"}", "{\"summary\": \"This paper introduces CI-GAN, a generative adversarial network for Chinese writing recognition using inertial sensors, designed to aid disabled individuals. CI-GAN incorporates Chinese glyph encoding, forced optimal transport, and semantic relevance alignment to generate accurate signals. With these synthetic signals, classifier accuracy improved from 6.7% to 98.4%. The study also releases the first Chinese inertial sensor dataset for writing recognition, advancing accessible human-computer interaction.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe application of research has significant potential and creativity about addressing accessibility needs for disabled individuals.\\n2.\\tThe proposed dataset contributes valuable inertial sensor data for Chinese writing. And the research introduces a novel method using GAN for data augmentation, effectively addressing data scarcity and enhancing handwriting recognition research.\\n3.\\tThe experimental results show promising improvements in classifier accuracy.\", \"weaknesses\": \"1.\\tThe concept of inertial data is introduced only in Section 4.2, making it somewhat difficult to understand when mentioned in the earlier parts of the paper. It is recommended to provide a brief introduction to this concept earlier on.\\n2.\\tThe first point in the summary of contributions mentions that it \\\"provides new tools for the study of the evolution and development of pictograms,\\\" which may not be suitable for the contributions summary, as it seems the research does not cover this aspect.\\n3.\\tThe description of CGE mentioned in Section 3.1 seems to be just an embedding? In my opinion, the current version of the introduction may be somewhat complex.\", \"questions\": \"1.\\tAccording to my understanding, CGE can be divided into two parts: 1. converting one-hot encoding into dense features, and 2. using \\u03b1-order R\\u00e9nyi entropy regularization in GER. Therefore, in the ablation study in Section 4.4, what specific configuration is being ablated when CGE is removed? Which part of these two components is being eliminated? Additionally, can this ablation experiment validate the effects of the glyph encoding regularization (GER) proposed in Section 3.1?\\n2.\\tWhat is the difference between the pre-trained VAE mentioned in Section 3.2 and CGE in Section 3.1? It seems that both can extract glyph features. 
Can VAE replace CGE?\\n3.\\tWhat are h_G, h_T, and e in Section 3.2? It seems that e comes from the GAN input, h_G comes from the GAN output, but where does h_T come from during training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Requirements Completed Through Tireless Efforts\", \"comment\": \"1. **Response for Practicality of IMU-Based Handwriting Recognition:** Thank you for your comment. As we've emphasized in the paper and previous responses, IMU-based handwriting systems are portable, lightweight, and resilient to environmental factors such as lighting and occlusions, making them ideal for a wide range of real-world scenarios. They can be easily integrated into wearable devices, providing an intuitive and seamless interaction, especially for users with visual impairments. However, the challenge arises during the dataset creation phase, where we need to segment the real IMU signals to accurately match them with the corresponding writing motions for training classifiers. This is where our CI-GAN comes into play, as it helps generate high-quality inertial signals by learning the relationship between writing motions and sensor data, significantly reducing the complexity of the dataset creation process.\\n\\n2. **Response for Comparison Methods:** It is important to note that the field of inertial sensor signal generation currently lacks established, widely available methods, and many image-based augmentation techniques are not directly applicable to this domain. Despite these challenges, we have adapted and applied over 10 recent, influential data augmentation methods to the field of inertial sensor signal generation, thereby creating a comprehensive comparison.\\n **Importantly, we have included the Diff-Writer method, which you recommended, in our comparison. Diff-Writer significantly outperforms all comparison methods except for our CI-GAN, highlighting its strength as a learning-based approach. However, since Diff-Writer was not designed for generating inertial sensor signals, it struggles to fully capture the motion dynamics and semantic fidelity required for this task. As a result, there remains a gap between its performance and that of CI-GAN, which excels in generating accurate and realistic IMU signals by addressing the unique challenges of inertial signal generation.**\\n\\n Although Diff-Writer generates trajectory point sequences rather than sensor signals, our team has made substantial efforts to adapt and apply it to the task of sensor signal generation. We retrained and modified the model to accommodate our specific requirements, demonstrating the versatility of the Diff-Writer. Additionally, we have cited all the comparative methods and relevant literature, ensuring that our work is positioned within the current state of research. We kindly invite you to review our updated manuscript.\\n\\nIn summary, while publicly available inertial sensor datasets are limited, we have made every effort to demonstrate the effectiveness of CI-GAN through thorough comparisons with existing methods. Our entire team worked tirelessly, without sleep, to adapt the trajectory generation method you recommended to our inertial sensor signal generation task. We are confident that these experiments will address your concerns and demonstrate the significant value of our work. 
Your feedback and recognition are deeply important to us, and we eagerly await your review.\"}", "{\"comment\": \"We sincerely thank the reviewers and the conference chair for their valuable feedback and thoughtful consideration of our paper. First, we want to clarify that collecting handwriting samples of Chinese characters is not easy. During data collection, volunteers wrote different Chinese characters continuously. We had to accurately locate the signal segments corresponding to each character from long signal streams, as shown in APPENDIX. B. **However, accurately segmenting and extracting signal segments requires synchronizing optical motion capture equipment and then comparing the inertial signals frame by frame with the optical capture results to find all character signal segments' starting and ending frames.** Therefore, we expended significant time and effort to obtain 4,500 signal samples in this paper, establishing the first Chinese handwriting recognition dataset based on inertial sensors, which we have made open-source partially. By contrast, our CI-GAN can directly generate handwriting motion signals according to the input Chinese character, eliminating the complex processes of signal segmentation, extraction, and cleaning, as well as the reliance on optical equipment. We believe it provides an efficient experimental data platform for the field.\\n\\nUnlike the fields of CV and NLP, many deep learning methods have not yet been applied to the sensor domain. More importantly, unlike image generation, where the performance can be visually judged, **it is challenging to identify semantics in waveform by observation and determine whether the generated signal fluctuations are reasonable, which imposes high requirements on generative model design.** Therefore, we had to design multiple guidance and constraints for the generator, resulting in the design of Chinese Glyph Encoding (CGE), Forced Optimal Transport (FOT), and Semantic Relevance Alignment (SRA).\\n\\n* CGE introduces a regularization term based on R\\u00e9nyi entropy, which increases the information content of the encoding matrix and the distinctiveness of class encodings, providing a new category representation method that can also be applied to other tasks. As far as we know, this is the first embedding targeted at the shape of Chinese characters rather than their meanings, providing rich semantic guidance for generating handwriting signals.\\n* FOT establishes a triple-consistency constraint between the input prompt, output signal features, and real signal features, ensuring the authenticity and semantic accuracy of the generated signals and preventing mode collapse and mixing.\\n* SRA constrains the consistency between the semantic relationships among multiple outputs and the corresponding input prompts, ensuring that similar inputs correspond to similar outputs (and vice versa), significantly alleviating the hallucination problem of generative models. Notably, the June 2024 **Nature** paper \\\"Detecting Hallucination in Large Language Models Using Semantic Entropy,\\\" published after we released our paper, shares a similar idea with our proposed SRA. They assess model hallucination by repeatedly inputting the same prompts into generative models and evaluating the consistency of the outputs. Their approach essentially forces the model to produce similar outputs for similar prompts. 
Our SRA not only achieves this but also ensures that the relationships between prompts are mirrored in the relationships between the outputs. This significantly reduces hallucinations and enhances the model's practicality and stability.\\n\\nCGE, FOT, and SRA not only guide and constrain the generator but also interact with each other, as shown in Section 3.4. The Chinese glyph encoding not only provides semantic guidance to the generator but also supplies the necessary encoding for FOT and SRA, and it is also supervised in the process. FOT and SRA share the VAE and generated signal features, providing different constraints for the generator, with FOT focusing on improving signal authenticity and enhancing the model's cognition of different categories through the semantic information injected by CGE, thereby mitigating mode collapse and mode mixing. In contrast, SRA ensures consistency between the relationships of multiple outputs and prompts through group-level supervision, which helps alleviate the hallucination problem of generative models.\\n\\nIn summary, the three modules proposed in CI-GAN are innovative and interlinked, significantly enhancing the performance of GANs in generating inertial sensor signals, as evidenced by numerous comparative and ablation experiments. **This method is a typical example of deep learning empowering the sensor domain and has been recognized by the industry and adopted by a medical wearable device manufacturer.** It has the potential to become a benchmark for data augmentation in the sensor signal processing field. We sincerely hope we have addressed the concerns of the reviewers, and once again, we thank everyone for their review and suggestions for this paper.\"}", "{\"comment\": \"Thank you for your review of our paper. We have distilled your comments into four key points and addressed each one individually.\\n\\n1. **Review Comment:** Comparing CI-GAN with other high-performing classification models, such as in reference [1].\\n\\n **Response:** CI-GAN is a generative model, not a classification model. CI-GAN can generate high-quality inertial measurement unit (IMU) signals for Chinese handwriting recognition, but can not classify or recognize handwriting characters. Therefore, comparing a **generative model** to **classification models** may be a methodological misunderstanding. We acknowledge that the classification and recognition methods you suggested are excellent, and we will cite these references to highlight their contributions. However, as a generative model, CI-GAN is fundamentally different from classification models, making a direct comparison with them highly impractical.\\n\\n2. **Review Comment:** CI-GAN generation effect can be verified on other open source datasets of Chinese character data, such as IAHCC-UCAS2016, CASIA-OLHWDB, and ICDAR 2013.\\n\\n **Response:** The IAHCC-UCAS2016, CASIA-OLHWDB, and ICDAR 2013 datasets are used for handwriting recognition tasks, based on visual or pen-tip trajectory data. CI-GAN is designed for generating IMU handwriting signals rather than recognizing them, so it would be challenging to evaluate a generative model on a classification dataset. \\n\\n3. **Review Comment:** The dataset collected is small in scale, with data from only nine individuals and without full coverage of the complete Chinese character set.\\n\\n **Response:** We would like to clarify that IMU signal data collection is inherently challenging and considerably more complex than collecting visual data. 
Handwriting signals are continuous, and therefore each segment corresponding to a specific character must be extracted from the continuous stream of handwriting signals. For images, videos, or pen trajectories, such segmentation is relatively straightforward due to visual cues. However, for IMU signals, which are time-series waveforms, it is extremely difficult to identify the start and end points of each character segment visually, requiring auxiliary optical equipment for precise annotation. \\n\\n We invested significant time and effort to obtain the 4,500 handwriting signals presented in this study, creating the first IMU dataset of Chinese handwriting that covers the official set of commonly used characters in China. This effort underscores the value of our CI-GAN model, which eliminates the need for such labor-intensive annotation by directly generating IMU signals for each Chinese character. This dataset sufficiently supports the training of our generative model for the IMU signal generation task, and our experiments have validated the practical effectiveness of CI-GAN on this scale and quality of data.\\n\\n4. **Review Comment:** Comparing the CI-GAN with other generative models.\\n\\n **Response:** It is important to highlight that this study is the first to propose a generative deep learning model for generating IMU handwriting signals. CI-GAN is specifically designed to address the unique challenges of IMU handwriting signal generation, with tailored modules and optimization strategies to suit IMU data. In the field of IMU signal generation research, there is currently no precedent or comparable model, meaning that no readily available generative model exists for direct comparison.\\n\\n Therefore, we conducted a thorough comparison of CI-GAN with twelve commonly used data augmentation methods spanning five major categories. This comprehensive evaluation demonstrates the rigor of our approach and the scientific validity of our results. While you suggested that we use Diffusion models or other image-based generative models for comparison, these models were originally designed for image data. Applying them directly to IMU signal generation would involve technical and theoretical misalignment. Adapting such image-based generative models to IMU signal generation would require substantial modifications to both model structure and algorithms, along with re-training and validation on IMU data. This adaptation alone would justify an entirely new research paper, extending far beyond the scope of our current study.\\n\\nIn conclusion, we sincerely hope that our replies address your concerns satisfactorily. Your understanding and recognition of our efforts are of utmost importance to us.\"}", "{\"title\": \"Humbly Seeking Your Feedback on Our Submission\", \"comment\": \"I hope this message finds you well. I apologize for reaching out again, but it has now been two weeks since the discussion phase began, and I have not yet received any feedback from the reviewers. This prolonged silence has left us feeling quite anxious about the progress of our submission.\\n\\nThe moment we received the initial reviews, my team and I dedicated ourselves wholeheartedly to addressing every concern and suggestion. We worked tirelessly, even overnight, to conduct additional experiments and revise the manuscript with utmost care. 
Our only hope was that the reviewers could see our responses as soon as possible, even a few seconds earlier, as each second feels like it might bring this paper closer to a positive resolution.\\n\\nWe humbly and earnestly request you to kindly review our response and revisions, and provide your thoughts at your convenience. Thank you very much for your understanding and for the time and effort you have dedicated to reviewing this work.\"}", "{\"title\": \"Urgent Request for Your Feedback\", \"comment\": \"I\\u2019m very sorry to trouble you again, but I\\u2019m writing to humbly ask if you could kindly review our responses to your feedback. The rebuttal period is coming to a close in less than a day, and while three other reviewers have kindly accepted our paper, we are still awaiting your final input. Your feedback is incredibly important to us. We are genuinely grateful for your consideration and sincerely hope to hear from you soon. Thank you so much for your understanding and support.\"}", "{\"summary\": \"The paper propose CI-GAN, which enhances Chinese writing recognition for disabled users, generating high-quality samples and improving classifier performance significantly.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The article is clearly written and easy to understand.\\n\\n2. The motivation is clear: translating subtle movements of user\\u2019s hand into written text can help disabled people of writing. \\n\\n3. Experiments demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"1. The dataset is relatively small and lacks comprehensive coverage of the Chinese character set, which may not support the generation of more complex Chinese characters.\\n\\n2. The proposed method should be tested on other public available benchmarks with other SOTA methods, such as IAHCC-UCAS2016 and CASIA-OLHWDB.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Hoping Our Response Meets Your Expectations\", \"comment\": \"I hope this message finds you well. Considering the discussion phase has now been ongoing for 10 days, and I have yet to receive any response. This has left us feeling increasingly anxious about the status of our submission. This paper holds immense importance to us, and we have devoted considerable effort to meticulously address every concern and suggestion raised in the initial reviews. Given the contribution of our work in advancing the fields of AI for sensors, and its potential impact on improving human-computer interaction for individuals with disabilities, we are eager to receive the reviewers' feedback on our responses.\\n\\nWe sincerely believe that the experimental results and the detailed explanations provided in our response have adequately resolved the issues you pointed out. This work is not only a critical part of our research but also has the potential to make a meaningful contribution to the field.\\n\\nI humbly and earnestly request you to kindly review our responses and share your feedback at your convenience. Your input is invaluable, and I sincerely hope that our diligent efforts will meet your expectations.\\n\\nThank you so much for your understanding and for taking the time to support us in this process.\"}", "{\"title\": \"Urgent Request for Your Feedback\", \"comment\": \"I hope this message finds you well. 
The rebuttal period is coming to a close in less than a day, and while three other reviewers have kindly accepted our paper, we are still awaiting your final input. Your feedback is incredibly important to us.\\nIn response to your suggestion, even though the method you recommended and our task involve different modalities, our team worked tirelessly, without sleep, to adapt your approach to our specific task. Your feedback is extremely valuable to us, and we would be deeply grateful if you could spare a moment to review our final updates. We are genuinely grateful for your consideration and sincerely hope to hear from you soon. Thank you so much for your understanding and support.\"}", "{\"summary\": \"This paper tackles the challenge of data scarcity in Chinese writing recognition using inertial sensors by proposing the Chinese Inertial Generative Adversarial Network (CI-GAN). CI-GAN includes three innovative modules, Chinese Glyph Encoding (CGE), Forced Optimal Transport (FOT), and Semantic Relevance Alignment (SRA), to generate high-quality inertial signal samples. CGE captures the shape and stroke of Chinese characters, FOT ensures feature consistency to prevent mode collapse, and SRA aligns the semantic relevance of generated signals to their glyph structures. With CI-GAN, the authors establish a flexible data platform for Chinese writing recognition and claiming to release the first inertial-sensor-based dataset on GitHub.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of CI-GAN is a novel approach for enhancing data availability in Chinese inertial writing recognition, with modules designed specifically to tackle challenges unique to Chinese characters.\\n2. The improvement from 6.7% to 98.4% in classifier performance highlights the potential of CI-GAN-generated data to enhance recognition accuracy, indicating practical benefits for downstream applications.\", \"weaknesses\": \"1. In Figure 1, CI-GAN is presented as a framework overview, yet it lacks consistency in terminology, with CGE mislabeled as \\\"GER\\\" and FOT written in full without abbreviation. Additionally, SRA is not visually represented in the figure. This detracts from the clarity of the diagram and makes it harder for readers to grasp the full framework.\\n2. The paper\\u2019s theoretical foundation could be strengthened. The current theoretical analysis is minimal, with only a few formulas provided. More detailed mathematical explanations, particularly for FOT\\u2019s role in preventing mode collapse, would lend greater credibility to the approach.\\n3. The ablation studies are somewhat limited, and additional experiments testing more comprehensive combinations of CGE, FOT, and SRA would provide a clearer understanding of each module's contribution. More exhaustive ablation tests would validate the effectiveness of the modules individually and collectively.\\n4. The example in Figure 1, intended to illustrate the framework's application for disabled individuals, doesn\\u2019t effectively convey this purpose. Including a more relatable example that directly addresses accessibility for disabled users would better align with the stated motivation of the study.\", \"questions\": \"1. Can you clarify how Figure 1 relates to accessibility for disabled individuals, as the example seems disconnected?\\n2. Could you provide more theoretical details on the FOT component to reinforce its foundation?\\n3. 
Are more exhaustive ablation studies possible to validate the contributions of CGE, FOT, and SRA individually?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Humble Reminder on Review Status and Request for Feedback\", \"comment\": \"I hope this message finds you well. We greatly appreciate your recognition of the motivation, experiments, and clarity of our work, and we have made every effort to thoroughly address the points you raised. As we mentioned in our response, we have provided detailed clarifications on the dataset limitations and the challenges related to testing on public benchmarks. We also added further explanations on the dataset creation process and how we overcame the unique difficulties involved in generating IMU-based handwriting signals. Given the importance of your feedback to the final decision on our manuscript, we kindly request that you review the updates we made based on your suggestions. Your recognition of our efforts would mean a great deal to us, and we are eager to hear your final thoughts. Thank you once again for your time and valuable input.\"}", "{\"summary\": \"In this paper, the author proposes a method of sensor-style Chinese character data generation based on GAN. It mainly consists of three modules CGE, FOT and SRA. CGE encodes Chinese characters according to glyphs. FOT uses a ternary consistency constraint to monitor the consistency of the predicted sample, the real sample, and the glyph encoding vector. The SRA module aligns glyph and semantic encoding. The author collected 4500 samples of 500 Chinese characters, including 1500 samples in the training set and 3000 samples in the test set. The author uses the proposed CI-GAN to generate additional training sets to augment the original data set. The validity of the generated data is proved by comparing the recognition effect of training with different data quantities. The effectiveness of the proposed module is verified by ablation experiments.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper makes a strong contribution to research in accessible human-computer interaction, focusing on Chinese handwriting recognition for disabled individuals. It addresses an important issue by introducing CI-GAN, a generative model with unique modules\\u2014Chinese Glyph Encoding, Forced Optimal Transport, and Semantic Relevance Alignment\\u2014that effectively tackle the challenges of data scarcity and segmentation. According to the visualization results of Chinese glyph encodings, the module proposed in this paper is effective in encoding Chinese character shapes. The experimental results in Table 3 and Table 4 prove that the generated data has useful value.\", \"weaknesses\": \"The proposed method section does not provide sufficient comparisons and analysis. The collected real data and generated data are insufficient, and the quality of the constructed dataset is not high. The experimental section lacks important comparative experiments and analysis, making it difficult to demonstrate the effectiveness of the proposed method. The specific issues are as follows:\\n\\n1. **Methodology**: The authors propose a GAN-based generation method but do not compare its generation quality with other generative approaches, such as diffusion models or other GAN-based methods.\\n\\n2. 
**Dataset**: The dataset collected is small in scale, with data from only nine individuals and without full coverage of the complete Chinese character set. \\n\\n (1) The complexity of different Chinese characters is very different, and the author only shows the generation and classification results of relatively simple Chinese characters in this paper, it is impossible to evaluate the model's generation effect on complex Chinese characters. \\n\\n (2) Writing habits vary greatly among individuals, leading to significant differences in handwriting styles. With data from only nine participants, how can the authors ensure that the generated data quality aligns with real-world scenarios?\\n\\n3. **Experiments**: \\n\\n (1) Comparative methods lack citations.\\n\\n (2) The algorithm\\u2019s performance has not been tested on other public datasets. Whether the CIGAN generation effect can be verified on other open source datasets of Chinese character data, such as IAHCC-UCAS2016, CASIA-OLHWDB (ICDAR 2013 Chinese Handwriting Recognition Competition).\\n\\n (3) The authors did not compare their method with other high-performing algorithms for Chinese character recognition, such as the one mentioned in [1]. As far as I know, [1] achieved a recognition accuracy of 96.78% on the dataset of all Chinese characters in the Level 1 Character Set (IAHCC-UCAS2016) and 97.86% on ICDAR-2013. I suggest the authors compare their method with more state-of-the-art (SOTA) approaches.\\n\\n (4) The experiments lack further analysis, such as individual-level performance testing and performance evaluation across characters with different stroke complexities. \\n\\nThese improvements would better support the effectiveness and applicability of the proposed approach.\\n\\n[1] Gan J, Wang W, Lu K. A new perspective: Recognizing online handwritten Chinese characters via 1-dimensional CNN[J]. Information Sciences, 2019, 478: 375-390.\", \"questions\": \"Repeat:\\n1. **Methodology**: The authors propose a GAN-based generation method but do not compare its generation quality with other generative approaches, such as diffusion models or other GAN-based methods.\\n\\n2. **Dataset**: The dataset collected is small in scale, with data from only nine individuals and without full coverage of the complete Chinese character set. \\n\\n (1) The complexity of different Chinese characters is very different, and the author only shows the generation and classification results of relatively simple Chinese characters in this paper, it is impossible to evaluate the model's generation effect on complex Chinese characters. \\n\\n (2) Writing habits vary greatly among individuals, leading to significant differences in handwriting styles. With data from only nine participants, how can the authors ensure that the generated data quality aligns with real-world scenarios?\\n\\n3. **Experiments**: \\n\\n (1) Comparative methods lack citations.\\n\\n (2) The algorithm\\u2019s performance has not been tested on other public datasets. Whether the CIGAN generation effect can be verified on other open source datasets of Chinese character data, such as IAHCC-UCAS2016, CASIA-OLHWDB (ICDAR 2013 Chinese Handwriting Recognition Competition).\\n\\n (3) The authors did not compare their method with other high-performing algorithms for Chinese character recognition, such as the one mentioned in [1]. 
As far as I know, [1] achieved a recognition accuracy of 96.78% on the dataset of all Chinese characters in the Level 1 Character Set (IAHCC-UCAS2016) and 97.86% on ICDAR-2013. I suggest the authors compare their method with more state-of-the-art (SOTA) approaches.\\n\\n (4) The experiments lack further analysis, such as individual-level performance testing and performance evaluation across characters with different stroke complexities. \\n\\nThese improvements would better support the effectiveness and applicability of the proposed approach.\\n\\n[1] Gan J, Wang W, Lu K. A new perspective: Recognizing online handwritten Chinese characters via 1-dimensional CNN[J]. Information Sciences, 2019, 478: 375-390.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank You for Increasing Your Rating in Recognition of Our Improvements\", \"comment\": \"We sincerely appreciate your positive recognition of the revisions we have made, and we are grateful for the time and effort you dedicated to reviewing our work. Your support means a great deal to us, and we are pleased that we could address your concerns effectively.\"}", "{\"title\": \"Urgent Request for Your Feedback Before Rebuttal Deadline\", \"comment\": \"I hope this message finds you well. As the rebuttal period is drawing to a close, I\\u2019m writing to humbly request your feedback on our revised manuscript. Three of the reviewers have already kindly accepted the paper, and your feedback is truly essential to us. We understand that you are very busy, but if you could spare a moment to review our response, we would be incredibly grateful.\\n\\nWe sincerely hope to receive your thoughts before the rebuttal period ends.\\nThank you so much for your consideration.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces a specific Generative Adversarial Network (GAN) for generating inertial sensor-based writing signals of Chinese characters, named CI-GAN. The authors have collected a small-scale dataset consisting of 4500 signal samples from only nine individuals, without full coverage of the complete Chinese character set. Experimental results show that the synthetic signals generated by CI-GAN can significantly improve character recognition accuracy. However, the proposed method focuses on a narrow application domain, specifically the generation of inertial sensor-based writing signals for Chinese characters. This limited scope may be more suitable for specialized conferences or journals focused on pattern recognition or signal processing rather than the broader audience of ICLR. Moreover, the dataset used for training and evaluation is relatively small, thus the experimental results are not sufficiently convincing. 
Based on these considerations, the decision is not to recommend acceptance at this time.\", \"additional_comments_on_reviewer_discussion\": \"This paper was reviewed by five experts in the field and finally received diverse scores: 6, 3, 6, 5, and 6.\\nThe major concerns of the reviewers (FZkk & RmES) are: \\n1.\\tthe dataset collected is small in scale without full coverage of the complete Chinese character set, comprising data from only nine individuals\\n2.\\tthe proposed method is not compared with other generative approaches, such as diffusion models or other GAN-based methods,\\n3.\\tquestionable practical value of IMU-based handwriting recognition.\\n\\nThe authors didn\\u2019t successfully address these concerns during the discussion period. I fully agree with these concerns and, therefore, make the decision to reject the paper.\"}" ] }
BoRmf8wDZ7
Gaussian Masked Autoencoders
[ "Jathushan Rajasegaran", "Xinlei Chen", "Ruilong Li", "Christoph Feichtenhofer", "Shiry Ginosar", "Jitendra Malik" ]
This paper explores Masked Autoencoders (MAE) with Gaussian Splatting. While mainstream self-supervised learning frameworks such as MAE operate on low-level pixels, the image synthesis community has evolved to use latent, mid-level representations for better generative visual data modeling. Our approach, named GMAE, aims to reconcile these two and get the benefits of both worlds. Like MAE, it reconstructs the image end-to-end in the pixel space; however, it also introduces an intermediate, 3D Gaussian-based representation and renders images via splatting. We show that GMAE can enable various zero-shot learning capabilities (e.g., figure-ground segmentation, image layering, edge detection, etc.) while preserving the high self-supervised representation quality from MAE. Notably, we are the first to employ Gaussian primitives in an image representation learning framework beyond optimization-based single-scene reconstructions. We believe GMAE will inspire further research in this direction and contribute to developing next-generation techniques for modeling high-fidelity visual data.
[ "Representation learning", "Gaussian Splatting" ]
Reject
https://openreview.net/pdf?id=BoRmf8wDZ7
https://openreview.net/forum?id=BoRmf8wDZ7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwLTIXpQOQ", "zNRCcTbZ2C", "wX3AzCvZFd", "wR5miZlnkG", "vrZ5msFpkI", "vChnFVxXcz", "v1S46WaPPC", "tONpLHR470", "lyWWBUvzNY", "l0uF5PWem4", "kOMBXEyZ2r", "irjHYd8ozy", "igWVPLEfqM", "gCjfLjBEK0", "e5x9x7gMrm", "ZaIVaPPLmA", "TKJd9FAWjf", "P0yEDUjQoa", "LuJTczx0C4", "GmNIF4FlqZ", "CuwHokrvmm", "8EAV5rHGUO", "2X8Tvnp4BA", "0tN7f00zeR" ], "note_type": [ "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730267866650, 1730658377555, 1734464261950, 1732417334332, 1732262289216, 1733279108436, 1730825159793, 1732490170430, 1732262239476, 1732490142007, 1732262324754, 1732262262235, 1730017782798, 1733279113050, 1732646009122, 1733289801579, 1730701820296, 1732262187434, 1737524175663, 1732262359548, 1732407483820, 1732262149813, 1732567659459, 1730674483348 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_Zk2V" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_PWCZ" ], [ "ICLR.cc/2025/Conference/Submission12248/Area_Chair_6S5U" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_PWCZ" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_5Et3" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_PMrf" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_5Et3" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_qLSS" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_Zk2V" ], [ "ICLR.cc/2025/Conference/Submission12248/Authors" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_uCko" ], [ "ICLR.cc/2025/Conference/Submission12248/Reviewer_uCko" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors present a self-supervised image representation learning method to extend Masked Autoencoders with 3D Gaussian Splatting framework. The general framework is a ViT based auto-encoder which takes masked patches from given images as input. The key idea is that instead of predicting image patches, the ViT based decoder regresses the parameters of 3D Gaussians for further rendering. To validate the importance of this technical upgrade, the authors perform comparisons on both supervised tasks and unsupervised tasks as well as ablation studies on different training mechanisms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Generally, the paper is well written. 
I can understand it easily.\", \"As for me, the idea of applying learned 3D Gaussians as primitives for downstream tasks in unsupervised learning is novel and critical to the computer vision community.\", \"The authors perform evaluation for both supervised and unsupervised tasks to show the empirical significance of their 3DGS based network upgrades.\"], \"weaknesses\": \"+ The evaluation datasets and used\\u00a0baselines seem to be a bit outdated. The latest baselines (MAE and\\u00a0MAE-VQGAN)\\u00a0were published in 2022 while the latest testset (PASCAL) was published in 2015. Could the authors evaluate their method on some datasets listed in Figure 8 of SAM [1] with modern large-scale unsupervised learning methods? For example, datasets like COCO-Stuff or ADE20K? And baselines like SAM or DINO v2? Or other related datasets and baselines?\\n+ There are no failure case examples to justify the possible future work of GMAE\\u00a0method.\\u00a0Ideally, the failure cases might reveal limitations in the Gaussian representation and highlight scenarios where the method struggles compared to pixel-based approaches.\\n+ Some typos which include:\\n1. L313, \\\"the ViT base model\\\" --> \\\"the ViT based model\\\";\\n2. L533, the \\\"For example,\\\" are repeated twice.\\n\\n[1]\\u00a0Segment Anything, ICCV 2023\", \"questions\": [\"As mentioned in L200 and L227, how to get the query tokens? What these tokens could be?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces GMAE, a method that integrates MAE with Gaussian Splatting. It claims that GMAE is a better way to represent mid-level features. Instead of reconstructing masked pixels, GMAE predicts a set of Gaussians, each parameterized by a 14-dimensional vector. The authors have meticulously designed the model, and some observations align closely with those of MAE. Empirical results demonstrate that GMAE achieves performance comparable to MAE on supervised tasks and exhibits satisfactory zero-shot capabilities in unsupervised tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea looks intriguing to me. The non-uniformity inherent in Gaussian representation distinguishes it from traditional patch-based methods.\", \"Experimental results are comprehensive and convincing.\", \"The numerous visualizations offer an intuitive grasp of the methodology and outcomes.\", \"The paper is well-written and easy to follow.\"], \"weaknesses\": [\"In supervised tasks, the model primarily utilizes the ViT encoder, without incorporating Gaussian representations. The effectiveness of Gaussian representations is demonstrated in unsupervised tasks. Demonstrating a positive impact on image generation would significantly enhance the paper\\u2019s contributions.\", \"The limited number of Gaussians employed constrains the model\\u2019s reconstruction capabilities for image generation. If increasing the Gaussian count presents a bottleneck, this limitation could hinder its application in image generation tasks.\", \"Overall, my main concern lies in the scalability of the method. 
But as an initial attempt, I think the paper is above the acceptance threshold .\"], \"questions\": [\"In line 089, the statement \\u201cthe addition of splatting increases compute time by 1.5%\\u201d would be more informative if the authors provided the absolute compute times for both MAE and GMAE, facilitating a clearer comparison.\", \"In Figure 7, arranging the visualization of Gaussian layers from shallow to deep depths could be more intuitive, as objects closer to the camera often hold greater significance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces a novel combination of MAE and Gaussian Splatting but fails to demonstrate significant advantages over standard MAE in key benchmarks, with zero-shot results like edge detection and figure-ground segmentation remaining weaker than simple baselines. The proposed Gaussian representation, while intriguing, the claimed frequency-based depth layering remains unconvincing. Additionally, the evaluation on outdated datasets and limited comparisons with state-of-the-art methods further undermines the practical impact and scalability of the approach.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about limited comparisons to state-of-the-art methods (e.g., DINOv2, SAM) and outdated datasets, weak zero-shot task performance, and unclear justification for using Gaussian Splatting over alternatives like NeRF or MPI. The authors addressed these by adding depth estimation results (outperforming DINOv2), scaling Gaussian representations to 4096 Gaussians with residual MLPs, clarifying the emergent layering effect, and committing to further evaluations on modern datasets. While these updates demonstrated scalability and emergent properties, I agree with the reviewers and I am unconvinced about the practical advantages and broader impact of the approach.\"}", "{\"comment\": \"Thank you for your responses! However, I am not entirely clear on how the residual gaussian process works. Specifically, how does a k-layer MLP project changes from 256 gaussians to k*256 gaussians? Could you please elaborate more on this process? And during inference, how does this residual gaussian work? Will this introduce significant additional computation/latency?\"}", "{\"comment\": \"We thank the reviewer for comments on our intriguing idea, meticulously designed model, and satisfactory zero-shot capabilities. Here we answer the questions the reviewer is asking, and we are happy to answer more questions or run more experiments for the rebuttal.\\n\\n**\\u201cDemonstrating a positive impact on image generation would significantly enhance the paper\\u2019s contributions.\\u201d**\\n\\nWe agree! However, our current goal is to show the representation capabilities of the self-supervised models rather than their generation capabilities, which would require re-thinking the model, for example, using DiT models. While this is not the scope of the paper, to archive this number of gaussians was one of the main bottlenecks, which the reviewer also mentioned. Based on the new results we have fixed this problem and were able to scale up to 4096 gaussians. This could be a potential tokenizer to train generative models now, however since that is beyond the scope of this work, we will leave it for future work. 
\n\n**\u201cIf increasing the Gaussian count presents a bottleneck, this limitation could hinder its application in image generation tasks.\u201d**\n\n\nWe agree with the reviewer that the limit on the number of gaussians is a limitation in the current model, and in the current design we are limited by memory. We tried to solve this problem by learning more gaussians step by step, as residual gaussians. First, we pretrained the model with 256 Gaussians, and after that, we initialize k MLP layers to project changes in 14 features to k*256 Gaussians. These MLP layers essentially learn the small changes from the main 256 gaussians. At the start these MLP layers are initialized with zero weights, hence they don't affect the reconstruction for the original 256 gaussians. \n\nWith this setting, we were able to train up to 4096 gaussians, but we are not limited to this (we will update here with more gaussians), since it is not limited by memory anymore. Below we show the reconstruction FID as we increase the number of gaussians step by step. Finally, we also fine tune the model (4096*) without masking and on full reconstruction, and it achieves 18 rFID, without perceptual loss or VAE. \n\n| Number of Gaussians | rFID |\n|---------|--------|\n| 256 | 89.45 |\n| 1024 | 80.32 |\n| 4096 | 63.87 |\n| 4096* | 18.71 |\n\n\n**\u201cWould be more informative if the authors provided the absolute compute times for both MAE and GMAE\u201d**\n\nMAE training on V100 GPUs takes on average 0.6471 seconds (standard deviation: 0.0209) over 10 samples for the forward and backward pass, vs GMAE, which takes on average 0.7044 seconds (standard deviation: 0.0053) over 10 samples including the forward, rendering, and backward pass. This adds only a small overhead to the training, and recent advances in gsplat optimizations can also be incorporated to give faster training. \n\n**\u201cArranging the visualization of Gaussian layers from shallow to deep depths\u201d**\n\nThanks for this suggestion, we have added figures from shallow to deep in the appendix on lots of unfiltered samples.\"}", "{\"comment\": \"**gaussians by 16x**: We have added a figure in the updated paper, in section A2 figure 13. The residual MLPs act on the latents from the decoder to project small changes to the initial Gaussians. The decoder produces a (256 * d) vector (d is the hidden_dim of the decoder). In our pretrained model we only had one MLP-layer to project this to (256 * 14) Gaussians. Now, after this model is pretrained, we add k new MLP-layers, each the same as before (to project from d to 14 dim). New Gaussians are taken as initial Gaussians + small changes learned by the residual heads. This process is also explained in section A2 figure 13.\n\nIn terms of computation, this does not add any significant overhead, since we only had a few small MLP heads (in our case maximum 16), and the splatting and rendering was not affected by more Gaussians. GMAE (with 256 Gaussians) takes on average a mean time: 0.7044 seconds and a standard deviation: 0.0053 for 10 samples including forward, rendering, and backward pass. GMAE (with 256*16=4096 Gaussians) takes on average a mean time: 0.7093 seconds and a standard deviation: 0.0054 for 10 samples including forward, rendering, and backward pass. This is not a significant increase in compute. However, we still need at least 1 epoch of finetuning to get a better rFID than the pretrained model. 
But this can be sped up with better hyper-parameters.\\n\\n**why not just using a layered image representation**: We agree with the reviewer that MPI is another valid representation. In the same spirit Gaussians are also another valid representation. The scope of this work is not to compare these intermediate representations but rather show the benefits of using Gaussians as intermediate representations. \\n\\n**main advantage**: We agree with the reviewer that we don't have evidence of very strong applications yet, but we hope this would need more exploration and our work is useful for the research community as a first step.\"}", "{\"summary\": \"This paper proposes to use 3D Gaussians a-la Gaussian Splatting as a mid-level representation that are predicted from masked input patches a-la Masked Autoencoders. Training is done by splatting gaussians differentiably to render RGB from a fixed camera using MAE training losses (unnormalized). Authors pre-train a ViT-B for 400 epochs using this method and show a slight performance drop vs. regular MAE training on Imagenet classification and COCO instance segmentation.\\n\\nThey also show that the predicted gaussians are layered in depth from the fixed camera by frequency. The authors attempt to utilize this property to show some results on zero-shot edge detection and figure-ground segmentation, where they perform similarly or worse to a very selected set of baseline methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality:\\nThe proposed method of using 3D gaussians as their intermediate representation is original and interesting. However, the related work section misses very related work and focuses on some more irrelevant topics (discussed more later)\", \"quality\": \"The proposed representation seems to learn better reconstructions compared to MAE. However, beyond this, I personally do not agree with the proposed evaluations to show the benefits of this representation. No comparisons are made to any other similar intermediate representations that could be thought of (discussed more later).\", \"clarity\": \"The paper is presented clearly and laid out well. Some statements are misleading (discussed more later).\", \"significance\": \"I believe this paper has potential to be significant to the community. However, the current results show that GMAE does not improve over MAE meaningfully (and is evaluated very sparsely across the design space) and the learnt gaussians are not very useful in the zero-shot case even in the tasks that the authors decide to focus on i.e. figure-ground segmentation and edge detection.\", \"weaknesses\": \"\\\\textbf{Related Work:}\\nThe paper does not talk about any related work on using mid-level representations in vision beyond using learned \\\"tokens\\\". The authors misrepresent MAE as only training for pixel reconstruction. MAE has an ablation experiment where they also use tokens to explore the \\\"best of both worlds\\\" approach that the authors suggest they take. MAE-VQGAN proposed in Bar et al. 2022 is also a tokenized MAE learner. Other mid-level representations can be thought of that are similar to this method. For example, one could directly predict a multi-plane representation and render it. One could use superpixels a-la superpixel sampling networks (Jampani et al.) as the mid-level representation. There is no discussion on other possible methods and prior mid-level representations used in vision. 
Other papers have proposed losses that learn self-supervised grouping, (which is one of the benefits according to the authors), such as those based on Slot Attention or Leopart (Ziegler et al, CVPR 2022). In the discussion, the paper claims -- \\\"Nonetheless, we have shown that one no longer has to choose between pixels and latent representations for visual modeling.\\\". This is misleading compared to related work as mentioned above. \\n\\n\\\\textbf{Why is the gaussian representation better?}\\nThere are claims across the paper that the gaussian representation is better due to its efficiency (the proposed model is slower than MAE while performing worse), due to its non-isotropic representation vs. grids (no comparisons are made to back the claim that this is useful for pre-training). The only real benefit shown in the paper is that GMAE reconstructions are higher-fidelity as opposed to MAE. However, the authors immediately claim \\\"L362: As a result, our reconstructions can be used directly for other tasks without needing to add a GAN loss or an upsampling layer on top.\\\" which is again unsubtantiated in the paper. Which other methods need a GAN loss or upsampling layer on top? The other tasks proposed here are figure ground segmentation and edge detection, where the model performs poorly overall. Discussed more in the next section. \\n\\n\\\\textbf{Frequency clustering in depth}\", \"the_authors_make_the_following_claims\": \"\\\"This may be due to the fact that with random initialization, the points closer to the camera represent low-frequency information, while the points far from the camera model the high-frequency information\\\"\\n\\n\\\"The layer-wise rendering highlights the model\\u2019s ability to separate objects and represent them in distinct frequency layers\\\"\\n\\n\\\"In the real world, backgrounds tend to have low-frequency regions while objects usually have high-frequency details. This correlation leads to our zero-shot results.\\\"\\n\\nThese are incompatible claims and I think these are mis-leading when looking at the results. Objects are clearly not separated across frequencies. Low frequency shapes of most objects seem to be captured in the initial layers and higher frequencies of their shapes in later layers. Figure 6 and 7 corroborate this. Claiming that objects are separated and represented in distinct frequency layers does not appear true from the results and does not follow the prior claim of frequency based clustering. Individual instances of objects are not separated in any way. The edge detection results show lots of spurious edges coming from the gaussian representation which only make edge prediction worse. The argument that backgrounds tend to have low-frequency regions while objects.. is barely enough to make the claim that objects are separated in the model. The examples shown are few and relatively simple with one bird on a tree and clear background. Yet, the model is unable to separate the tree branches from the bird, and even the bird is not clearly segmented. I believe the assertion that frequency based depth ordering happens. The follow-up claim that this leads to emergence of objects or even parts is a stretch. \\n\\n\\n\\\\textbf{Experimental details and strength of results}\\nThe authors only train ViT-B for 400 epochs. The authors could have pre-trained for 1600 epochs, or tried a ViT-L architecture. 
Currently there is no clarity whether this approach will scale to a larger ViT or if it will continue to improve with additional training as MAE does. The ablation studies over c, masking ratio and loss masking, normalization and the usage of batch size 4096 show that sufficient GPU resources were used in pre-training. At least pre-training ViT-B till 1600 epochs should have been possible for the authors. It would be very useful to add these results. Without these results, it is impossible to verify whether GMAE scales as MAE does. \n\nFor the figure-ground segmentation results, there are no details on the experiment. What layer was used for figure ground segmentation in the layering? No discussion on the baselines is presented. Models such as Leopart (Ziegler et al.) need to be compared. Their results on zero shot segmentation are way more advanced while not needing a sparse gaussian representation that the authors claim is the reason why their figure ground segmentation results are strong.\n\nThe edge prediction results are worse than using a Sobel filter for edges. There are clearly numerous spurious edges in the qualitative result that probably come from gaussians that represented interior regions of objects that do not correlate with any real edges.\", \"questions\": \"Please address the issues brought up in the weaknesses section.\n\nExperimental results on ViT-B trained to 1600 epochs and ViT-L could be very useful, but I understand that it is unreasonable to ask for these results within the rebuttal period. \n\nI believe the authors need to focus more carefully on their evaluation. If depth based frequency layering is all that the model achieves over standard MAE, could this be done without using this intermediate representation? What other tasks can be helped by such a representation? Clearly figure-ground segmentation and zero-shot edge detection do not benefit from this method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your question. We have added a figure in the updated paper, in section A2 figure 13. The residual MLPs act on the latents from the decoder to project small changes to the initial Gaussians. The decoder produces a (256 * d) vector (d is the hidden_dim of the decoder). In our pretrained model we only had one MLP-layer to project this to (256 * 14) Gaussians. Now, after this model is pretrained, we add $k$ new MLP-layers, each the same as before (to project from d to 14 dim). New Gaussians are taken as initial Gaussians + small changes learned by the residual heads. This process is also explained in section A2 figure 13.\n\nIn terms of computation, this does not add any significant overhead, since we only had a few small MLP heads (in our case maximum 16), and the splatting and rendering are not affected by more Gaussians. GMAE (with 256 Gaussians) takes on average a mean time: 0.7044 seconds and a standard deviation: 0.0053 for 10 samples including forward, rendering, and backward pass. GMAE (with 256*16=4096 Gaussians) takes on average a mean time: 0.7093 seconds and a standard deviation: 0.0054 for 10 samples including forward, rendering, and backward pass. This is not a significant increase in compute. However, we still need at least 1 epoch of finetuning to get a better rFID than the pretrained model. 
But this can be sped up with better hyper-parameters.\"}", "{\"comment\": \"We thank the reviewer for their comments on our interesting approach, wide variety of experiments and clear writing. Here we answer the questions the reviewer is asking, and we are happy to answer more questions or run more experiments for the rebuttal.\\n\\n**\\u201cThe number of Gaussians used in GMAE is significantly lower than reconstruction methods\\u201d**\\n\\nWe agree with the reviewer, that the limit on the number of gaussians is a limitation in the current model (it's a limitation as this is the first time someone combines Gaussians to a representation learning based framework), and the current design we are limited by memory. We want to point out that we are not focused on reconstruction quality here and the focus is on the meaning of the learned Gaussians. However, here we modify the architecture to show the potential of further increasing the number of Gaussians.\\n\\nWe tried to solve this problem by learning more Gaussians step by step, as residual gaussians. First, we pretrained the model with 256 Gaussians, and after that, we initialize an k MLP layers, to project changes in 14 features to k*256 Gaussians. These MLP layers essentially learn the small changes from the main 256 gaussians. At the start these MLP layers are initialized with zero weights, hence they don't affect the reconstruction for the original 256 gaussians. \\n\\nWith this setting, we were able to train up to 4096 gaussians, but not limited (we will update here with more gaussians), since it is not limited by the memory size anymore. Below we show the reconstruction FID as we increase the number of gaussians step by step. Finally, we also fine tune the model (4096*) without masking and on full reconstruction, and it achieves 18 rFID, without perceptual loss or vae. \\n\\n| Number of Gaussians | rFID |\\n|---------|--------|\\n| 256 | 89.45 |\\n| 1024 | 80.32 |\\n| 4096 | 63.87 |\\n| 4096* | 18.71 |\\n\\n\\n\\n\\n**\\u201cAny benefits to manually reducing redundant DoFs when training such models?\\u201d**\\n\\nThe 14 DoFs includes 3 for location, 3 for color, 4 for rotation as quaternion, 3 for scale and 1 for opacity. If we treat all GSs as 2D instead of 3D living on the image plane, then we could reduce the DoFs to 11 (2 for location, 2 for rotation and 2 for scale). But in this case we lose the ability to model the image as \\\"layers\\\" because all GSs effectively placed at the same depth level. Since, our aim is to have this extra degree of freedom to allow the model to place gaussians at different depth levels, so that we can get these emergent zero-shot capabilities. We will train a model with 2DGS, and get back before the discussion period ends. \\n\\n**\\u201cit would be interesting to compare this model with MAEs in a depth-prediction\\u201d**\\n\\nWe thank the reviewer for this valuable feedback, and we have evaluated our models on depth estimation tasks. Please find the depth estimation results below, on NYU depth estimation tasks, GMAE outperforms dinov2 models and perform the same as MAE or slightly better. 
\n\n| Model | RMSE |\n|---------|--------|\n| DINOv2 | 0.4761 |\n| MAE | 0.4345 |\n| GMAE | 0.4336 |\n\n\n**\u201cHow well does a decoder generalize to Gaussian counts other than the one it was trained on?\u201d**\n\nAt test time we can only infer with a fixed number of gaussians with the current design, but we plan to explore training the models on a Matryoshka style loss, which can allow us to use a variable number of gaussians at test time and gives coarse-to-fine reconstructions as we increase the number of gaussians. We will add a discussion regarding test time scaling of the number of gaussians in the paper.\"}", "{\"comment\": \"We thank the reviewers for their feedback and thanks for their suggestions. We are still working on getting results on COCO-stuff and ADE20K. We will try our best to get these results before the end of the discussion period.\"}", "{\"comment\": \"Official Response by Author for the Reviewer Zk2V\n\nWe thank the reviewer for comments that mention that our \u201cidea is novel and critical to the computer vision community\u201d, and that acknowledge our well written paper and wide range of experiments. Here we answer the questions the reviewer is asking, and we are happy to answer more questions or run more experiments for the rebuttal and we fix the typos in the paper. \n\n\n**\u201cThe evaluation datasets and used baselines seem to be a bit outdated\u201d**\n\nFor edge detection, even SAM is evaluated on the BSD500 dataset, and for the segmentation tasks, we are following the protocol and the code in MAE-VQGAN. We have added new results on NYU-scenes on depth estimation. We appreciate the suggestion to compare to DINOv2 and added it as an additional baseline. We will add more tasks which are relevant for this during this discussion period. \n\n| Model | RMSE |\n|---------|--------|\n| DINOv2 | 0.4761 |\n| MAE | 0.4378 |\n| GMAE | 0.4336 |\n\n\n**\u201cThere are no failure case examples to justify the possible future work of GMAE method.\u201d**\n\nThanks for bringing up this point. We did share unfiltered, randomly chosen samples from our models, in appendix figure 12. A few failure modes we would like to list are a) in the cases of layering, sometimes the layers are still not fully disentangled from colors. b) another limitation of this work is the use of L2 loss, which tends to give results which are blurry, compared to, for example, diffusion style loss. c) our model suffers from a feedforward vs optimization tradeoff: an optimization approach like gsplat training would give better results while being slow, while GMAE, on the other hand, is a fast feedforward approach but does not give the best reconstruction results. d) the number of gaussians was another main limitation of our work, however, based on the new results, we have fixed this by learning additional residual gaussians. Now we can learn up to 4096 gaussians. \n\n\u201cHow to get the query tokens? What could these tokens be?\u201d\nQuery tokens are learned tokens, which were trained end-to-end during the training of the encoder and the decoder, and we initialize them randomly with zero mean and 0.02 variance. At test time, the query tokens and the latent tokens from the encoder are concatenated and passed through the decoder, and the query tokens are projected into gaussians for rendering. 
We will make this point more clear in the updated manuscript.\"}", "{\"comment\": \"We thank the reviewer for comments on our well written paper, high quality visualizations, and good quality of a scientific paper. Here we answer the questions the reviewer is asking, and we are happy to answer more questions or run more experiments for the rebuttal.\\n\\n**\\u201cThe method looks very unnatural and simply combines 2 popular ideas: 3d gaussians and MAEs.\\u201d**\\n\\nWe agree with the reviewer, these are two bit orthogonal approaches. However, we'd like to cast the comment with positive light, being **very unnatural** meaning that we are doing something novel, and we showed that Gaussian and MAE can be integrated into the same framework. If **simple** combination can already show interesting results, it would be valuable to explore and share these findings with the community. \\n\\nour combination is unique and we showed some interesting results. We were interested in learning from large scale images, and MAE is proven to be a best approach to learn good **semantic** representations, on the other hand, Gaussians are good way to learn **structure/geometry**. A combination of these two approaches allowed us to learn good semantic representations (as shown in ImageNet, COCO experiments), as well as some geometry based on 2.1d representations (layering and edge detections). \\n\\nIn addition to this, our new results on depth estimation shows it is more competitive than dinov2, and also our new results on scaling the number of gaussians shows this could be helpful with image generation tasks. \\n\\n**\\u201cZero-shot capabilities are not convincing\\u201d**\\n\\nOur goal is to demonstrate that these capabilities emerge purely from self supervised pre training objectives, without any task specific design choices. None of the results were explicitly trained for the zero-shot tasks. While we agree that task-specific models which were trained for these tasks might get better results, we only showed few zero-shot capabilities by only training on self-supervised objectives. We believed there might be more tasks which can be explored, from our models. We only showed a few examples, but since these are tasks the model was not optimized for, there might be more tasks that can be done with our models. Could the reviewer please clarify regarding \\\"generative methods\\\" or \\\"generative multi-plane images\\\", which shows zero-shot capabilities also emerge while the models are not specifically pre-trained for those tasks.\\n\\n**\\u201cI would hope to see is having some 3D capabilities\\u201d**\\n\\nit would be impossible to get full 3D representation from a large scale collection of 2D images from different scenes. We aim to get to some level higher than 2D and learn a 2.1D representation which would allow us to learn layering of the objects and scene. Hopefully, with addition of videos to the training data, we may be able to learn some slightly higher than 2.1D representations. We leave this exploration for future work. \\n\\n\\n**\\u201cWhat exactly is the main advantage of the proposed model? For which use-case would one realistically choose to use it?\\u201d**\\n\\nWe agree the tasks we have are not fully utilized, and if we care about a single zero shot task, we can simply use the best methods available there to get better performance. But rather, this work shows there is another way to train self-supervised models and how to get zero-shot capabilities from these models. 
From this we showed that a subset of zero-shot capabilities emerges from GMAE models, but this is not an exhaustive set, and it opens more possibilities to explore more zero-shot capabilities in self-supervised models as a meta task. \n\nIn addition to this, we also showed that in the depth estimation case GMAE performs better than dinov2 and the same as MAE self-supervised models. We also showed a new way to increase the number of gaussians by a 16x factor, by learning residual gaussians. This could be used as a better initialization for gaussian splatting, or for generative models. \n\nFinally, we wanted to address that, equipped w/ Gaussians, we can unlock the use of decoders after MAE pre-training. Traditional SSL methods only care about the encoders; we show that with the Gaussians in the decoder, we can actually use the Gaussians to do more tasks (zero-shot). Of course these are some initial positive signals, but we believe these signals are interesting and worth sharing with the community.\n\n\n**\u201cHow much slower each training iteration has become compared to MAE?\u201d**\n\nMAE training on V100 GPUs takes on average 0.6471 seconds (standard deviation: 0.0209) over 10 samples for the forward and backward pass, vs GMAE, which takes on average 0.7044 seconds (standard deviation: 0.0053) over 10 samples including the forward, rendering, and backward pass. This adds only a small overhead to the training, and recent advances in gsplat optimizations can also be incorporated to give faster training.\"}", "{\"summary\": \"This paper proposes a new self-supervised learning method called Gaussian Masked Autoencoder (GMAE) which combines the advantages of pixel-level and intermediate representation learning. It uses 3D Gaussian distributions as intermediate representations to capture richer image information and improve image generation and processing abilities. GMAE achieves comparable performance to supervised learning while significantly improving zero-shot learning tasks such as segmentation, layering, and edge detection. The paper also demonstrates the potential of GMAE in downstream tasks like visual recognition and object detection.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a novel approach to self-supervised learning by introducing Gaussian Masked Autoencoder (GMAE) that utilizes 3D Gaussian distributions as intermediate representations. This idea is different from traditional methods that use pixel-level reconstruction, and the authors demonstrate the effectiveness of this approach through experiments.\n\n2. The paper is well-written and easy to understand. The authors provide details about the implementation and evaluation metrics, making it possible to replicate the results. The empirical results presented in the paper are convincing and support the claims made by the authors.\", \"weaknesses\": \"1. Lack of comparison to state-of-the-art methods: The paper does not compare the proposed method to other existing methods for self-supervised learning, such as contrastive learning or clustering-based methods. This makes it difficult to assess the relative performance of GMAE compared to other approaches.\n\n2. I didn't understand the immediate motivation for choosing Gaussian splatting to enhance MAE in this paper. What qualities does Gaussian splatting have to help MAE? Can Gaussian splatting be replaced with NeRF or other 3D representations?\", \"questions\": \"See Weaknesses. 
I will change my rating based on the responses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Related works and Introduction:** We have fully revised the paper to accommodate the changes from the reviewer. Please see the updated manuscript. We have included super pixel sampling networks, multi-plane images, leopart clustering and slot-attention.\\n\\n*Mid-level Representations: Image can be constructed by operating functions on some representations. One line of approach keeps the representations in the latent spaces, and uses a pretrained decoder network to reconstruct the image. VAE (Kingma, 2013) with image synthesis (Rombach et al., 2022; Li et al., 2023) are good examples of this case, along with MAE He et al. (2022) and BEiT Bao et al. (2021). Other lines of approaches follow structured representations to represent an image. There are various such options: super-pixels, Gaussians, SVG code, and multi-plane images, etc. For example, Super-pixel Sampling Networks (Jampani et al., 2018) learns to predict super-pixels as the representation to reconstruct and to predict segmentations and flow. Multi-plane images is another way to represent an image (Tucker & Snavely, 2020), where an image is composed of multiple layered planes and can be learned end-to-end. There are hybrid approaches also. For example, Slot Attention (Locatello et al., 2020) learns an intermediate representation for objects by adding a bottleneck in the model architecture. Similarly, Leopart (Ziegler & Asano, 2022) learns to cluster the patches based on self-supervised clustering. In this paper, we take another approach which uses 3D Gaussians as intermediate representations to reconstruct an image.*\\n\\n**1600 epoch results**: We have pretrained a vit-l model on 1600 epochs, and then fine tuned for imagenet classification. This model achieved 85.0% on imagenet classification accuracy. This shows that our model does scale with model size and training epochs\\n\\n**specific losses**: We agree with the reviewer that utility outweighs here on Dino case. We believe that our approach is the first step towards learning both semantics and geometric representation and we hope this will open up further research opportunities.\\n\\n**Frequency based clustering**: We also agreed on this on the paper. We showed that the correlation on depth and frequencies are the reason we get a layering effect. This is explained in section 4.4. \\n\\n**MAE vs GMAE on depth**: As mentioned in the paper, our goal is to not get better than MAE, but rather get more capabilities of vision models, via self-supervised pre-training.\"}", "{\"comment\": \"Thanks to the authors for their rebuttal.\", \"re\": \"mid-level representations, I should have also mentioned multi-plane images (see Tucker et al. as an example). The novel-view synthesis literature has explored various mid-level representations over the years that can be re-rendered from novel views. Gaussian Splatting and Nerfs also descendants from work in this literature. A multi-plane image representation might have all the same depth layering benefits that are proposed here.\\n\\nArguing that GMAE doesn't use \\\"specific losses\\\" like SSNs or Slot Attention definitely says that the proposed method is simpler in comparison. However, it has nothing to say about whether it is better since all these methods are self-supervised at the end of the day. 
As an analogy, DinoV2 is widely used now and is far from simple in terms of the different losses used. However, they show how each component of their loss is important and its impact in the community is undeniable. So I hope the authors understand why I do not buy into this distinction.\\n\\nMy argument about frequency clustering is still valid. In the layers, it can be seen that it's not just depth, but the different frequency parts of objects that are differentiated in depth. In Figure 7 (which are all very simple examples of a single bird on a clear background), it can be seen that low-frequency colours on the birds show up in early layers, followed by high frequency details in later layers. Given how gaussian splats are rendered and because GMAE does not need to model multi-view accuracy, it makes sense that the gaussians are not placed on direct geometry but instead lower frequencies are placed closer to the camera followed by higher frequencies away from the camera. This is also clear since the sky and background shows up in earlier layers.\\n\\nThanks for adding the depth estimation results! It's great to see that GMAE beats DinoV2 there. However, the fact that there is no significant difference from MAE is also concerning. The major argument across the paper is that there is emergent depth layering, shown through experiments where the results are far from convincing. However, if there is no direct effect on improving downstream depth estimation itself, then the argument in the paper becomes much weaker.\\n\\nOverall, the results in this paper are far from convincing that there is any benefit to using single image Gaussian predictions that are splatted for rendering for representation learning. For edge detection, the authors say, \\\"Rather, our goal is to demonstrate that by just employing a self-supervised MAE loss, we can get these capabilities to emerge without designing specific datasets or architecture for each of these tasks individually\\\". I would like to clarify here that the authors have not shown any useful edge detection capabilities that cannot be solved with a simple filter. These filters are not SOTA, they are extremely simple image processing methods. \\n\\nOverall, I would like to reiterate that the idea in this paper is novel. However, I haven't seen experimental evidence in this paper that this novel idea is useful. I believe it needs more work and exploration to find merits of this work, which are not in figure-ground segmentation or edge detection in the way that is presented in the paper. Depth estimation is a great addition and the early results show no significant difference to MAE. \\n\\nI haven't seen any evidence here to improve my proposed score of 3 and urge the authors to work further on this paper, since it is an interesting idea that perhaps needs a different application than proposed here.\"}", "{\"comment\": \"**\\\"MAE + 3DGS\\\"**:\\n\\nThank you for sharing your detailed perspective. We appreciate the value of impactful A + B papers and agree that their influence lies in demonstrating meaningful synergies between the components. This principle has guided our efforts in this work, and we would like to provide further clarity on why we believe the combination of MAE and 3DGS presents a valuable and insightful contribution.\\n\\nMAE excels as a powerful framework for representation learning, particularly for the encoder. However, its design inherently discards the decoder post-pretraining, leaving its potential for downstream tasks unexplored. 
In contrast, 3DGS is a lightweight yet effective framework for 3D reconstruction, with untapped potential in representation learning. By integrating 3DGS into the decoder of MAE, our proposed GMAE synergistically combines the strengths of both approaches, resulting in:\\n- **Zero-shot capabilities** within the decoder, a novel property that the standard MAE framework does not enable.\\n- **Enhanced representation learning for 3DGS**, unlocking its utility beyond traditional 3D reconstruction tasks.\\n\\nWe believe these results reflect a meaningful synergy, showcasing how the strengths of MAE and 3DGS complement each other to address their respective limitations.\\n\\nRegarding your comment on the \\u201cunnatural\\u201d combination, we apologize for the misinterpretation and casted our excitement into the response. On the other hand, while we agree the insightfulness depends on the results, we also want to respectfully point out that the interpretation of the results can vary among researchers. From our perspective, GMAE not only provides novel capabilities but also addresses gaps in both constituent methods, which we believe advances the field.\\n\\nFinally, while we acknowledge that many potential A + B combinations are possible, such possibilities *do not diminish* the contributions of our work. Each combination must be evaluated on its own merits, and we hope the insights and illustrations provided in the work can already justify the *value* of GMAE for the research community.\\n\\nThanks again for the feedback and holding us to a high standard.\"}", "{\"summary\": \"The paper introduces Gaussian Masked Autoencoders (GMAE), an extension of MAEs that integrates a learned intermediate Gaussian representation, rendered into an image using Gaussian splatting. This intermediate representation is shown to offer several benefits over fixed patches, such as foreground-background separation, edge detection, and image layering, while achieving performance on par with or even surpassing standard MAEs in image recognition and downstream tasks. The authors support their claims with a comprehensive suite of experiments.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper explores an interesting topic of adding additional inductive biases to self-supervised image representation learning techniques.\", \"The writing is clear and well-structured.\", \"The experiments section includes a wide variety of downstream applications and comparisons.\"], \"weaknesses\": [\"As talked about in the Discussion section, the number of Gaussians used in GMAE is significantly lower than the quantities typically used in scene reconstruction applications, where Gaussian splatting is well-known. This is because each Gaussian corresponds to a unique token in the lightweight decoder, so increasing their number would cause considerable slowdowns.\", \"Minor typo on Line 503: Fig 12 \\u2192 Fig 11\"], \"questions\": [\"The GMAE model assumes a fixed camera projection for its rendering. Therefore, it most likely does not need all 14 degrees of freedom that is normally used in scene reconstruction applications. Do you think there is any benefits to manually reducing these redundant DoFs when training such models?\", \"Since the gaussian representation introduces a 3D inductive bias for the model, it would be interesting to compare this model with MAEs in a depth-prediction downstream task. In theory, GMAE should be much better equipped for solving such tasks. 
Do you expect to see a significant improvement over MAEs, or would the limited number of gaussians not allow for such a thing?\", \"As discussed in Section 4.1, the decoder is decoupled from encoder tokens, allowing the number of Gaussians to be increased arbitrarily after training. However, it\\u2019s mentioned that four separate models were trained to decode 64, 128, 256, and 512 gaussians, respectively. How well does a decoder generalize to Gaussian counts other than the one it was trained on? For example, does a decoder trained with 256 gaussians achieve better performance is evaluated with 1024 gaussians?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors --- continued\", \"comment\": \"**Experimental details and strength of results**\\n\\n**The authors only train ViT-B for 400 epochs.**: We agree that to verify the scalability of this approach, we should train bigger models for longer, and we will start running this experiment and should be able to get the results by the end of this discussion period. \\n\\n**figure-ground segmentation results**: We thank the reviewer for pointing Leopart (Ziegler et al.), which is very relevant. We will try to run our models on these baselines, and get results before the end of the discussion period. We also want to add that, the training objectives of Leopart are designed for clustering, and with self-supervised training they achieve great results. However, we would like to point out that our objective is not specifically designed for clustering. \\n\\n**zero-shot edge detection does not benefit from this method.** --- Our claim in this paper is not that we improve upon SOTA on these tasks. Rather, our goal is to demonstrate that by just employing a self-supervised MAE loss, we can get these capabilities to emerge without designing specific datasets or architecture for each of these tasks individually. For edge detection, the model is not trained for this task, and the objective does not enforce edge detection.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank the reviewer for comments on our paper where the \\u201cidea is novel\\u201d, presented in a \\u201cwell written paper and wide range of experiments which are reproducible\\u201d. Here we answer the questions the reviewer is asking, and we are happy to answer more questions or run more experiments for the rebuttal. We will also open source the code and the models.\\n\\n**Comparison with contrastive learning methods.**\\n\\nWe thank the reviewer for the suggestion. We will add more baseline contrastive approaches in Table 2a. We will also explore more tasks. For example, we have already added a depth estimation task, and we show GMAE performs better than DINOv2 in depth estimation. We want to highlight that, even after zero-shot capabilities were introduced, the model does not lose its encoder representation power. \\n\\n\\n**\\u201cWhat qualities does Gaussian splatting have to help MAE? Can Gaussian splatting be replaced with NeRF or other 3D representations?\\u201d**\\n\\nWe were interested in learning from large scale images, and MAE is proven to be the best approach to learn good **semantic** representations. On the other hand, Gaussians are good way to learn **structure/geometry**. 
A combination of these two approaches allowed us to learn good semantic representations (as shown in imagenet, coco experiments), as well as some geometry based on 2.1d representations (layering and edge detections). \\n\\nRegarding NERF vs Gaussians, the more important property we see from Gaussian Splatting is that it's an unstructured sparse representation that automatically allocates more resources on finer details. Neither of the NeRF variants come with this property. Though some other 3D representations such as point cloud or meshes also allow spatially variant sparsity, they are much harder to optimize than 3D Gaussians. So we see Gaussians are a great way to represent not only 3D content, but also (layered) 2D content in the machine learning context.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"I thank the authors for their response. Now my concerns about failure cases and expositions are addressed. But my concerns about the evaluation protocols still remains. I think results on more complicated datasets like COCO-Stuff or ADE20K and with more complex tasks like segmentation could better reveal the effectiveness of GMAE. But as an initial attempt, I also do not want to be too harsh. Given that I am not an expert in this area, I would stay slightly positive towards this submission but will not strongly champion it.\"}", "{\"comment\": \"We thank the reviewer for comments on the \\u201dpaper is original and interesting\\u201d, \\u201cpaper has potential to be significant to the community\\u201d and \\u201cpaper is presented clearly and laid out well\\u201d. Here we answer the questions the reviewer is asking, and we are happy to answer more questions or run more experiments for the rebuttal.\\n\\n**Related works**\\n\\nWe agree that our proposed mid-level representation is but one of many existing options, with many more to come. While we cited several of these related works, we will also make sure to modify the introduction and related works to further highlight this fact in the updated version of the submission. \\n\\nWe agree with the reviewer that there are various options for mid-level representations. We thank the reviewer for pointing out Superpixel Sampling Networks (SSN), and we will discuss them in the related works section. However, there are a few differences we would like to point out. While there is a similarity between gaussians and superpixels as intermediate representations, unlike SSN, our models are trained for self-supervision without any task specific data. Edges and segments are extracted from our model, rather than trained for end-to-end as in Superpixel Sampling Networks.\\n\\nOur approach differs from SSN or Slot attention in several ways. Unlike SSN or Slot attention, we don't utilize any specific loss function, and we simply use L2 reconstruction loss. In terms of architecture, we don't add any specific changes to the model. In addition to this we also show that our method can learn representations that are both good for recognition and detection as well as mid-level zero-shot problems.\\n\\nWe also cited Bar et al. in our main paper, discussed it in the related works section, and compared to it in table 3. We found that our method outperformed Bar et al. in figure ground tasks. In addition to that, bar et al only produce discrete tokens, and it is hard to reason about or manipulate the discrete token, since they don't have any explicit representation. 
\\n\\n\\\"Nonetheless, we have shown that one no longer has to choose between pixels and latent representations for visual modeling.\\\" \\u2014 based on the additional related works discussed above, we see this can be misleading and we will remove this in the paper. \\n\\n\\n**\\u201cWhy is the gaussian representation better?\\u201d**\\n\\nWe believe that, GMAE adds extra capabilities while persevering the representation quality along the MAE axis. We explored new directions such as layering, segmentation, and edge detection. The rebuttal also includes new experimental results on metric depth estimation where GMAE performs better than dinov2 on NYU depth estimation. We also explored the generation quality of GMAE and with new changes to the architectures, we were able to train with 4096 gaussians and more, without memory issues. All these additional capabilities are due to the use of gaussians as an intermediate representation, and it that could open up new capabilities in the future. Below we address a few concerns regarding the GMAE model. \\n\\n**Slower**: GMAE is not significantly slower than MAE. For MAE training on V100 gpus takes on average: 0.6471 seconds with standard deviation: 0.0209 for 10 samples for forward and backward pass. GMAE takes on average: 0.6944 seconds and a standard deviation: 0.0053 for 10 samples including forward, rendering, and backward pass. This adds only a small overhead to training. Recent advances in GSplat optimizations can also be incorporated to result in faster training. \\n\\n**GAN loss**: The MAE github mentions that an addition of a GAN loss leads to better reconstructions. According to off-line communications with MAE authors, the additional adversarial training only improves reconstruction quality (the images become sharper), but it does not improve representation quality. Bar et al, uses discrete tokens from VQ-GAN and is trained with a GAN loss to have sharp reconstructions.\\n\\n**Frequency clustering in depth**: \\u201cIndividual instances of objects are not separated in any way\\u201d \\u2014 this is true, and we do not claim this, as based on the training recipe it is **unlikely** that the model would be able to group based on instances. Our claim was \\u201cseparate objects in the z direction\\u201d, this is layering rather than instance segmentations. We agree with the reviewer that these are not explicit segmentation rather, reasonable reconstructions of different layers.\"}", "{\"comment\": \"I am thankful to the authors for their thorough response. Let me follow up on the discussion and clarify some of the raised concerns:\\n\\n> **The method looks very unnatural and simply combines 2 popular ideas: 3d gaussians and MAEs.**\\n\\nI highly value *unusual* (surprising) ideas, but when describing the \\\"MAE + 3DGS\\\" as unnatural, I did not see this in a positive light. I believe that an \\\"A + B\\\" type of a paper is influential only when the synergy brings a lot of benefits (e.g., MAE is \\\"image transformer + masked pretraining\\\" and showed unexpected simplicity and high quality, ViT is \\\"transformer + image classification\\\" and showed unexpected scalability, etc.). But with the current results, it's unclear why doing it. 
One can combine any \\\"A\\\" (self-supervised learning, continual learning, meta-learning, few-shot learning, multi-task learning, curriculum learning, adversarially robust training, training in a latent space of a VAE) with any \\\"B\\\" (3DGS, NeRF, long videos, megapixel images, protein structures, table-based data structures, etc.) and do a research project with it. Whether this project would be insightful or not depends on the results. For the current submission, maybe I am just short-sighted, but I just do not see why exactly it's worth combining 3DGS and MAE and what this opens for the community.\n\n> **Zero-shot capabilities are not convincing**\n\nI have not found any substantially improved performance in the downstream applications which would urge the community to switch to the proposed setup from other designs. If one trains a diffusion model on ImageNet, they can do segmentation/depth estimation on top of its representations very accurately. For GMPI, I meant this work: Zhao et al., \\\"Generative Multiplane Images: Making a 2D GAN 3D-Aware\\\".\n\n> **I would hope to see it having some 3D capabilities**\n\nIf the goal is to get a 2.1D representation, then why not just using a layered image representation (like an MPI) directly? I think that would give both edges and depth in the same manner.\n\n> **What exactly is the main advantage of the proposed model**\n\nIn the response to this concern, the authors mentioned several potential downstream applications, but it's unclear whether these applications are actually achievable with the proposed design. The authors claim that GMAE can be useful for generative modelling, gaussian initialization, etc., but there is no evidence for that.\n\nAlso, could the authors please provide a pointer for \\\"We also showed a new way to increase the number of gaussians by 16x factor, by learning residual gaussians\\\"? I have not found it in the latest manuscript version.\n\n> **How much slower each training iteration has become compared to MAE**\n\nThis concern has been fully addressed and I don't have it anymore. I am thankful for the clarification.\"}", "{\"summary\": \"The paper proposes to replace the decoder module of MAE with a gaussian splatting decoder: instead of decoding query pixel patches, it proposes to predict the parameters of 3D gaussian splats, which are then rendered onto a static camera position. The authors provide ablations on the number of gaussians used. They show several zero-shot capabilities which arise from such a design: background/foreground segmentation and edge detection. The paper is well written and has high quality visualizations. They also analyze some properties of how the gaussians get distributed:\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"I guess the main strength is some zero-shot capabilities, like foreground/background separation and edge detection.\", \"Despite the unconventional design, it does not lead to the loss of the main quality of self-supervised methods:\", \"The overall idea is quite unusual which, I believe, is a good quality of a scientific paper.\", \"Writing is very clear and the presentation quality is high.\"], \"weaknesses\": [\"The method looks very unnatural and simply combines 2 popular ideas: 3d gaussians and MAEs. There are no particular advantages or insights in combining them. 
I feel the benefits are marginal and not worth the complications of the design.\", \"Zero-shot capabilities are not convincing: there are easier ways to obtain them with a higher quality (e.g., generative methods or generative multi-plane images with similar layered representations).\", \"The main advantage I would hope to see is having some 3D capabilities, but they are lost due to rendering from a static position.\"], \"questions\": [\"What exactly is the main advantage of the proposed model? For which use-case would one realistically choose to use it?\", \"How much slower each training iteration has become compared to MAE?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Bo62NeU6VF
Backtracking Improves Generation Safety
[ "Yiming Zhang", "Jianfeng Chi", "Hailey Nguyen", "Kartikeya Upasani", "Daniel M. Bikel", "Jason E Weston", "Eric Michael Smith" ]
Text generation has a fundamental limitation almost by definition: there is no taking back tokens that have been generated, even when they are clearly problematic. In the context of language model safety, when a partial unsafe generation is produced, language models by their nature tend to happily keep on generating similarly unsafe additional text. This is in fact how safety alignment of frontier models gets circumvented in the wild, despite great efforts in improving their safety. Deviating from the paradigm of approaching safety alignment as prevention (decreasing the probability of harmful responses), we propose backtracking, a technique that allows language models to "undo" and recover from their own unsafe generation through the introduction of a special [RESET] token. Our method can be incorporated into either SFT or DPO training to optimize helpfulness and harmlessness. We show that models trained to backtrack are consistently safer than baseline models: backtracking Llama-3-8B is four times more safe than the baseline model (6.1\% $\to$ 1.5\%) in our evaluations without regression in helpfulness. Our method additionally provides protection against four adversarial attacks including an adaptive attack, despite not being trained to do so.
[ "AI safety", "Generation algorithm", "Backtracking" ]
Accept (Oral)
https://openreview.net/pdf?id=Bo62NeU6VF
https://openreview.net/forum?id=Bo62NeU6VF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y5YG1dCEVs", "uOAxtHEcgc", "pWub0wL1mV", "jIheWbNgbQ", "iCx6NaRHwF", "e7sgOEIdjL", "Vd7nISsv2f", "NzlOoarO3w", "Kx9F0PHbw4", "Hk79XbuCpA", "FE60igTMBW", "DviSq04qWI", "9MrGz5IqJ8", "2ZiWvAAZH1", "11WqxjDG7x", "0hv75BLxHA" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732086758565, 1730695459051, 1732633116227, 1732087297770, 1732087042301, 1732566141295, 1732641271394, 1732185611778, 1730688428049, 1734937007992, 1730097841460, 1737523494583, 1730492331956, 1732087501773, 1732630809134, 1732602305538 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2265/Authors" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_Gb5k" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_WShb" ], [ "ICLR.cc/2025/Conference/Submission2265/Authors" ], [ "ICLR.cc/2025/Conference/Submission2265/Authors" ], [ "ICLR.cc/2025/Conference/Submission2265/Area_Chair_eJE3" ], [ "ICLR.cc/2025/Conference/Submission2265/Authors" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_KgTS" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_WShb" ], [ "ICLR.cc/2025/Conference/Submission2265/Area_Chair_eJE3" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_KgTS" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_4GF6" ], [ "ICLR.cc/2025/Conference/Submission2265/Authors" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_4GF6" ], [ "ICLR.cc/2025/Conference/Submission2265/Reviewer_Gb5k" ] ], "structured_content_str": [ "{\"comment\": \"We appreciate your positive assessment of our work.\\n\\n> Compatibility with a streaming API\\n\\nWe agree that implementing backtracking in a streaming environment could be inconvenient for users visually, since backtracking \\u201cremoves\\u201d part of the generation history upon backtracking. However, we would like to note that in the context of safety, it is always better to undo an unsafe generation even if it may be visually annoying. Also, backtracking is reasonably precise: it usually does not kick in unless the generation is unsafe, so a backtracking model would behave similarly to a non-backtracking model in non-safety-related contexts.\\n\\n> Risks in reprogrammable backtracking\\n\\nIn our work, we do not explore malicious system prompts and assume the model takes on a default \\u201chelpful\\u201d and \\u201charmless\\u201d role, which would be the case for most black-box LLMs (e.g., ChatGPT, Gemini). We agree that it may be possible to inhibit backtracking through malicious system prompts, but even with backtracking inactive, the model should be in principle at least as safe as the non-backtracking baseline. Safety under model adaptation (e.g., fine-tuning and system prompting) is an open research question out-of-scope for this work.\\n\\n\\n> Do we keep the undone history in the dialogue and KV cache or do we also undo them altogether and only keep the refined context?\\n\\nWe keep the history in-context. The reason is that we suspect that the presence of the \\u201cfailed generation\\u201d actually provides a signal for the model to self-correct and generate more safely, and [1] suggests that this self-correcting approach could be effective. 
Another issue with dropping context is that the model would repeat its own unsafe generation with high probability. In the extreme case, if greedy decoding is used, the model is guaranteed to be stuck in a backtracking-regeneration loop after it backtracks once.\\n\\n[[1] Generating Sequences by Learning to Self-Correct](https://arxiv.org/pdf/2211.00053)\"}", "{\"summary\": \"The paper proposes a novel technique called Backtracking, to enhance the safety of large language models by allowing them to undo unsafe responses. Traditional safety methods focus on prevention by reducing the likelihood of harmful outputs. However, backtracking enables models to recognize and \\\"reset\\\" unsafe content mid-generation using a designated \\\"[RESET]\\\" token. This approach, which can be incorporated into both Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), improves safety without reducing the model's helpfulness. Evaluation shows that backtracking significantly reduces unsafe output rates and defends against various adversarial attacks, including an adaptive one designed to counteract the reset function. The results suggest backtracking is a promising addition to existing alignment techniques for safer language model deployments\\u200b\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"Reflection-based Defense Yields More Flexibility: The proposed method has significant flexibility compared to other methods. With proper prompting, unlike traditional filter-based adversarial attack defense, it is easy to use a safe backtracking model with updatable preference and knowledge, depending on the user's demand.\", \"Straight Forward Intuition and Easy Maintenence: The internal mechanism of the proposed backtracking defense is comparatively very transparent. When unexpected defensive actions are taken by the models, it is very easy to convert such error log into new DPO data.\"], \"weaknesses\": [\"Discrepancy between the Streaming User Interface and Model Behavior: When undoing some of the generation outputs, the system should delete the undone harmful trigger prompts and update it with the model's remedy context afterwards. This could cause some unexpected discrepancies and difficulties in general designs of the pipeline.\", \"Risks in Reprogrammable Backtracking: Since we now rely on the model's own reflection to backtrack the harmful trigger prompts, it is possible, if the model itself is reprogrammed in system prompt with malicious intents, the backtracking's flexibility can be abused to jailbreak the defense against attacks.\"], \"questions\": \"Do we keep the undone history in the dialogue and KV cache or we also undo them altogether and only keep the refined context?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response! You've answered my questions and I've raised my score.\"}", "{\"comment\": \"We appreciate your positive feedback on our work. We answer some of your questions below.\\n\\n> What if we skip backtracking SFT training?\\n\\nEmpirically, skipping backtracking SFT training breaks the model. The technical reason is that DPO imposes a \\u201czero-forcing\\u201d reverse-KL regularization between the optimized policy and the reference (SFT) policy. 
Specifically, if we didn't use backtracking SFT to \\u201cwarm-up\\u201d the model, the reference policy would assign practically zero probability to [RESET], and the resulting KL-penalty would be massive if the DPO policy assigns non-zero probability to [RESET]. We also note that it is standard practice to perform SFT before DPO [1].\\n\\n> The effect of scale on safety violation rates with backtracking, and difference in backtracking effectiveness on Gemma vs. Llama\\n\\nIntuitively, we expect larger models to be able to learn to backtrack more precisely and self-correct more reliably, a trend we do observe between Gemma\\u20132-2B and Llama-3-8B (Table 1), where the larger Llama model seems to gain more from backtracking. While we don\\u2019t have the resources to scale experiments to larger models, we expect the backtracking technique to be useful particularly for large models capable of self-critique, as even the state-of-the-art models remain far from being perfectly safe.\\n\\n[1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model\"}", "{\"comment\": \"Thank you for your detailed read of our paper and your insightful feedback. We try to address your questions below.\\n\\n> There is no analysis of no analysis of how the proposed backtracking method compares/complements with existing methods for reducing unsafe generations.\\n\\nThe primary method for reducing unsafe generations is SFT or RL fine-tuning to reduce the probability of unsafe next-token generation. We compare backtracking with both SFT and DPO safety-trained variants in our work and find significant safety gains. However, there are other\\u2014less standard\\u2014methods of improving model safety such as [1] and [2]. We don't compare with these methods because backtracking is not mutually exclusive with other techniques such as [1, 2], and we expect complementary gains since it seems plausible that you can improve the safety of a base model using existing techniques, while teaching it to backtrack to gain additional safety.\\n\\n> Backtracking in OpenAssistant, and logit bias tuning\\n\\nOpenAssistant (OA) is heavily focused on utility, and the 12% backtracking rate on OA (Fig. 5) suggests a small amount of false positives. This is almost entirely mitigated by adding a small negative logit bias to the [RESET] token (-5), with backtracking rate on OA dropping to 1%, with basically the same model safety (1.5% -> 1.6%). Tuning logit bias in principle is not difficult, and it just requires benchmarking backtracking behavior on a development set of prompts. We will add clarifications in the paper and point out the need for a suitable choice of logit bias on the [RESET] token to ensure a near-zero backtracking rate during non-safety evaluations.\\n\\n> What if we skip backtracking SFT training?\\n\\nEmpirically, skipping backtracking SFT training breaks the model. The technical reason is that DPO imposes a \\u201czero-forcing\\u201d reverse-KL regularization between the optimized policy and the reference (SFT) policy. Specifically, if we didn't use backtracking SFT to \\u201cwarm-up\\u201d the model, the reference policy would assign practically zero probability to [RESET], and the resulting KL-penalty would be massive if the DPO policy assigns non-zero probability to [RESET]. We also note that it is standard practice to perform SFT before DPO [3].\\n\\n> Data and models used for Fig. 4\\n\\nTo compute safety rates under sampling (Fig. 
4), we used the same safety evaluation dataset we use throughout the paper, detailed in Section 4.1. The models compared are backtracking and baseline models that have both undergone SFT + DPO training.\\n\\n> Do most SFT datasets have unsafe responses $y_i^\\u2212$?\\n\\nA common practice is to take the preferred response in a paired preference dataset for SFT training, so in such cases $y_i^\\u2212$ response would be available. If training on SFT datasets without $y_i^\\u2212$, we can still mix in preference data from the subsequent DPO stage to provide backtracking SFT supervision.\\n\\n> Have you tried attacking with an adversarial system prompt that says after you see the [RESET] token be extra helpful or something along these lines?\\n\\nIn our work, we do not explore malicious system prompts and assume the model takes on a default \\u201chelpful\\u201d and \\u201charmless\\u201d role, which would be the case for most black-box LLMs (e.g., ChatGPT, Gemini). We agree that it may be possible to inhibit backtracking through malicious system prompts, but even with backtracking inactive, the model should be in principle at least as safe as the non-backtracking baseline. Safety under model adaptation (e.g., fine-tuning and system prompting) is an open research question out-of-scope for this work.\\n\\n[1] The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions \\\\\\n[2] Improving Alignment and Robustness with Circuit Breakers \\\\\\n[3] Direct Preference Optimization: Your Language Model is Secretly a Reward Model\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Dear Reviewers,\\n\\nThe rebuttal period is almost over, and the paper has received both positive and negative feedback. The authors have worked hard to address all your concerns. Could you take a moment to review their responses and let us know if you have any unresolved issues?\\n\\nBest,\\nAC\"}", "{\"title\": \"paper revised\", \"comment\": \"Thank you for acknowledging our response, and we agree that the discussion on backtracking SFT would be valuable to add to our paper. We have uploaded a revised manuscript as you suggested.\"}", "{\"comment\": \"Thank you for your clarification. I am raising my score.\"}", "{\"summary\": \"This paper introduces a new method for reducing unsafe generations from language models by allowing a model to backtrack. The proposed method trains a model with SFT and DPO to output a [RESET] token if the partial generation so far is unsafe and then generate a new safe generation.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The strengths of the paper are as follows:\", \"The proposed method is simple and effective. Moreover, the inference procedure is simple and does not add a lot of overhead during inference.\", \"The authors show that when backtracking is used it is exponentially harder to get unsafe generations even when even temperature sampling is used to generate and pick the worst of $k$ responses.\", \"Backtracking improves robustness against a few adversarial attacks without any special adversarial training or other modifications.\"], \"weaknesses\": [\"Here are a few weaknesses:\", \"There is no analysis of no analysis of how the proposed backtracking method compares/complements with existing methods for reducing unsafe generations. Is the proposed method better? 
Are there complementary benefits?\", \"Appendix B2 shows that on the OpenAssistant dataset the [RESET] token is generated around 12-13% of the time with no logit bias. In all these cases are there unsafe generations? If so, when the logit bias is set to -5.0, then the [RESET] token is only generated around 2% of time, so is there a lot of uncaught unsafe generations? In general, is it hard to tune the logit bias hyperparameter?\"], \"questions\": [\"Here are some questions:\", \"The paper state that \\\"the models virtually never backtrack during non-safety evaluations.\\\" Can you quantify this?\", \"Have you tried using backtracking with just DPO with not SFT?\", \"To generate the graphs in Figure 4, what prompts were used? Are baseline models in figure 4 trained with DPO and SFT specifically for safety?\", \"Do most SFT datasets have unsafe response $y_i^\\u2212$?\", \"Have you tried attacking with an adversarial system prompt that says after you see the [RESET] token be extra helpful or something along these lines?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper presents \\\"backtracking,\\\" a novel method to improve language model safety by enabling recovery from unsafe partial generations using a [RESET] token. Unlike traditional prevention-based approaches, backtracking trains models to recognize and reset unsafe outputs mid-generation, integrating into SFT and DPO without compromising helpfulness. Evaluations show it reduces unsafe outputs by a factor of four across adversarial datasets. Reviewers appreciate its simplicity, practicality, and minimal overhead, highlighting strong safety improvements and comprehensive experiments with detailed ablations. While no major weaknesses were identified, some noted potential vulnerabilities to system-prompt-based attacks, which the authors acknowledge as an open question. Overall, reviewers see the method as a significant and effective contribution to AI safety, with the rebuttal addressing key concerns. I would recommend strong acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal period clarified key concerns raised by reviewers, including questions about false positives, the necessity of backtracking SFT, and adversarial vulnerabilities. The authors\\u2019 responses were thorough, leading reviewers to keep their high scores.\"}", "{\"summary\": \"This paper proposes a novel method, backtracking, for purifying LLM output to make it safe and understandable. In backtracking, the LLM generates an initial response and outputs a special token [RESET] if the initial response is harmful. 
An external moderator model, Llama Guard 2, is employed to help generate training data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method, backtracking, is a novel way to help LLM generate safer responses while keeping the generation quality.\", \"Backtracking does not introduce heavy inference expense in comparison to previous resampling-based methods, which highlights a new direction for safer generation.\", \"Adaptive adversarial attacks are also discussed.\", \"The paper is well-written and easy to follow.\"], \"weaknesses\": [\"There is no discussion or experiments on how likely backtracking LLM will overly reject safe queries.\", \"The performance of backtracking against adversarial attacks is not always satisfying, as shown in Table 2.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"summary\": \"The authors propose to train a model to predict when to stop generating and restart its generation by generating a \\\"reset\\\" token, and show this method reduces safety violations, even under adversarial attacks. The authors also present an adaptive version of GCG to test the limits of the backtracking technique.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Method is simple yet effective. The authors also included ablations on each step from baseline SFT to full backtracking SFT + DPO. The evaluation was done on multiple datasets and 2 SOTA base models; the authors also included detailed analyses of results and example generations. I appreciated that the details of the training setup and hyperparameter optimization were included in the appendix. Finally, the tradeoffs and limitations of assuming a single reset per generation was discussed.\", \"weaknesses\": \"No major weaknesses, although given that there is a large difference in relative drop in violation rates between the 2 models tested, it would be useful to show how this method affects larger model sizes and other base models beyond Gemma and Llama (e.g., Qwen, Mistral).\", \"questions\": [\"Table 1: What is the delta of performing backtracking SFT vs baseline SFT when backtracking DPO is performed in both cases? Is backtracking SFT necessary to achieve the backtracking DPO results reported?\", \"What is the effect of scale on safety violation rates with backtracking?\", \"There appears to be a rather large difference in the violation rates between the two models (Gemma/Llama) for backtrack SFT + DPO, even more so if we look the relative drop instead of absolute numbers. How can this be explained?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your positive feedback. We address some of your concerns below.\\n\\n> No discussion or experiments on how likely backtracking LLM will overly reject safe queries\\n\\nYou may have missed our results in Appendix B.2 (referred to in the main paper in Section 4.2, L318), where we evaluated how often the Llama model backtracks on a set of safe queries (validation set of OpenAssistant). 
We find that with a suitable choice of logit bias on the [RESET] token, the model backtracks for only 1% of safe queries (12% without tuning the logit bias).\\n\\n> The performance of backtracking against adversarial attacks is not always satisfying, as shown in Table 2.\\n\\nWe acknowledge that neither backtracking, nor any other method, is not a perfect defense against adversarial attacks, but we want to point out that backtracking is a technique meant to improve model safety *generally*, and the additional robustness against strong adversarial attacks (e.g., GCG and AutoDAN) without special training suggests that backtracking could become a component in a truly robust LLM system.\\n\\nDespite much progress, safeguarding LLMs against adversarial attacks remains an open problem, and is not an issue that can be addressed with a single technique. We note backtracking performance is strongly dependent on how it is trained: our models were not trained against adversarial attacks at all and still helped on such attacks. We expect that backtracking training in the presence of adversarial attacks (i.e., combining it with the idea of adversarial training [1]) would greatly improve robustness in the adversarial setting. However, we believe the results in the paper alone already demonstrate the value of our technique across a number of settings, and leave such approaches to future work.\\n\\n[1] Towards Deep Learning Models Resistant to Adversarial Attacks\"}", "{\"comment\": \"Thanks for the response, it would be good to add the observations from the first response to the paper.\"}", "{\"comment\": \"I have read the response and would like to keep my scores.\"}" ] }
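The exchanges in the record above describe the backtracking decoder only in prose: the model emits a [RESET] token mid-generation, the failed draft is kept in-context rather than dropped, and a negative logit bias on [RESET] can suppress spurious resets. A minimal sketch of how such a sampling loop could be wired up is given below; it assumes a Hugging Face-style causal LM whose tokenizer already contains a [RESET] token, uses greedy decoding, and allows at most one reset. The checkpoint path and the helper function are placeholders for illustration and are not code from the paper or its released artifacts.

```python
# Minimal sketch, assuming a fine-tuned checkpoint whose tokenizer already has a
# "[RESET]" token; greedy decoding; at most one reset per response. The model path
# below is a placeholder, not a released artifact.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/backtracking-finetuned-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
RESET_ID = tokenizer.convert_tokens_to_ids("[RESET]")

def generate_with_backtracking(prompt: str,
                               max_new_tokens: int = 256,
                               reset_logit_bias: float = 0.0) -> str:
    """Greedy decoding that hides the draft produced before a [RESET] from the caller."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    visible = []        # token ids of the draft the caller will actually see
    has_reset = False   # the discussion above assumes a single reset per response
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(input_ids=ids).logits[0, -1]
            logits[RESET_ID] += reset_logit_bias      # e.g. -5.0 to suppress spurious resets
            next_id = int(torch.argmax(logits))
            ids = torch.cat([ids, torch.tensor([[next_id]])], dim=-1)
            if next_id == RESET_ID:
                if has_reset:
                    break             # a second reset is out of scope for this sketch
                visible = []          # undo the unsafe draft for the caller...
                has_reset = True      # ...but keep it in-context, as the authors describe
                continue
            if next_id == tokenizer.eos_token_id:
                break
            visible.append(next_id)
    return tokenizer.decode(visible, skip_special_tokens=True)
```

In this loop the [RESET] branch only changes what is returned to the caller; the model still conditions on its own failed draft, which is the behaviour the authors argue helps the regeneration stay safe (and which avoids the greedy-decoding loop they mention).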
Bo5eKnJPML
A Reasoning-Based Approach to Cryptic Crossword Clue Solving
[ "Martin Andrews", "Sam Witteveen" ]
Cryptic crossword clues are challenging language tasks for which new test sets are released daily by major newspapers on a global basis. Each cryptic clue contains both the definition of the answer to be placed in the crossword grid (in common with regular crosswords), and ‘wordplay’ that *proves* that the answer is correct (i.e. a human solver can be confident that an answer is correct without needing crossing words as confirmation). This work describes an LLM-based reasoning system built from open-licensed components that solves cryptic clues by (i) hypothesising answers; (ii) proposing wordplay explanations; and (iii) using a verifier system that operates on codified reasoning steps. Overall, this system establishes a new state-of-the-art performance on the challenging Cryptonite dataset of clues from The Times and The Telegraph newspapers in the UK. Because each proved solution is expressed in Python, interpretable wordplay reasoning for proven answers is available for inspection.
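The abstract's statement that each proved solution "is expressed in Python" is made concrete in the reviews and author responses below, which mention assert-style proofs and DSL helpers such as is_synonym(), and which walk through the clue "Cut up over politician on the French case (7)" → EXAMPLE. The snippet below is only a hand-written illustration of that flavour of proof: the is_synonym() stub, its tiny thesaurus, and the layout are invented for this sketch and are not the paper's actual DSL or generated output.

```python
# Hand-written illustration only: is_synonym() and the tiny thesaurus are invented
# for this sketch and stand in for the paper's (much richer) verification DSL.
def is_synonym(a: str, b: str) -> bool:
    thesaurus = {("cut", "axe"), ("politician", "mp"),
                 ("the french", "le"), ("case", "example")}
    return a.lower() == b.lower() or (a.lower(), b.lower()) in thesaurus

def proof(answer: str = "EXAMPLE") -> None:
    # Clue: "Cut up over politician on the French case (7)" -- see the authors'
    # walk-through of this clue in their responses further down the record.
    assert is_synonym("Cut", "AXE")            # "Cut" gives AXE
    assert "AXE"[::-1] == "EXA"                # "up" reverses it (a down clue)
    assert is_synonym("politician", "MP")      # "politician" gives MP
    assert is_synonym("the French", "LE")      # "the French" gives LE
    assert "EXA" + "MP" + "LE" == answer       # wordplay assembles the answer
    assert is_synonym("case", answer)          # "case" is the definition
    assert len(answer) == 7                    # matches the (7) enumeration

proof()  # raises AssertionError if any reasoning step fails
```

A verifier of this kind accepts an (answer, wordplay) hypothesis exactly when every assertion passes; as the discussion below notes, the synonym checks are the weakest links, since "case" is only a loose synonym of EXAMPLE.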
[ "NLP", "Cryptic Crosswords", "Reasoning", "Proof/Verification" ]
Reject
https://openreview.net/pdf?id=Bo5eKnJPML
https://openreview.net/forum?id=Bo5eKnJPML
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xwYlo66E06", "tAvJUactUF", "sGcCI6gMU5", "qCDw7Bhp07", "frtdb7UQkC", "dE9rbDVrJx", "d8ZR9NST94", "cepAy2FHqa", "bqrVjWXE2B", "bZ19FA8qet", "WrpRAJ4DRZ", "PUUZBdF6JK", "OGH8xgPOEH", "K9qQxGfb2t", "JLIXI8FEyn", "HNbWcPJgEm", "E1e8iLhyMe", "DNPjTovQa6", "6erQ31DwzR", "4bujY3kVKw" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review" ], "note_created": [ 1733312639800, 1732564275419, 1733168693990, 1730689440846, 1732620296779, 1729715192298, 1732302722989, 1733171821433, 1730647971547, 1732303684422, 1730497511532, 1732566723766, 1732304390764, 1732303892464, 1732655022455, 1732303077523, 1732792739502, 1732699838952, 1737523873575, 1734766867054 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Reviewer_2vAu" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Reviewer_HU2t" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Reviewer_2vAu" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Reviewer_5DQK" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Reviewer_PnNh" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Reviewer_HU2t" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Submission7903/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7903/Area_Chair_ATvD" ] ], "structured_content_str": [ "{\"comment\": \"We greatly appreciate the reviewers' time and effort in evaluating our paper, including their careful review of our revisions. 
Your feedback has been invaluable in strengthening our work.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you authors for your detailed response and clarifying my questions.\"}", "{\"comment\": \"In our comments added directly above, and also in the section for all reviewers\\n(including the updated version of the paper that we uploaded last week),\", \"we_addressed_the_following_weaknesses_and_questions_that_you_had\": \"* We explained the structure of cryptic problems more clearly, adding diagrams of 4 of the example\\nproblems in (new) Figure 2 (which shows how the wordplay elements in each clue interact to form the final answer)\\n + We also added a reference to Appendix A.1, where there is a longer background description of cryptics with examples of many clue elements \\n* We added motivating language to the paper, which (hopefully) sets the context as being one of looking \\nat reasoning problems more generally (such as mathematics and its related issues with hypothesisation and verification)\\n + Our choice of cryptic crosswords as a reasoning test-bed gives us a different mix of problems to surmount\\n (such as the flexibility of the language used, which leads to verification being a weaker signal). But\\n the fact that Cryptics are less familiar (than (say) maths puzzles or programming challenges) doesn't \\n mean that they don't warrant being taken seriously\\n* Finally, in the updated paper, we added some information about the construction and sensitivity of the ICL prompts, \\nalong the lines given above in our previous reply.\\n\\nIs there anything that you need from us to answer any remaining questions that you have? We believe that our \\nnew version of the paper substantially tackles all the issues that you identified. \\n\\nWe sincerely hope that you can further support our work and would appreciate it if you could let us know whether \\nyou are satisfied with our responses to your previous questions.\"}", "{\"summary\": \"The paper proposes a reasoning-based system for solving cryptic crossword clues using fine-tuned LLMs and a Python-based verification process. The model generates candidate answers and wordplay suggestions, formulates these into Python code, and verifies correctness via assertions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**Originality**\\nThe paper offers a unique approach to solving cryptic crossword clues. Like math proofs, the authors combined LLMs for candidate generation, wordplay suggestions, and formalized Python-based verifiers.\\n\\n**Quality**\\nThe paper details the fine-tuning of models for clue and wordplay generation and the creation of a domain-specific verifier, including formalization in Python. The experiments demonstrate a clear improvement over previous state-of-the-art results on the Cryptonite dataset, significantly outperforming prior baselines.\\n\\n**Clarity**\\nThe paper includes figures containing examples that help in understanding.\\n\\n**Significance**\\nThe paper shows an improvement in the cryptic crossword domain.\", \"weaknesses\": \"1. The mention of a 23.5% accuracy on the Cryptonite dataset using GPT-4-turbo in the paper [1] suggests that a more rigorous baseline is possible for this study. Using the same prompting strategy on the Gemini-Flash that is used in the paper could provide better insight into the improvement of the proposed method.\\n\\n2. 
The paper's reliance on the Wordplay dataset, gathered from newspapers (The Times, etc.), raises the possibility of data leakage, as The Times also contributes to the Cryptonite test set. If fine-tuning data includes patterns or clues from the test set, results may be artificially inflated.\\n\\n\\n[1] Saha, Soumadeep, et al. \\\"Language Models are Crossword Solvers.\\\" arXiv preprint arXiv:2406.09043 (2024).\", \"questions\": \"1. Partial correctness metrics, as reported in [1], can provide a more nuanced understanding of model success on cryptic crosswords, where the model might have answered some rows/columns but not the whole crossword.\\n\\n2. Some minor corrections on line 200 (\\\"from\\\" is repeated twice)\\n\\n[1] Saha, Soumadeep, et al. \\\"Language Models are Crossword Solvers.\\\" arXiv preprint arXiv:2406.09043 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Exploiting Partially Filled Grids\\n\\nWe have run an analysis to address the Exploiting Partially Filled Grids section of Saha et al (2024), \\nthe results are as follows (note that these are on the Cryptonite splits, \\nrather than their 'Init', though the percentage differences are large enough to draw some initial conclusions):\\n\\n| model | Hint (%) | samples | val-overall | val-quick | val-hard | test-overall | test-quick | test-hard |\\n| :---------- | ----------: | ----------: | ----------: | ----------: | ----------: | ----------: | ----------: | ----------: |\\n| Gemini-Flash | 25% | 200 | 37.0% | 38.5% | 36.9% | **45.5%** | 66.7% | 43.8% |\\n| Gemma2-9B-it | 25% | 200 | 37.5% | 38.5% | 37.4% | 44.0% | 66.7% | 42.2% |\\n||\\n| FastText k=0 NN | 25% | 200 | 15.5% | 15.4% | 15.5% | 21.0% | 33.3% | 20.0% |\\n| FastText k=0 NN | 50% | 200 | 52.5% | 38.5% | 53.5% | **62.0%** | 46.7% | 63.2% |\\n| FastText k=0 NN | 70% | 200 | 79.0% | 61.5% | 80.2% | **81.0%** | 100.0% | 79.5% |\\n\\nThe first two rows show the effect of simply filtering the output of our Gemma2-9B fine-tuned candidate answer \\nproposal model, based on a random letter pattern (using the same formulation as Saha et al (2024)) and then\\nusing the rest of our pipeline. Here, our paper's approach outpaces their reported GPT-4T results, but is itself\\nlimited by our first stage Gemma2 model's 'Top-20' candidate answers\\nonly containing the correct answer only around 45% of the time \\n(as astutely observed by another reviewer). Since the existing Gemma2 has not been fine-tuned on the filtering \\ntask, the other top-20 results are essentially restricting our systems performance, \\neven at this low level of partial fill.\\n\\nTo gain an insight into higher levels of partial fill, \\nand perhaps bolster our earlier comments on the relevance of these statistics,\\nthe 'FastText k=0 NN' lines correspond to running a nearest neighbour (i.e. k=0) search.\\nFor each question, we search for the whole clue (without any special processing)\\nover the FastText (Bojanowski et al (2016)) embeddings of a crossword word list \\n(The UK Advanced Cryptics Dictionary - Beresford (2000)) which\\ncontains 250,378 valid words that might be found in crosswords \\n(Note that the wordlist is not exhaustive, \\nsince 7.0% of the gold answers do not appear in the list). We then\\nuse the closest found word as the 'FastText k=0 NN' model answer. 
While the 25% line is lower-performing \\nthan the models considered in our work, the 50% (realistic for solvers) and 70% lines show \\nthat even this simplistic nearest neighbour search outperforms all the models of Saha et al (2024). \\n\\nThe kNN systematic approach would (naturally) form the baseline for our models in the partially filled grid case, \\nand thus opens up an interesting line of attack for future work \\n(although, as mentioned, we prefer to focus on the stand-alone clue problem, \\nsince the reasoning aspect there is most applicable across other fields, such as mathematical theorem proving, \\nwhere we see no clear analogy with partially filled answers).\"}", "{\"summary\": \"The paper introduces a novel reasoning approach to generate answers for cryptic crossword clues. It is a multistage approach which does the following:\\n1. First, a fine-tuned LLM suggests possible answers for the entire clue.\\n2. Second, for each candidate answer, another LLM generates possible definition/wordplay suggestions.\\n3. Third, for each suggestion, another few-shot LLM generates python code which wraps their domain-specific language to prove the wordplay. If there's an error in the verifier, it is fed back to the LML in a loop up to K times to improve/correct the proof. If there is no error, the answer is used as the final prediction. \\n\\nThis system improves overall performance on the Cryptonite dataset from 15.7% to 32.5%, and the authors give some analysis on the benefits and shortcomings of their approach.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The authors introduce a novel approach to a uncommonly studied task, which has interest to the NLP/reasoning communities for its difficulty and unique reasoning requirements, and to the cryptic crossword hobbyist community. They provide motivation for why this task is relevant and should be worked on.\", \"Their approach is effective, more than doubling the previous SOTA performance on the same task. Additionally, it offers several \\\"obvious\\\" avenues for improvement as it breaks the task down into more easily tunable parts.\", \"Finally, the paper is very clearly written and communicates their relatively unknown task and novel method effectively.\"], \"weaknesses\": [\"A full ablation study would make clear the shortcomings of each part of the pipeline. While the authors demonstrate the effectiveness of just using the top candidate answer, it would be helpful to know the benefit of adding the definition/wordplay model (combined with e.g., a simple LLM-based reranker) compared with the full pipeline.\"], \"questions\": \"- When you say `The Wordplay dataset follows the train, validation, and test split defined by\\n Cryptonite`, I assume you mean there is no contamination between the two, i.e., no clue in the\\n train dataset for Wordplay will appear in the Cryptonite val/test splits. Is this correct or did\\n you have mean something different?\\n- The performance of your pipeline is of course capped by the recall of the candidate answer\\n suggestion model, which seems to be about $40$% at $N=20$. This means that the rest of your pipeline\\n can correctly choose about $32.5 / 40 \\\\approx 81$% of the results, which seems promising. Have you\\n considered just making N as large as possible? 
It might be an easy (albeit inefficient) way to\\n improve overall performance by a bit, assuming the rest of the pipeline is robust to many wrong\\n candidates.\\n- Also, I'd be curious to know how much the 'verification' system helps. While your system is\\n certainly effective, I wonder how much the verification system improves over a \\\"dumb\\\" reranker\\n (e.g., scoring the probability of each (answer, definition/wordplay) pair using the LLM and\\n choosing the best).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Re: Weaknesses\\n\\n### '23.5% GPT-4 accuracy mentioned in Saha (2024)'\\n\\nThe result reported in preprint [1] for GPT-4-T with 5-shot prompting on \\nCryptonite was indeed a surprisingly strong 23.50\\\\% \\n(similarly for their 18.70\\\\% score on the 'Init' version),\\nand we will change ~L174 to reflect this \\n(now that it is clear what '5-shot' means, from their Appendix).\\nWe can also include that score 'as reported' in the comparison Table 1 - \\nplacing it alongside the Gemini-Pro 1.0 line.\\n\\nHowever, also in [1] (Table 3), the authors report that giving \\nGPT-4-T instructions to use a Chain-of-Thought approach (with 3-way self-consistency) only \\nresulted in an uplift of +2.15% in the final accuracy on 'Init'.\\nThis is a little puzzling, since it indicates that\\nit is likely that GPT-4-T's performance on the test set does not correspond to the kind of deductive process \\nthat might be reasonably expected. Considering the typical accuracy uplifts when CoT is used for GSM8K, for instance, \\nmodels would usually benefit much more from CoT\\n(PaLM improved from 18% to 57% on GSM8K in Wei et al, 2022). Unfortunately, \\nwe don't have any visibility into the training set given to GPT-4-T, \\nwhich apparently stands head-and-shoulders above the other commercial LLMs for this task.\\n\\nAs shown in our Table 1, our method does not require\\na commercial/proprietary LLM to work effectively :\\nGemini-Flash itself does not add much to the final performance,\\nsince it can be replaced with (non-finetuned) Gemma2 9B-it in the Formalisation/coding part \\n(to produce a fully local-LM solution that scores 29.0% on the Cryptonite test set)\\n\\n**EDIT** : We have run the additional baselines as requested - please see the \\\"Up-to-Date Baselines\\\" comment above.\\n\\n### Reliance on Wordplay Dataset and potential data leakage\\nThe Wordplay Dataset splits are specifically chosen to match \\nthe train/validation/test splitting mechanism of the Cryptonite dataset. 
Thus, \\nany clue leading to the answer \\\"EXAMPLE\\\" would only be in one split (the training set in this case) \\nof both the Wordplay and Cryptonite datasets.\\n\\n\\n## Re: Questions\\n### Partial Correctness Metrics\\nThe partial correctness metrics reported in [1] (Table 1) simulate solving the entire grid.\\nThe 'known letters' scores shown in [1] appear to have been obtained \\nby randomly revealing letters within the answers\\n(which is an approximation, since the Cryptonite dataset itself doesn't \\ninclude information about which letters in an answer are given by cross-words).\\n\\nWe agree that this is an interesting dimension to examine for models that are trained to answer the clue directly.\\nHowever once the models are part of a system, \\nit would only shed light on the first stage in our model's processing - since that is the source of our candidate answers.\\nA natural step here for a *system* would be to iteratively gather the first 20 candidate\\nanswers that meet the 'revealed letters' criteria, \\nwhich is tantamount to using an online crossword solving aid (frowned-upon by human solvers).\\n\\nFWIW, the inclusion of the '70%' column in [1] Table 1 makes no sense \\nin the context of Cryptonite/Init, since the 'Ximenean rules' for crossword grid\\nconstruction (Macnutt, 1966) mean that grids that appear in papers don't\\nhave that density of checking letters.\\n\\n**EDIT** : We have performed some analysis addressing the Partial Correctness Metrics question in an overall Author comment (above), since it is also somewhat related to a point made by another reviewer.\\n\\n### Refs: \\n* Wei et al, 2022: \\\"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models\\\", https://arxiv.org/abs/2201.11903\\n* Macnutt, 1966. \\\"Ximenes on the art of the crossword\\\", Published by Methuen, ASIN: B0000CN2M9.\\n\\n(typo fix also implemented - many thanks for catching it)\"}", "{\"comment\": \"In our 3 sets of comments added directly above, and also in the 3 comment sections added for all reviewers\\n(including the updated version of the paper that we uploaded last week), \\nwe addressed the following weaknesses that you identified, and questions that you had :\\n\\n* We have updated the baselines used in the paper beyond those obtained in Saha et al, (2024). 
The new\\nGPT4 5-shot results (that we show in updated Table 1) are +4.1% higher than the results previously given \\nin Saha et al's arXiv paper.\\n + Even so, our results with the Gemini-Flash formaliser, are still well ahead of that baseline\\n + When the whole pipeline consists of open-licensed models (with a non-finetuned Gemma2-9it acting as formaliser) \\n we still beat the updated GPT4 5-shot results (though by a slimmer margin)\\n + We have included standard deviation figures for the results in Table 1 - \\n and would be happy to discuss further statistical measures that show that both results are measurably\\n better than the baselines\\n* We discuss the rationale for the ordering of the pipeline in the new version of the paper \\n(at the beginning of the Methods section)\\n + This new section of the paper is a summary of the points made in our direct answers above\\n* The paper now includes more emphasis on the ability of our open-licensed pipeline to beat the GPT4o 5-shot baseline\\n* With respect to measuring results against the 'Init' split of the Rozner (2021) dataset, \\nthe paper goes into more detail about why Cryptonite has been the focus of the work, and also mentions \\n(in the Appendix) some potential hazards of having multiple datasets with cross-mismatched splits. \\n + We particularly do not want to risk the key Wordplay dataset losing its train/val/test clarity by \\n creating new versions that are split in such a way as to contaminate commercial model training \\n against the Cryptonite test split (this issue is described more fully in the Appendix)\\n\\nIs there anything that you need from us to answer any remaining questions that you have? We believe that our \\nnew version of the paper substantially addresses the issues that you identified.\"}", "{\"summary\": \"The authors propose a method that achieves state of the art performance on cryptic crosswords. The method uses LLMs to solve the subtasks using various strategies. First, Gemma2-9B is fine-tuned using low-rank adapters on answer-candidate generation, which returns a list of answers which match the 'pattern' (the length of the answer). Second, Gemma2-9B is again fine-tuned using low-rank adapters, but this time for 'wordplay suggestion', which takes each answer candidate and generates multiple definition/wordplay pairs, which are meant to explain why the answer is correct. Third, prompting is used in combination with Gemini-Flash to formalise the wordplay into Python 'proofs'. The Python proofs are run and any errors are reported back to Gemini-Flash, which is given a chance to correct its proof (until the answer is valid, or the maximum of 2 rewrites is reached). In the experiments, the formalisation process is shown to be helpful in the general, although for the 'Quick' subset, using the most frequent answer candidate proves too strong a baseline.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"State of the art result on complex task.\", \"The reported results show that LLMs are very helpful as components in a system to solve complex tasks, while also not sufficient to solve it completely on their own.\"], \"weaknesses\": [\"I think more space could be devoted to explaining cryptic crosswords. As someone previously unfamiliar with them, it took quite long for me to understand what exactly was happening in the example given in the introduction. 
Maybe a table or diagram that explains exactly how each part in the wordplay is related to the parts in the clue?\", \"A solid motivation for why this work is important is missing. What is the primary reason to be interested in LLM performance on this task? And relatedly, what should be the general take-away after reading it?\"], \"questions\": \"How were the in-context learning prompts constructed, and how many versions did you try? What is your sense of the sensitivity of the final accuracy w.r.t. the prompt, how important is it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Re: Weaknesses\\n### Baselines\\n\\nMany thanks for catching the size of T5-large (11B->770M) - this makes the parameter counts and scores make more sense,\\nsince (despite having been trained in 'earlier times') the T5 series are very effective models. \\n\\nOther papers (such as the arXiv paper of Saha et al, 2024) \\nhave details about the 5/10-shot scores achieved by \\nMistral 7B, Llama3 8B, Mixtral, Llama2 70B and Llama3 70B \\n(these all score below the rule-based Deits figures reported). Note that \\nthe 770M T5 model was fine-tuned on the Cryptonite training set, \\nrather than being used to predict answers in 5-shot (or 10-shot) way.\\n\\nIn terms of baselines, the Gemini-Pro 1.0 result was used to illustrate that\\n(at the time) commercial models had difficulty with cryptic crosswords \\nas a whole. Clearly, the result for GPT-4-T (23.5% on Cyptonite the test set) \\nreported in the arXiv paper of Saha et al (2024) \\nshows that some commercial models can achieve excellent results \\n(though it is quite possible that they have been specifically trained on Cryponite itself) -\\nso we plan to include it in the same bracket as the Gemini-Pro 1.0 result.\\n\\nOur goal here, though, is not to run a competition between commercial models :\\nTheir training data is opaque, and knowing which brand currently performs best is of limited/transitory value.\\n\\nThe use of Gemini-Flash in this paper is in the (limited) role of a Formaliser, \\nand the last line of Table 1 shows that it can be replaced by Gemma2-9B-it (without fine-tuning)\\nand still achieve SoTA results.\\n\\n**EDIT** : We have run the additional baselines as requested - please see the \\\"Up-to-Date Baselines\\\" comment above.\\n\\nFinally, while the results in Table 1 have different numbers of samples, the accuracy rates are comparable,\\nstandard deviation measures can be added to the table to make the differences clearer.\\n\\n\\n### Order of Operations\\n\\nThe order chosen (where answer candidates are used to generate potential reasoning chains, which are then verified)\\ndoes indeed differ from (say) ReAct. \\n\\nThis approach was used based upon watching human solvers - \\nwho report (/ observe on YouTube) going through the following steps:\\n(a) attempt to parse the clue in a number of ways, trying to isolate the definition from the wordplay;\\n(b) seeing which parts of the wordplay they are most confident about;\\n(c) 'having a hunch' of the final answer; and\\n(d) gaining a full understanding of how a clue's wordplay works \\n(i.e. can every element be explained) as proof of the overall process.\\n\\nAround L44, the description of the \\\"EXAMPLE\\\" process has been laid out in a way that \\nis more of a logical explanation. 
A more typical thought process for the clue \\n\\\"Cut up over politician on the French case (7)\\\"\\nmight first recognise that \\\"politician\\\"=\\\"MP\\\" and \\\"the French\\\"=\\\"LE\\\" - and \\\"on\\\" and \\\"over\\\" are very positional words - \\nleading to checking whether there's a 7-letter word for \\\"Cut\\\" or \\\"case\\\" that includes \\\"MPLE\\\".\\nThe final verification would see that \\\"Cut\\\"=\\\"AXE\\\" and \\\"up\\\" is a reversal indicator (for a down clue). \\\"EXA-MP-LE\\\" : QED.\\n\\nObserving the behaviour of (say) GPT-4 (using CoT) (or more recently o1), \\nsuggests that even very capable models tend to fixate early on during the reasoning process, \\nand are only rarely able of completely re-hypothesising. We also noticed LLMs being \\ncaught up with the literal ('surface') meaning of the clue, \\nwhich is often deliberately misleading. The combination of these elements \\ncaused us to re-consider how to approach this kind of problem.\\n\\nGiving our process candidate answers up-front \\n(so that they try to fit the reasoning to the answer, with varying degrees of success)\\nbakes the re-hypothesisation in. One weakness, however, is the looseness of the language used\\nin these puzzles - \\\"case\\\" and \\\"EXAMPLE\\\" are only easily recognisable as synonyms in a specific sense.\\n\\nClearly, the idea of iterative proof refinement is very attractive (along with RL, etc), \\nand we hope to use cryptic puzzles as a test-bed for this in future work.\\n\\nBut, similarly to IMO problems (rather than more logically approachable mathematics reasoning, say, GSM8K or MATH) \\nwe anticipate that the 'Aha!' moment is going to be a sticking-point. It is with interest that we saw Alpha-Geometry adopt the approach of suggesting interventions, and then rolling out what-if proofs from there.\"}", "{\"summary\": \"This paper identifies the gap that cryptic crosswords, a well-known language-oriented reasoning puzzle, have received little attention from the community. To this end, the authors propose a system that combines LLMs and a Python interpreter to solve cryptic clues. In the paper, they elaborate on the system architecture and run experiments using Gemma and Gemini model family to show better performance than the baselines on Cryptonite benchmark.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Identifying the gap that cryptic crosswords have received little attention from the community, especially post LLM era, the authors propose a framework that combines LLMs and a Python interpreter to solve cryptic clues, and show better performance than baselines on Cryptonite benchmark.\", \"weaknesses\": \"1. This paper may lack enough strong baselines and the comparisons with these baselines may not be entirely fair. There are 3 baselines including rule-based, a fine-tuned T5 large, and Gemini Pro 1.0 zero-shot. First, The T5 large model only has 770M parameters while Gemma 9b (which is used in this paper) has 9B parameters. Comparing Gemma 9B-FT with T5, we can see that changing the model with the same training data improved the accuracy from 7.6% to 15.9%. Second, Gemini 1.5 Flash is used in the proposed system while the baseline is zero-shot Gemini Pro 1.0 which shows inferior performance across general benchmarks. Third, while the papers mentions limited computation resource, rows in table 1 are reported based on different number of examples, which make them not comparable.\\n\\n2. 
It may not be clearly stated in the paper the reason of following the proposed order: i.e. generating answer candidates first, then generating possible definitions and wordplays, and then doing verification. Reasoning tasks typically generate reasoning steps followed by the final answer and verification, e.g. ReAct format. In this case, possible definitions and wordplays can be followed by the final answer candidates and external verifiers. This is also consistent with line 44 regarding the reasoning steps. Have you tried different variations and concluded that the proposed order is the best?\", \"questions\": \"1. Regarding weakness 2, have you tried different variations and concluded that the proposed order is the best?\\n2. Since the current system combines multiple open source fine-tuned models (Gemma) and a closed source model (Gemini) for different stages, I wonder if you have tried a single model performing all different tasks, either prompting or fine-tuning, which makes the system simpler?\\n3. The last paragraph in section 2.2 mentions that an apple-to-apple comparison with Rozner et al. (2021) is hard because of different split setting. Could you elaborate more on the challenge of comparing using the same data? Is it because the test split in one approach is used for training of another approach, or their model is not publicly available?\\n\\nMinor points => Typo(s): \\n1. Line 155, T5 large is 770M\\n2. Line 200, the first sentence in section 3.1 has two \\u201cfrom\\u201d\\n3. Line 344, that -> than\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Up-to-Date Baselines\\n\\nWe have run 5-Shot prompting benchmarks to increase the strength of the baseline, \\nand the following results can be substituted for the 'Gemini-1.0-Pro' line in the paper's Table 1:\\n\\n| model | samples | val-overall | val-quick | val-hard | test-overall | test-quick | test-hard |\\n| :---------- | ----------: | ----------: | ----------: | ----------: | ----------: | ----------: | ----------: |\\n| Gemma2-9B-it | 1000 | 5.7% | 11.5% | 5.2% | 4.5% | 10.5% | 4.0% |\\n| Gemini-1.5-Flash-001 | 1000 | 6.6% | 12.5% | 6.1% | 6.5% | 11.8% | 6.1% |\\n| GPT-4o (2024-11-25) | 1000 | 29.8% | 45.0% | 28.5% | 27.6% | 47.4% | 26.0% |\\n\\nInterestingly, this shows that:\\n(a) GPT-4o_2024-11-25 gives stronger results than those of GPT-4-Turbo_2024-04-09 given in Saha (2024);\\n(b) Gemini-Flash (which was used in development of the formaliser) is not particularly good at solving cryptic clues in itself;\\n(c) The Gemma2-9B model gets a large uplift from fine-tuning on the Cryptonite training set (compared to 5-shot prompting).\\n\\nGiven the large gap between the performance of GPT-4o and those of \\nClaude 3 Sonnet / GPT 3.5 Turbo reported in Saha (2024), \\nit seems reasonable to postulate that the GPT-4 model\\nhas also been fine-tuned on the Cryponite dataset itself. Further evidence being that, \\nas shown in Saha (2024), the GPT-4 model hardly benefits from CoT prompting (with 3-way self-consistency) - \\nit seems that GPT-4 is familiar the facts, but not the reasoning.\\n\\nWe will update the paper throughout to reflect the new baseline - against which we note we still\\nhave a state-of-the-art result, both with Gemini-Flash as the Formaliser, and (narrowly) with\\na fully Open-licensed solution (using Gemma2-9B models throughout). 
An additional aspect to note is \\nthat the method provided in the paper provides clearly interpretable output, since\\nthe reasoning steps are made directly available (as in Figure 3).\"}", "{\"comment\": \"## Re: Weaknesses\\n\\n### Pipeline Ablation \\n\\nWe agree that doing an ablation using a trained re-ranker for each of the definition / wordplay steps would make\\nan excellent addition to the paper (please see below).\\n\\n\\n## Re: Questions\\n\\n### Cryptonite vs Wordplay splits\", \"yes\": \"The splits of these two datasets has been made deliberately consistent\\n(i.e., no clue in the train dataset for Wordplay will appear in the Cryptonite val/test splits).\\n\\nMoreover, the code for generating the wordplay dataset intentionally does not output\\nwordplay for the test split - to avoid easily digestible test set data appearing in the wild\\nas a result of the construction of the wordplay dataset.\\n\\n### Number of candidate answers\\n\\nThe choice of N=20 was, frankly, mostly one of computation/time resources. We would be happy to add a graph of the frequency of the correct answer being in the top-N for different N (the data up to 20 already exists). \\n\\nHowever, our observation has been that (supposing the most-frequent answer is not the correct one), there\", \"is_a_balancing_act_to_be_performed\": \"Choosing a non-most-frequent answer (for large N) becomes very risky,\\nsince the 'tail' of the answers (across the English language) is rather long, and the proofs are not iron-clad\\n(since the proving DSL includes functions such as 'is_synonym()' - which is rarely a solid Yes/No decision, \\nsee the \\\"EXAMPLE\\\" example on page 1 : Here, 'case' is a true, but weak, synonym for \\\"EXAMPLE\\\" ). Thus, \\ntrying for large N (indeed, one could imagine just throwing all '7 letter words' at the proving system)\\nwould conceivably cause more harm than good.\\n\\n\\n### Pipeline Ablation : Use of a 'dumb' reranker\\n\\nMany thanks for this suggestion - this would clearly be a good addition the paper.\\n\\nAnecdotally, to some extent we tested the first of these steps (choosing a candidate answer) \\nby having the model make multiple suggestions until it chose to halt \\n(trained to halt at the first correct suggestion). This was somewhat effective \\n(though the experiment was directed at whether self-correction \\ncould be trained for - as part of another line of investigation).\\n\\nThe second step (whether proposed wordplay 'makes sense', independent of the formalisation/prover) \\nhas also been informally tested - given the observation (illustrated in Section 4.2 of the paper)\\nthat wordplay for incorrect answers is rather clearly absurd. \\n\\nThere is an additional interesting question, though, of whether 'provable' vs 'non-provable' wordplays \\nfor a correct candidate answer can be distinguished without the help of the Formaliser+Python elements. This, \\nperhaps, may be difficult to train for (given the limited number of successful proofs overall).\\n\\nWe will push to get the first two ablations done by the end of the review period - many thanks for the suggestion.\"}", "{\"comment\": \"## Re: Questions\\n### Weakness 2 : Order of operations\\n\\nDiscussed above. Practically speaking, the paper's order of operations constituted the bulk of the research performed\\n(other orders were considered, but eliminated prior to major investment of effort due \\nto anticipated short-comings). 
\\n\\nOne other element being considered was the quantities of data/compute available: \\n(a) 400k+ Cryptonite clue->answer pairs; \\n(b) <10k clue-> wordplay pairs; \\n(c) infinite patience for the Python prover; and\\n(d) limited time/resources for running many long-context experiments through an LLM \\n(despite Gemini-Flash being remarkably convenient and low $/token)\\n\\n### Closed vs Open models\\n\\nIn fact, although the Formaliser model on which most experiments/trials were conducted was the \\nGemini-Flash model, Table 1 (last line) shows the result for Gemma2-9B-it (without fine-tuning)\\non the same task. The 'headline result' decreases from 32.5% to 29.0% - which would still be \\nSoTA, even with the updated GPT-4-T numbers. \\n\\nThus the Gemma2-9B base (either fine-tuned on Cyptonite, Wordplay or its '-it' variant) are \\nquite capable of excellent performance. As identified above \\n(and where we may have appeared over-defensive about baselines),\\nthis does raise the question of how best to include commercial/proprietary results.\\n\\n\\n### Comparisons vs Rozner et al (2021)\\n\\n**EDIT** : This comment has been expanded upon below - since it raises an additional issue about commercial vendor training that deserves more attention\\n\\n\\n## Text Fixes\\n\\nTypo fixes (and the size of T5-large) have been incorporated : Many thanks.\"}", "{\"comment\": \"Thank you for the rebuttal. My concerns are addressed and I have raised my score to 6.\"}", "{\"comment\": \"## Re: Weaknesses\\n### Explanations of Cryptic Clue mechanisms\\n\\nWe would certainly be happy to add diagrams for each of the clues used \\nin the body of the paper (\\\"EXAMPLE\\\", \\\"DELVE\\\", \\\"HERON\\\", etc). And, of course, \\nthe current more detailed exposition in Appendix A should also be referenced.\\n\\n### Motivation\", \"there_are_several_motivations_to_study_how_model_performance_can_be_improved_on_the_cryptic_crossword_solving_task\": \"(a) The task is something that thousands of people find a satisfying intellectual challenge on a daily basis.\\nSolving these puzzles requires understanding multi-layered language constructs, blending logic, wordplay, and contextual nuance. This provides a unique challenge for evaluating and improving LLMs\\u2019 capabilities in NLU and reasoning.\\n\\n(b) There is ample data (decades of solved puzzles, each containing over 20 clues, from multiple major newspapers),\\nincluding new (original) test data being created daily : \\nThis contrasts with (for instance) IMO/AIME problems, where there is a much lower number of novel problems available. \\n\\n(c) The raw performance figures for LLMs on this task are perhaps less interesting than the techniques required to \\nperform the reasoning itself. The method in this paper, though this aspect wasn't emphasised, \\nalso explicitly reveals the reasoning (i.e. 
validated wordplay) required to solve each problem.\\nThere is typically one 'true' reasoning path that 'works', although it might be expressed in slightly different\\nways by different solvers.\\n\\nThis work is important because it is the first to apply a hypothesis/reasoning/verification system to a task\\nwith this degree of flexibility.\", \"general_take_away\": \"Cryptic crosswords have been shown to be a fertile test-bed for the next generation of reasoning systems\\n(which may include an LLM as a component, but operate in a more iterative manner).\\n\\n\\n## Re: Questions\\n### Construction of ICL prompts / Sensitivity\\n\\nThe LoRA-fine-tuned Gemma models had simple instructions for the \\ntasks (clue->candidate answer) and (clue+answer->wordplay). These were not iterated \\non specifically, because (following the SFT process on actual data)\\nthe Gemma models just use the prompt text as a signpost to go into \\n'remember the specific task' mode (in the author's mental model).\\n\\nSome iteration was performed for the LLM (Gemini-Flash) prompts, \\nsince it was thought that the transformation from Wordplay to \\nPython 'assert' statements would require much more background knowledge.\\n\\nHowever, the 'prompt engineering' was more to do with the structural outline\", \"than_specific_word_smithing\": \"The overall prompt starts with a (terse) introduction \\nto the specialised 'defined terms' for cryptic puzzles;\\nthen 20 examples of wordplay (which are cheap/short);\\nfinishing with a limited number of Wordplay->Python conversion demonstrations\\n(which are longer, and more expensive, given that these are\\nhand-crafted, and selected to demonstrate parts of the DSL in action).\\n\\nSome more detail-oriented work was done on how best to prompt Gemini-Flash to \\ngenerate code using a doc_string (or otherwise), and whether to prompt for\\ncode completion or generation of whole functions, \\nfor instance. The final prompt is shown in Appendix A.4.6.\"}", "{\"comment\": \"### Revised PDF\\n\\nAn updated PDF has been uploaded, including many changes that have been prompted by reviewer feedback:\\n\\n* **Motivation** - the initial focus has been altered to emphasise \\nthat the core challenge of this work is one of \\nreasoning using LLMs. Cryptic crosswords is a fascinating test-bed for this\\n* **Diagrams** - 4 separate cryptic clues are shown in diagram form (our new Figure 2), \\nwhich aim to make the principles of cryptic solution clearer for those unfamilar with the topic\\n* **Pipeline Ordering** - an introductory section discusses the choices made, which may have seemed unusual at first\\n* **Wordplay dataset splits** - the deliberate alignment of the Wordplay splits with Cryptonite have been highlighted\\n* **Candidate list size** - this was additionally commented on in the Methods section\\n* **ICL Prompting** - this has been clarified a little, specifically which parts of the process required care,\\nand also the degree of transferability of the ICL prompts from Gemini-Flash directly to Gemma2-9B-it \\n(an off-the-shelf open-licensed model)\\n* **Baselines** - updated baselines have been incorporated into the results. 
We note that our system\\nstill achieves a new state-of-the-art result, \\nand our same pipeline built using fully open-licensed models would also beat the previous state-of-the-art\\n* **Ablations** - 2 ablation results were added, showing the effect of using average-logprob measures from\\nthe LLMs generating (a) candidate answers, and (b) wordplay hypotheses. The second of these \\n(which showed limited performance) was a little surprising. Overall, these ablations showed that \\nneither 'dumb ranker' would be able to remove the need for formalisation/verification step. Many thanks are due \\nto reviewer `2vAu` for this insightful suggestion.\\n* **Testing on 'Init'** - we have updated the discussion in Section 2.2 to explain our focus on the Cryptonite\\ndataset, and added a more detailed break-down in the Appendix. The appendix explanation also\\nidentifies the looming problem of cross-contamination of training vs test data\\n(specifically for commercial models) between Cryptonite and the Rozner datasets\\n* **Partial Correctness Metrics** - this aspect \\n(more related to solving full grids than the standalone solving that was the main focus of our work),\\nalong with a viable solution approach, is now described in our Methods Section 3.6. For reasons of space, \\nthe corresponding table of results is in the Appendix.\\n\\nThe authors are confident that the paper has been significantly improved through the review process so far,\\nand welcome the opportunity for further discussion that ICLR has provided by \\nextending the author/reviewer discussion period.\"}", "{\"comment\": [\"### Comparisons vs Rozner et al (2021)\", \"At the start of our research program, the Cryptonite dataset of Erfat et al (2021) was chosen as being the focus,\", \"over the approximately contemporaneous dataset from Rozner et al (2021) (denoted Rozner below), for the following reasons:\", \"Cryptonite was larger (523k clues, compared to 142k in Rozner)\", \"Cryptonite consists of clues from The Times and The Telegraph (whereas Rozner is the UK's Guardian). 
While these are all fine newspapers, it is clear that in the cryptic crossword community (found online via websites for wordplay discussions, or YouTube channels) that The Times is considered the Gold Standard of cryptic crosswords.\", \"Indeed one of the Guardian's own cryptic blog posts (https://www.theguardian.com/crosswords/crossword-blog/2024/nov/04/cryptic-crossword-ai-conquer-human-solvers-artificial-intelligence-software) directly states: \\\"The Times hosts an annual crossword-solving competition and it remains, until such time as the Guardian has its own version, the gold standard.\\\"\", \"In the author's view, The Times deserves its role as Gold Standard due to (a) adhering to / upholding the Ximinian standard for what is allowed in clues; (b) doing so for decades; (c) maintaining high consistency of clue difficulty within puzzles (where solvers frequently complain that the Guardian clues can often be rather haphazard)\", \"The Cryptonite dataset was made available for direct download - even though the licensing is (politely) 'fuzzy', it remains a useable research dataset (and seems unlikely to be challenged by The Times, since it is not possible to reconstruct their full puzzles from the clues given as individual line-items, due to deduplication, for example)\", \"The Rozner dataset required researchers to 'scrape their own data', likely because while the data was being retrieved from a public website, the data itself could reasonably be assumed to be copyrighted. This slight inconvenience had a useful impact (please see below)\", \"Unlike the Cryptonite dataset, the Rozner dataset does not include Across/Down markers for the clues - which makes some of the clues difficult to resolve (for instance \\\"EXAMPLE\\\" on the paper's first page can only be read correctly if one sees that it is a Down clue - which converts 'up' into a reversal indicator)\", \"The Cryptonite dataset splits were set in stone. Rozner, though, had a series of splits (random, disjoint, and 'init'):\", \"The 'random' split was clearly shown to be a poor way of separating train/test due to close overlaps\", \"The 'disjoint' split is similar in spirit to the Cryptonite methodology\", \"The 'init' split had the additional twist that common prefixes would only be found in their own splits. This had a catchy intuition, although it's not clear from a cryptic cluing perspective whether this has much genuine basis. While there are some prefixes that are common (eg: 'EX-' is easily clued by refering to divorce, etc), the impact seems overall marginal (particularly given the accuracy rates reported)\", \"Our paper describes a system trained on Cryptonite clue/answer training data, and also (as a component) the Wordplay dataset (which abides by the Cryptonite splits too).\", \"If it is considered essential, we *could* test our existing (Cryptonite trained) system on the Rozner 'Init' test set. However, while Saha (2024) could have the flexibility to run tests on either dataset (since no training was performed), our own 'Init' test would be clearly mis-aligned vis-a-vis the data split.\", \"But there is also a structural reason against re-training the paper's system on the Rozner 'Init' split for (specifically) Wordplay. The Wordplay dataset generation process was guided by the principle of maintaining the Cryptonite splits, it would be a disaster if Rosner Init *Wordplay* splits were to be made public. The reason: The Cryptonite test set (very likely) has a large intersection with the Rozner Init training set. 
As seems evident from the baseline improvements shown above, OpenAI likely trains on the Cryptonite training set (they are welcome to do so). HOWEVER, since now Saha (2024) appears to be releasing the 'Init' training set under an MIT license, OpenAI would be quite within their rights to also train on that. Thus, commercial systems (against which reviewers are forcing academic papers to benchmark) will have been trained on the test sets (without commercial vendors explicitly 'cheating' - they will just be training on all the available training data).\", \"In our judgement the *reasoning paths* that are being tested here by the cryptic crossword task\", \"are a prize cultural asset, generated over decades of human effort,\", \"and this should not be squandered. Hopefully, this explains the authors'\"], \"reluctance_to_dataset_hop\": \"We don't want to make it common to gather\\nand distribute cross-contaminating Wordplay datasets.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"**Summary**\\n\\nThe paper fosus on the problem of cryptic crosswords for LLMs by proposing a sort of CoT model to solve varius subtasks.\\n\\n**Strengths**\\n\\n- The paper present a CoT to solve an apparently complex task\\n\\n**Weaknesses**\\n\\n- A solid motivation for why this work is important is missing. Even after the rebuttal, motivation is really weak. The inportance should be better evalated.\\n\\n\\n**Final remarks**\\n\\n- The paper can be better motivated\", \"additional_comments_on_reviewer_discussion\": \"The interaction between the authors and the reviewers is overwhelming. There are too many details to reach the goal of convincing the reviewers to increase their scores.\\nDuring the discussion period, the authors revealed their identity.\"}" ] }
BnYJdouhkp
Promptus: Representing Real-World Video as Stable Diffusion Prompts for Video Streaming
[ "Jiangkai Wu", "Liming Liu", "Yunpeng Tan", "Junlin Hao", "Xinggong ZHANG" ]
With the exponential growth of video traffic, traditional video streaming systems are approaching their limits in compression efficiency and communication capacity. To further reduce bitrate while maintaining quality, we propose Promptus, a disruptive novel system that streams prompts instead of video content, which represents real-world video frames with a series of "prompts" for delivery and employs Stable Diffusion to generate videos at the receiver. To ensure that the prompt representation is pixel-aligned with the original video, a gradient descent-based prompt fitting framework is proposed. Further, a low-rank decomposition-based bitrate control algorithm is introduced to achieve adaptive bitrate. For inter-frame compression, a temporal smoothing-based prompt interpolation algorithm is proposed. Evaluations across various video genres demonstrate that, compared to H.265, Promptus can achieve more than a 4x bandwidth reduction while preserving the same perceptual quality. On the other hand, at extremely low bitrates, Promptus can enhance the perceptual quality by 0.139 and 0.118 (in LPIPS) compared to VAE and H.265, respectively, and decrease the ratio of severely distorted frames by 89.3% and 91.7%. Our work opens up a new paradigm for efficient video communication. Promptus will be open-sourced after publication.
[ "Video Streaming", "Stable Diffusion", "AIGC", "Prompt" ]
https://openreview.net/pdf?id=BnYJdouhkp
https://openreview.net/forum?id=BnYJdouhkp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wtGIoRNuo3", "v8HRpzrDWq", "tSSbjeGnaO", "siA4cvhD1j", "rfhUA1VC8S", "psCyP7uevl", "ojCovqHt6X", "nF8qNTfgqx", "jUP6w4YKKt", "iEDbpB1zwA", "dd5eqEtriT", "V6yCg1dnxL", "V6LzRZeraF", "TOSatujgRC", "QDwVjxHnkl", "NKhBHBujZo", "LAmA5EyvUU", "KWu8y4QZgF", "JH4hWTqqfA", "CPlZoKVUvR", "AfBuQLPogN", "6lfRn4LWyk", "6eEFhCt0OY", "572R406Jp0", "564ZK8Sn7H", "3cbsvBszQj" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732703216826, 1730773364054, 1733062942171, 1732474006799, 1732648395285, 1729760705954, 1732474199013, 1732508637315, 1730723100146, 1730724959649, 1733062908164, 1737633028701, 1733062861399, 1730563595563, 1732475591068, 1732623850433, 1733062795136, 1732475023103, 1732642415639, 1733062726036, 1730358134454, 1732521103337, 1732475327594, 1732474884881, 1732474796240, 1732610296845 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_qpvP" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_Jrjx" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_W5gp" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Area_Chair_6n6F" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_Jrjx" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_LCsu" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_pcrw" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_W5gp" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_5rEN" ], [ "ICLR.cc/2025/Conference/Submission10944/Reviewer_pcrw" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ], [ "ICLR.cc/2025/Conference/Submission10944/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer Jrjx\", \"comment\": \"Thank you for your reply. Below is our response to your questions.\\n\\n> Q1: Evaluation Metrics\\n\\nThanks for your comments. Your suggestion is valuable. We tested the performance of PSNR and SSIM on UVG, with a bitrate of 225 kbps. The results are as follows:\\n\\n| Metrics | H.266 | H.265 | Ours |\\n|-------|-------|-------|-------|\\n| psnr | 34.14 | 32.60 | 30.04 |\\n| ssim | 0.80 | 0.72 | 0.61 |\\n| lpips | 0.32 | 0.39 | 0.24 |\\n\\nThis is because Promptus is more sensitive to sharp textures like all AIGC models, while PSNR and SSIM are more sensitive to smooth textures with large areas. 
As a result, the images generated by Promptus have sharper details with better perceptual quality, while the baselines are more blurred but with higher PSNR. Sometimes, the more details presented by Promptus, the lower the PSNR, while the more blurred the baselines are, the higher the PSNR can be. This phenomenon is also illustrated in the subjective examples in Figure 10.\\n\\nWe agree that PSNR and SSIM are widely used in the video compression task. However, in this paper, Promptus is more oriented towards the semantic communication task. The main goal is to keep the semantic information correct at low bitrates. LPIPS considers perceptual quality, making it more suitable for evaluating semantic distortion. Promptus is a novel paradigm for high-fidelity semantic communications. There are still issues that need to be fixed in future work.\\n\\n\\n> Q2: Details of H.266 Comparison\\n\\nThank you for pointing that out! The encoder implementation of H.266 is [1], version 1.12.1-rc1, and the decoder implementation is [2], version 3.0.0. As for the encoding settings, we specified only the resolution (512x512), target bitrates (140 kbps, 280 kbps, 360 kbps, 540 kbps), and frame rate (30 FPS). All other settings were kept at default, and we did not specify them.\\n\\n[1] VVenC. https://github.com/fraunhoferhhi/vvenc.\\n\\n[2] VVdeC. https://github.com/fraunhoferhhi/vvdec.\\n\\nFor reproducibility, we will open-source the encoded videos and source code after publication.\\n\\nThank you for the discussion and look forward to your reply. We are making the paper more solid together.\"}", "{\"summary\": \"In this paper, the authors leverage a stable diffusion model as a compression method to encode video into text (embedding format), using this text embedding as the video representation. The entire process resembles information distillation, where trainable low-rank features are used to form the prompt embedding. After optimization, these low-rank factor matrices become the compressed video representation. The authors propose several solutions to control bitrate, perform inter-frame compression, and ensure pixel alignment.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(+) Using stable diffusion as a distillation approach is interesting.\\n\\n(+) The paper is well-written, with most figures well-designed, illustrated, and easy to follow.\", \"weaknesses\": \"(-) My major concern is whether there are severe flicker issues or temporal over-smoothing in the decoded video, as the authors did not submit any video as a supplementary file. Considering that the rebuttal can only show images, I suggest the authors present an x-t slice, as used in [1], for a highly dynamic video. The x-t slice would provide a better visual understanding on the temporal consistency.\\n\\n(-) Could the authors justify why it is necessary to use stable diffusion as the intermediate medium for video compression? Why not directly use CLIP or the SD decoder to distill the frames? Is it because CLIP is not powerful enough?\\n\\n(-) The VAE could introduce significant color tone-mapping issues and spatial blurriness. Did the authors encounter similar issues on a large scale? I can see severe color issues in Figure 10, particularly in the boy's eye and background. Consequently, I am also concerned about some analyses in Section 4.4. 
In my understanding, the performance drop on the Animerun dataset occurs because those frames are edge cases for SD/VAE, which is why there are numerous color mapping issues.\\n\\n(-) Since the VAE and diffusion models are fixed, how do the authors ensure that the frames being represented and compressed fall within the diffusion model's domain of knowledge? Additionally, is the main source of encoding performance derived from CLIP/VAE or the U-Net? I did not see a related discussion on this matter.\\n\\n(-) The qualitative result comparisons are insufficient and do not convincingly demonstrate the proposed method's promise.\\n\\n[1] Li, Zhengqi, et al. \\\"Generative image dynamics.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"All my questions are outlined in the weaknesses section. My major concerns are the necessity of using stable diffusion as a streaming approach, the insufficient qualitative result comparisons, and the generalization issues related to the VAE or other components.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer W5gp,\\n\\nThank you once again for your dedicated review! As the deadline for the author-reviewer discussion phase approaches, we eagerly await your feedback on our responses. Any insights you provide are greatly appreciated and will help us further improve this work.\\n\\nThank you so much!\\n\\nThe authors\"}", "{\"title\": \"Response to Reviewer qpvP\", \"comment\": \"Thank you for the valuable feedback. A revision version has been uploaded, where modifications are highlighted in red. Below is our response to your questions and concerns. The original comments are copied followed by our answers.\\n\\n> W1: ...I suggest the authors present an x-t slice, as used in [1], for a highly dynamic video. The x-t slice would provide a better visual understanding on the temporal consistency.\\n\\nThank you for your valuable suggestions! We agree that temporal consistency is crucial for the generation of Stable Diffusion. Following your suggestions, we added the X-t slice experiment into Figure 11, where Figure 11(a) represents a high-dynamic video (time-lapse). We also added Figure 15, which shows the video frame by frame (key frames and interpolated frames). The results indicate that our videos basically align with the ground truth videos in terms of motion. This is because Promptus is pixel-consistent with each frame of the ground truth video, ensuring that the temporal consistency also aligns the ground truth video. Additionally, prompt interpolation guarantees the continuity of adjacent frames.\\n\\n> W2: ...why it is necessary to use stable diffusion as the intermediate medium for video compression? Why not directly use CLIP or the SD decoder to distill the frames?\\n\\nThank you for raising this question. The question of which model should serve as the intermediate medium is quite valuable. As for CLIP, it does not have a decoder, which means it can only encode images but cannot generate images.\\n\\nUsing the SD decoder (VAE decoder) as the intermediate medium is a good suggestion, as it can effectively distill frames. However, it cannot perform temporal interpolation. We added distillation experiments where the SD decoder serves as the intermediate medium and presented the interpolation results in Figure 15 (latent interpolation). 
The results demonstrate that latent interpolation fails to preserve the motion between frames, resulting in spatial overlaps and ghosting. This is because the frames in latent space are not temporally close, making the interpolation unreasonable. To achieve inter-frame compression, one feasible solution is to encode the latent frames using a codec (such as H.265). However, as shown in Figure 9 and Figure 8, this solution performs worse than Promptus, due to the errors introduced by the codec in the latent space.\\n\\n> W3: The VAE could introduce significant color tone-mapping issues...\\n\\nYour concern is insightful and we agree that the VAE itself may introduce issues such as color tone mapping. However, since Promptus employs gradient descent to fit the ground truth frames, this issue can be compensated for in end-to-end fitting. Actually, the color discrepancies in Figure 10 are primarily stem from the low bitrate of the prompt. We added subjective examples to Figure 5, which shows that when the rank (bitrate) is low, the lamp\\u2019s color in Figure 5(d) is inconsistent with the ground truth. When the rank increases, the lamp\\u2019s color in Figure 5(e) is corrected. This is because when the bitrate is low, the representational capacity of the prompt decreases, making it unable to accurately describe all the details in the image, resulting in inconsistent colors.\\n\\n> W4: ... is the main source of encoding performance derived from CLIP/VAE or the U-Net? ...\\n\\nThank you for raising this insightful question. The source of encoding performance is derived from both VAE and the U-Net.\\n \\nFor VAE, it transforms frames from pixel space to latent space, significantly reducing the data size (e.g., 512*512*3 -> 64*64*4\\uff0creduced to 1/48). However, VAE cannot perform inter-frame compression, as discussed in the aforementioned W2. \\n\\nFor U-Net, its compression performance comes from two factors. First, by transforming the latent space into the prompt space, it further reduces the data size (e.g., 64*64*4 -> (1024+77)*8, reduced to 1/2). Second, because the frames are temporally close in the prompt space, this enables interpolation-based inter-frame compression, significantly reducing the data size. For example, when the keyframe interval for interpolation is 10, the bitrate reduces to 1/10. In total, U-Net can further compress the data to 1/20 in addition to VAE. We added discussions on this in Section 3.1 and Section D.\\n\\nIn summary, these results demonstrate that VAE and U-Net need to work together to achieve the performance of Promptus.\\n\\n> W5: The qualitative result comparisons are insufficient.\\n\\nThank you for pointing that out. To enrich the qualitative result comparisons, we added 7 experiments from different perspectives: Figure 5 (color discrepancy), Figure 11 (temporal consistency), Figure 14 (training process), Figure 15 (interpolation experiments and ablation studies), Figure 16 (\\\"fingers\\\" and \\\"text\\\"), Figure 9 (add H.266), Figure 13 (add H.266). Please refer to the \\\"General response for all reviewers.\\\"\"}", "{\"title\": \"A few more questions about the response\", \"comment\": \"Thank you for your detailed responses to my comments and for incorporating additional experiments, especially the comparison with H.266. I appreciate your effort to address the points raised and enhance the manuscript. 
However, I believe some concerns remain insufficiently addressed:\", \"evaluation_metrics\": \"While I understand that LPIPS provides insight into perceptual quality, for video compression, the most widely recognized metrics are PSNR and VMAF, which are critical for evaluating structural integrity and perceptual alignment. I encourage you to include these metrics to provide a more holistic assessment of your method's performance and to align with the standard practices in the field.\\n\\nDetails of H.266 Comparison: In the newly added comparisons with H.266, the paper lacks sufficient detail regarding the codec model and compression parameters used. For the results to be interpretable and reproducible, it is essential to specify whether VTM and its version or another implementation was used, as well as the precise settings (e.g., GOP size, CRF, or bitrate). Without this information, it remains unclear how robust the comparisons are.\\n\\nOverall, while I appreciate the improvements made to the manuscript. However, I feel the authors have not directly addressed my concerns about using the evaluation metrics, avoiding discussion of their method's performance on these benchmarks. This lack of engagement with standard metrics leads me to reconsider my scoring for the manuscript.\"}", "{\"summary\": \"This paper propose Promptus, a novel system that replaces video streaming with prompt streaming by representing video frames as Stable Diffusion prompts. To achieve this goal, this paper conducted experiments in three aspects: ensuring pixel alignment, achieving adaptive bitrate, and inter-frame compression. Experiments shown that Promptus can achieve more than 4x bandwidth reduction while perserving the same perceptual quality compared to H.265.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tPromptus provides a new communication paradigm, using only prompts to stream a video.\\n2.\\tThe paper is easy to understand well and clearly written.\\n3.\\tThe effect at low bitrates is significantly improved. Compared with VAE and H.265, Promptus's perceptual quality is improved by 0.139 and 0.118 (in LPIPS), respectively, and the proportion of severely distorted frames is reduced by 89.3% and 91.7%, respectively.\", \"weaknesses\": \"1.\\tThe experiments and evaluation are not sufficient enough.\\n\\na) Only the decoding time is listed in the appendix, but not the encoding time. \\n\\nb) For measuring the generated video sequences, it is recommended to open source these videos, or show examples frame by frame to prove the stability of the generation.\\n\\nc) Since Promptus is aimed at low-bitrate video compression, it is recommended to release more subjective comparison results to prove the advantages of the Promptus. \\n\\nd) The paper only mentions the interpolation of prompt. It's suggested to be compared with applying the interpolation in the pixel domain among video frames in terms of generation performance and complexity to prove the effectiveness of interpolation of prompt.\\n\\ne) There is also a lack of comparison with other specific state-of-the-art video compression methods. It is recommended to compare with more video compression methods (e.g. [1][2]).\\n\\n[1] Li J, Li B, Lu Y. Neural video compression with feature modulation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 26099-26108.\\n\\n[2] Sheng X, Li L, Liu D, et al. Prediction and Reference Quality Adaptation for Learned Video Compression[J]. 
arXiv preprint arXiv:2406.14118, 2024.\\n\\n2.\\tThis proposed diffusion-based video compression method shows several limitations. For example, the relatively large delay and complexity, because the prompt needs to be inserted into the intermediate frame, so the subsequent frames must be received before playback. The second is the usage scenario, which is currently limited to low-bitrate video streaming scenarios.\", \"questions\": \"1.\\tThe comparison with more advanced codec like H266 is also suggested.\\n2.\\tSince SD is not very good at generating features like human fingers and text, could you show how Promptus performs in places with a lot of high-frequency information such as fingers and text?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer LCsu\", \"comment\": \"We appreciate the reviewer for the insightful questions. A revision version has been uploaded, where modifications are highlighted in red. Below is our response to your questions and concerns. The original comments are copied followed by our answers.\\n\\n> W1: ...this diffusion model-based method can only be used on high-performance PCs with high video memory...\\n\\nYour concern about memory overhead is important, considering the significant memory usage of current Stable Diffusion. We have made efforts to decrease the memory usage of Promptus. For example, we replaced the Stable Diffusion decoder (49.5M parameters) with a more lightweight TAESD Decoder (1.2 M parameters). Furthermore, as a pipeline, Promptus can be compatible with different SD models. We believe that with the development of the SD community, more lightweight SD models will emerge, enabling Promptus to run on some mobile devices. In our follow-up work, we will also focus on reducing memory usage.\\n\\n> W2: ...there are still many unique problems in the diffusion model in content generation, such as \\\"multiple fingers\\\" and \\\"words cannot express\\\" problems. I hope the author can provide relevant experiments...\\n\\nThanks for raising this concern! We added experiments on \\\"fingers\\\" and \\\"text\\\", as shown in Figure 16. The results show that Promptus can generate \\\"fingers\\\" and \\\"text\\\" quite well. This is because the inherent issues in SD can be compensated for during the end-to-end gradient descent prompt fitting.\\n\\n> W3: ...Promptus effectively transmit videos with various resolutions and high resolutions that it has not seen during SD training?\\n\\nThank you for your insightful suggestion. The SD model does struggle to generate good images for resolutions it hasn't seen during training. However, thanks to the end-to-end gradient descent fitting, Promptus can generate good images for these challenging resolutions, just like the example about \\\"fingers\\\" mentioned above. In our future work, we will include more experiments at higher resolutions.\\n\\n> W4: The specific diffusion model version used is not written in the main text.\\n\\nThank you so much for pointing this out\\uff01We added in Section 3.1 that the version is SD 2.1 Turbo.\\n\\n> W5: ...the time speed and memory overhead of this method will be the primary consideration for most people. I suggest the author will move such experiments into the main paper.\\n\\nThanks for the valuable advice. We moved Table 1 (overhead results) to the main text. Due to space limit, more results and analysis are left in Section C. 
So we added references to Section C in the main text.\\n\\n> Q1: This paper is novel in the field of video streaming, and the idea of using prompt to transmit video is very interesting. However, the application scenarios are narrow, I suggest that the author first provide sufficient evidence to fully demonstrate that Stable Diffusion is a feasible representation.\\n\\nThank you for the feedback. We added 7 experiments from different perspectives; please refer to the \\\"General response for all reviewers.\\\" The results demonstrate that Promptus can generate challenging elements such as \\\"fingers\\\", support high resolutions, and align with the ground truth video in motion. We have made efforts to reduce memory usage and will continue to improve this in the future work.\\n\\nThe current version of Promptus can be used for Video on Demand on PCs equipped with high-performance GPUs. This application scenario holds significant value. According to the 2024 Global Internet Phenomena Report [1], excluding mobile networks, there are currently 1.4 billion fixed network users, each consuming an average of 5.7 GB of Internet traffic daily on Video on Demand (such as YouTube and Netflix), totaling 1.4 billion * 5.7 GB = 8 EB of traffic, which accounts for the largest portion of Internet traffic volume (39%). Considering that traffic is quite expensive, Promptus's ability to reduce the bitrate by 4x is valuable.\\n\\nIn the future, we will continue to improve Promptus to support live video, real-time communication (RTC), and enable it to run on mobile devices.\\n\\n[1]Sandvine. 2024. 2024 Global Internet Phenomena Report. https://www.sandvine.com/global-internet-phenomena-report-2024.\"}", "{\"comment\": \"Hi Reviewers,\\n\\nWe are approaching the deadline for author-reviewer discussion phase. Authors has already provided their rebuttal. In case you haven't checked them, please look at them ASAP. Thanks a million for your help!\"}", "{\"summary\": \"The paper introduces Promptus, an innovative system that represents real-world video frames as prompts for Stable Diffusion, enabling ultra-low bitrate video streaming by sending prompts instead of raw video content. Evaluations show that Promptus achieves more than a 4x reduction in bandwidth compared to H.265 while preserving similar perceptual quality and demonstrating significant quality improvements at extremely low bitrates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Novel application of GAN AI for video streaming: By leveraging Stable Diffusion for video generation through prompts, Promptus introduces a novel approach to reducing video bandwidth requirements.\\n2. The gradient descent-based prompt fitting framework ensures pixel-level alignment, maintaining visual consistency with original frames, which is challenging for generative models.\\n3. The low-rank decomposition approach allows Promptus to adjust bitrate based on network conditions, enhancing usability in fluctuating bandwidth situation.\\n4. Temporal smoothing regularization reduces the need to transmit prompts for every frame, significantly cutting down bandwidth usage.\\n5. Promptus shows a marked improvement in bandwidth efficiency, achieving a fourfold reduction over H.265 while maintaining perceptual quality, especially in complex or high-frequency video content.\", \"weaknesses\": \"1. 
Promptus\\u2019s primary goal is to find an inverse prompt that moves points represented by noise to the target image\\u2019s location in latent space. However, it\\u2019s unclear how the model handles abrupt scene cuts within videos. Scene cuts could disrupt continuity, as prompts optimized for one scene may not generalize to an entirely different visual context, the generated frames might misrepresent scene transitions.\\n2. As described in 485 lines, as bitrate decreases, Promptus reduces the descriptive capacity of prompts. How the model maintains accurate content representation at these lower bitrates. Why this simplification not lead to slight misalignments in generated frames? More insights into the model\\u2019s mechanisms for preserving spatial and temporal consistency at low bitrates would clarify this point.\\n3. The paper predominantly relies on LPIPS as the primary evaluation metric. Incorporating additional metrics, such as SSIM or VMAF, would provide a more holistic assessment of visual quality, particularly in capturing structural integrity and perceptual alignment.\\n4. Although the supplementary materials provide information on the complexity of the proposed method, the paper lacks direct comparisons with the computational complexity and runtime of other state-of-the-art (SOTA) benchmarks. \\n5. The experimental results compare Promptus only with H.265 and a VAE model, omitting comparisons with other recent SOTA benchmarks in video compression and generation.\", \"questions\": \"1. The paper primarily uses linear interpolation for temporal smoothing between frames. Has the team explored other interpolation methods?\\n2. The visualizations in Figure 10 show noticeable color discrepancies between the generated and original videos. Was the model trained in RGB or YUV color space?\\n3. Figure 5 shows that while fine details such as hair strands are preserved, specific elements like earrings disappear. Why does Promptus struggle to preserve certain details?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposed a pipeline called Promptus, which uses prompts instead of video content for streaming and uses Stream Diffusion to generate video at the receiving end. In order to ensure that the prompt representation is aligned with the original video, a prompt fitting framework based on gradient descent is proposed. In addition, a bitrate control algorithm based on low-rank decomposition is introduced to achieve adaptive bitrate. The paper conducted experiments on several individual videos such as QSR Animerun to prove that Promptus can achieve more than 4 times bandwidth reduction while maintaining the same perceptual quality. 
At the same time, the perceptual quality is higher than that of traditional methods at extremely low bitrates.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Modules such as gradient descent based prompt fitting are proposed to solve the inconsistency problem of diffusion model generation in video streaming transmission.\\nExperiments demonstrate the effectiveness of this method in LPIPS, compression ratio.\\nThe real-time issues are well covered in the supplementary material.\", \"weaknesses\": \"According to the supplementary, Promptus requires 8952MB of memory to run, but most existing methods [1, 2] do not require such a high video memory requirement, and in actual scenarios, many devices do not have such a high video memory, such as mobile phones. Therefore, this diffusion model-based method can only be used on high-performance PCs with high video memory in the context that diffusion models still require high memory.\\n\\nAs far as I know, there are still many unique problems in the diffusion model in content generation, such as \\\"multiple fingers\\\" and \\\"words cannot express\\\" problems. I hope the author can provide relevant experiments to prove what results the method in this paper will have when facing the unique problems of SD itself.\\n\\nAlthough the author said in the experiment that the resolution can be arbitrary, since this work relies on the representation capability of SreamDiffusion[3] based on SD-Turbo or LCM, many \\\"high-resolution\\\" images are not included in the training set of SD-Turbo or LCM. Can Promptus effectively transmit videos with various resolutions and high resolutions that it has not seen during SD training?\\n\\nThe specific diffusion model version used is not written in the main text. It is recommended to write it in the main text. In addition, the time speed and memory overhead of this method will be the primary consideration for most people. I suggest the author will move such experiments into the main paper.\\n\\n[1] H.265,2024.https://www.itu.int/rec/T-REC-H.265\\n[2] Gemino: Practicalandrobustneural compressionforvideoconferencing. \\n[3] https://github.com/cumulo-autumn/StreamDiffusion\", \"questions\": \"This paper is novel in the field of video streaming, and the idea of using prompt to transmit video is very interesting. However, the application scenarios are narrow, and there is a lack of more feasibility experiments on diffusion models. Therefore, I suggest that the author first provide sufficient evidence to fully demonstrate that Stable Diffusion is a feasible representation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer 5rEN,\\n\\nThank you once again for your dedicated review! As the deadline for the author-reviewer discussion phase approaches, we eagerly await your feedback on our responses. Any insights you provide are greatly appreciated and will help us further improve this work.\\n\\nThank you so much!\\n\\nThe authors\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"This paper is open source at: https://github.com/JiangkaiWu/Promptus. Welcome to give it a try.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer Jrjx,\\n\\nThank you once again for your dedicated review! 
As the deadline for the author-reviewer discussion phase approaches, we eagerly await your feedback on our responses. Any insights you provide are greatly appreciated and will help us further improve this work.\\n\\nThank you so much!\\n\\nThe authors\"}", "{\"title\": \"General response for all reviewers\", \"comment\": \"Dear reviewers,\\n\\nWe appreciate the time you took to review our paper. Your insightful feedback has helped us make the paper more solid. We have worked diligently to address your valuable comments, and we hope that the revised paper meets your requirements for publication. Thank you for the opportunity to revise and resubmit our paper. 
We have uploaded a revised version to OpenReview.\", \"the_major_modifications_are_summarized_below\": \"1. We added subjective experiments on \\\"fingers\\\" and \\\"text\\\" in Figure 16. The results show that Promptus can generate them quite well, demonstrating that the inherent issues in SD can be compensated for through the end-to-end gradient descent fitting. (In response to reviewer LCsu and W5gp)\\n\\n2. We added the X-t slice experiment and second-by-second video examples to Figure 11 to evaluate the temporal consistency. The results indicate that our videos basically align with the ground truth videos in terms of motion. (In response to reviewer qpvP and W5gp)\\n\\n3. We added a comparison with H.266 in Section 4.3 Compression Efficiency and Section B Performance on Real-world Traces. The results demonstrate the superiority of Promptus in compression. (In response to reviewer Jrjx, 5rEN and W5gp)\\n\\n4. We added subjective examples to Figure 5 to prove that the color discrepancies primarily stem from the low bitrate of the prompt, rather than VAE issues. The results show that the color discrepancies disappear as the bitrate increases. (In response to reviewer qpvP and Jrjx)\\n\\n5. We added ablation experiments where only the SD (VAE) decoder serves as the generator for inversion in Section D. We also presented the latent interpolation results in Figure 15. The results prove that the U-Net (Diffusion process) plays a significant role in compression performance. (In response to reviewer qpvP, Jrjx and 5rEN)\\n\\n6. We added pixel interpolation experiments in Section D and Figure 15. The results demonstrate the superiority of prompt interpolation. (In response to reviewer Jrjx and W5gp)\\n\\n7. We added more details about the training process in Section C, including overhead. Additionally, we added subjective examples of training results at different iterations from initialization to convergence in Figure 14. (In response to reviewer 5rEN and W5gp)\\n\\nModifications in the revised paper are highlighted in red. Thank you again for your insightful feedback.\\n\\nBest Regards,\\n\\nAuthors.\"}", "{\"title\": \"Response to Reviewer 5rEN\", \"comment\": \"We appreciate the reviewer for the insightful questions. A revision version has been uploaded, where modifications are highlighted in red. Below is our response to your questions and concerns. 
The original comments are copied followed by our answers.\\n\\n> W1: ...Figures 8, 9, and 10 do not include a comparison of the results with representative methods from these works...\\n\\nThank you for the valuable feedback. We added a comparison with H.266 in Section 4.3 Compression Efficiency and Section B Performance on Real-world Traces. This is a state-of-the-art video codec that is widely compared with other state-of-the-art works, making it a suitable reference. Many of the discussed works in \\\"Related Work\\\" are optimized for specific scenarios and therefore cannot be compared as general compression solutions. For example, Face-vid2vid [1] [2] can only compress videos of human faces in video conferencing. In the future work, we will try to compare with more representative methods.\\n\\n[1] Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu. One-shot free-view neural talking-head synthesis for video conferencing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10039\\u201310049, 2021.\\n\\n[2] Peiwen Jiang, Chao-Kai Wen, Shi Jin, and Geoffrey Ye Li. Wireless semantic communications for video conferencing. IEEE Journal on Selected Areas in Communications, 41(1):230\\u2013244, 2022.\\n\\n> W2: what is the significance of the comparison with VAE?\\n\\nThank you for raising this valuable question. The comparison with VAE serves as an ablation study, as Promptus's Stable Diffusion includes the U-Net and VAE modules. The unsatisfactory performance with only VAE demonstrates that the U-Net module (Diffusion process) also plays a significant role in compression performance. Therefore, VAE and U-Net need to work together to achieve the performance of Promptus.\\n\\n> W3: I think the paper should include more comparisons of subjective results...\\n\\nThanks for the valuable suggestion. We added 5 subjective experiments from different perspectives: Figure 5 (color discrepancy), Figure 11 (temporal consistency), Figure 14 (training process), Figure 15 (interpolation experiments and ablation studies\\uff09and Figure 16 (\\\"fingers\\\" and \\\"text\\\"). Please refer to the \\\"General response for all reviewers.\\\"\\n\\n> W4: can the increase in model parameters be discussed? \\n\\nThank you for suggesting this! We added the specific model parameter amounts and some discussions in Section C. With more parameters (such as SD XL Turbo), the SD model has stronger generative ability, making prompt fitting easier and allowing for higher compression rates. However, this also leads to increased overhead in terms of memory usage and run time.\\n\\n> W5: can more details on the training process be provided?\\n\\nThanks for the valuable advice. We added more details about the training process in Section C. Additionally, we added subjective examples of fitting results at different iterations from initialization to convergence in Figure 14.\\n\\n> W6: PSNR and SSIM should still be considered alongside LPIPS.\\n\\nThank you for your valuable suggestion. We use LPIPS because Promptus is more sensitive to sharp textures like all AIGC models, while PSNR and SSIM are more sensitive to smooth textures with large areas. As a result, the images generated by Promptus have sharper details with better perceptual quality, while the baselines are more blurred but with higher PSNR. Sometimes, the more details presented by Promptus, the lower the PSNR, while the more blurred the baselines are, the higher the PSNR can be. 
This phenomenon is also illustrated in the subjective examples in Figure 10.\\n\\nWe agree that PSNR and SSIM should be considered in the video compression task. However, in this paper, Promptus is more oriented towards the semantic communication task. The main goal is to keep the semantic information correct at low bitrates. LPIPS considers perceptual quality, making it more suitable for evaluating semantic distortion. Promptus is a novel paradigm for high-fidelity semantic communications. We will add PSNR and SSIM in the future work.\\n\\n> Q1: The citation format needs further adjustments...\\n\\nThank you for pointing that out! We have modified the citation format as suggested.\"}", "{\"title\": \"Response to Reviewer W5gp\", \"comment\": \"Thank you for your reply and for giving more valuable suggestions. A revision version has been uploaded. Below is our response to your questions.\\n\\n> Q1: In Figure 15, the paper mentions that keyframe 1 and keyframe 10 are provided, and intermediate frames are interpolated to compare methods. Keyframe 10 should be the same for all methods, but the \\u2018Latent Interpolation\\u2019 results look much worse. The author may need to explain the reason.\\n\\nThank you for raising this question. This detail you pointed out is very important. Keyframe 10 of \\\"latent interpolation\\\" in Figure 15 looks worse due to the temporal smoothing regularization used during fitting. This is because the experiments on pixel and latent interpolation are end-to-end, rather than based on the intermediate results from Promptus. Therefore, the key frames in pixel interpolation are the ground truth images, while the key frames in latent interpolation are the latent variables fitted using VAE as the generator. Because the frames are not temporally close in the latent space, enforcing a temporal smoothing regularization leads to artifacts in the fitting results.\\n\\nWe agree with your suggestion that Keyframe 10 should be the same for all methods, so we added this setting in Figure 15 (using images and latent variables from Promptus as keyframes). The results indicate that the conclusions remain the same, because the average performance of the \\\"end-to-end\\\" setting is actually better. We also added a discussion about this in Section D.\\n\\n> Q2: In Figure 16, the examples with text and fingers are interesting, but there\\u2019s no mention of the bitrate overhead. Could you share more details about this?\\n\\nThank you for pointing that out kindly! The bitrate overhead for the examples in Figure 16 is 225 kbps (with a rank of 8). We will add this to the revised version.\\n\\nThank you again for the discussion. Your insights are valuable to us.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer qpvP,\\n\\nThank you once again for your dedicated review! As the deadline for the author-reviewer discussion phase approaches, we eagerly await your feedback on our responses. Any insights you provide are greatly appreciated and will help us further improve this work.\\n\\nThank you so much!\\n\\nThe authors\"}", "{\"summary\": \"This paper introduces Promptus, a new video streaming system that reduces bandwidth by transmitting video frames as Stable Diffusion prompts rather than raw video data. 
Utilizing gradient descent for pixel alignment and low-rank decomposition for bitrate control, Promptus achieves over 4x bandwidth reduction compared to traditional methods, maintaining high perceptual quality at low bitrates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Novel in the sense of reforming the paradigm of traditional video streaming system by transmitting prompt streaming instead of video streaming.\\n2. Ensure pixel alignment in this kind of paradigm based on generative model is inspiring.\\n3. The workflow of the proposed method is well presented. The effectiveness of components is well discussed.\\n4. The authors openly discuss limitations of using prompts for video streaming system.\", \"weaknesses\": \"1. The authors discuss recent SOTA works on neural-based video compression in the \\u201cIntroduction\\u201d and \\u201cRelated Works\\u201d sections. However, Figures 8, 9, and 10 do not include a comparison of the results with representative methods from these works. Additionally, what is the significance of the comparison with VAE?\\n2. Although the authors address this in the paper, I do think that PSNR and SSIM should still be considered alongside LPIPS.\\n3. I think the paper should include more comparisons of subjective results at different bitrates to highlight its advantages on pixel alignment.\\n4. Since the paper introduces Stable Diffusion as part of the method, and the authors have also noted this increase in complexity, can the increase in model parameters be discussed? Furthermore, can more details on the training process be provided?\", \"questions\": \"The questions are mentioned in the \\u201cWeaknesses\\u201d section.\\n\\nWhat\\u2019s more, the citation format needs further adjustments according to the conference requirements. e.g., VP8 (Bankoski et al., 2011) instead of \\u201cVP8 Bankoski et al. (2011)\\u201d.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"review\", \"comment\": \"Thanks for your response. All my questions are addressed and I will improve my score.\\n\\nI think Promptus will attract extensive attention in the semantic communication task, because of its pixel consistency. I agree that the application of Promptus in on-demand video streaming is promising. I look forward to seeing its application extended to RTC.\"}", "{\"title\": \"Response to Reviewer W5gp\", \"comment\": \"Thank you for the valuable feedbacks. A revision version has been uploaded, where modifications are highlighted in red. Below is our response to your questions and concerns. The original comments are copied followed by our answers.\\n\\n> W1: Only the decoding time is listed in the appendix, but not the encoding time.\\n\\nThank you for suggesting this! We added more details about the encoding process in Section C, including time overhead. Additionally, we added subjective examples of encoding results at different iterations from initialization to convergence in Figure 14.\\n\\n> W2: ...it is recommended to open source these videos, or show examples frame by frame to prove the stability of the generation.\\n\\nThank you for raising this insightful question. We added frame-by-frame examples to Figure 15 and second-by-second examples to Figure 11. Additionally, in response to Reviewer qpvP's suggestion, We also added the X-t slice experiment into Figure 11 to evaluate the stability of the generation. 
The results indicate that our videos basically align with the ground truth videos in terms of motion.\n\nMoreover, we will open source the videos and code after publication.\n\n> W3: it is recommended to release more subjective comparison results...\n\nThanks for the valuable suggestion. We added 5 subjective experiments from different perspectives: Figure 5 (color discrepancy), Figure 11 (temporal consistency), Figure 14 (training process), Figure 15 (interpolation experiments and ablation studies) and Figure 16 (\"fingers\" and \"text\"). Please refer to the \"General response for all reviewers.\"\n\n> W4: ...It's suggested to be compared with applying the interpolation in the pixel domain among video frames...\n\nThanks for the helpful suggestion! Pixel interpolation is also effective in reducing the bitrate of videos. We added comparisons of prompt interpolation, pixel interpolation, and latent interpolation in Section D and Figure 15.\n\nFor pixel interpolation, we apply RIFE [1], a real-time video frame interpolation algorithm. The results show that it leads to noticeable artifacts in cases of occlusion, edges, or newly appearing objects due to incorrect matching, as illustrated by the green and red boxes in Figure 15.\n\nFor latent interpolation, it fails to preserve the motion between frames, resulting in spatial overlaps and ghosting, as shown in the second row of Figure 15.\n\nFor our prompt interpolation, it successfully preserves motion while avoiding artifacts and keeping the edges sharp. Additionally, prompt interpolation uses the simplest linear interpolation, which introduces almost no additional overhead.\n\n[1] Zhewei Huang, Tianyuan Zhang, Wen Heng, Boxin Shi, and Shuchang Zhou. Real-time intermediate flow estimation for video frame interpolation. In Proceedings of the European Conference on Computer Vision (ECCV), 2022.\n\n> W5: ...the prompt needs to be inserted into the intermediate frame, so the subsequent frames must be received before playback. \n\nThank you for raising this concern. Promptus is designed for Video on Demand, such as YouTube and Netflix. In this scenario, the video is transmitted in chunks instead of frame by frame, where each chunk is a small segment of video (e.g., 1 second, 30 frames). Once received, chunks are queued in a buffer (such as 10 seconds) for playback. Therefore, users do not perceive the interpolation because it is always completed before video playback.\n\n> W6: The second is the usage scenario, which is currently limited to low-bitrate video streaming scenarios.\n\nThank you for the valuable feedback. Promptus works well at all bitrate levels. It can reduce the bitrate to very low levels (e.g., 1/4) while maintaining the same video quality. Considering that network traffic is quite expensive, on-demand video alone generates 8 EB of traffic daily over fixed networks [2]. Therefore, the ability of Promptus to reduce bitrates is valuable.\n\n[2] Sandvine. 2024. 2024 Global Internet Phenomena Report. https://www.sandvine.com/global-internet-phenomena-report-2024.\n\n> Q1: The comparison with more advanced codec like H266 is also suggested.\n\nThank you for suggesting this! We added a comparison with H.266 in Section 4.3 Compression Efficiency and Section B Performance on Real-world Traces. 
The results indicate that Promptus maintains its advantages.\\n\\n> Q2: ...could you show how Promptus performs in places with a lot of high-frequency information such as fingers and text?\\n\\nThanks for raising this insightful concern! We added experiments on \\\"fingers\\\" and \\\"text\\\", as shown in Figure 16. The results show that Promptus can generate \\\"fingers\\\" and \\\"text\\\" quite well. This is because the inherent issues in SD can be compensated for during the end-to-end gradient descent fitting.\"}", "{\"title\": \"Response to Reviewer pcrw\", \"comment\": \"We thank the reviewer for the valuable feedbacks. A revision version has been uploaded, where modifications are highlighted in red. Below is our response to your questions and concerns. The original comments are copied followed by our answers.\\n\\n> W1: Considering that random packet loss occurs in network transmission, how would this influence the performance of Promptus?\\n\\nThank you for raising this question. Promptus is used for Video on Demand. Currently, most on-demand videos are transmitted using HTTP, based on the TCP protocol. This means that data transmission is reliable.\\n\\nIn future work, we will extend Promptus to RTC scenarios, where packet loss may occur, necessitating the design of error recovery mechanisms for Promptus.\\n\\n> W2: Is Promptus an instance of semantic communication or a potential alternative?\\n\\nThank you for the insightful feedback. Promptus represents a new paradigm in semantic video communication. It uses prompts for communication, which inherently belong to semantic information. However, traditional semantic communication only aim for semantic consistency and cannot achieve pixel-level consistency, as shown in Figure 1. Promptus further ensures pixel consistency, thereby broadening the application of semantic communication to some high-fidelity scenarios.\\n\\n> W3: This paper should thoroughly clarify the fundamental differences between prompt inversion and video encoding/decoding.\\n\\nThank you for the valuable feedback. Video encoding records the video signal itself. To ensure high fidelity of the signal, the compression rate is limited. While Promptus records the coordinates of the video in the prompt space, instead of the signal itself. This achieves a better compression rate while ensuring fidelity. We added an explanation about this in Section 3.1.\\n\\n> W4: ...How would this cost increase on a video platform with billions of users?\\n\\nThank you for raising this concern. Since Promptus is used for Video on Demand, the videos in this scenario are pre-encoded and stored. Therefore, the cost of encoding (inversion) is a one-time expense, and subsequent use only requires decoding (generation). The generation takes place on the user's device, so the number of users does not affect the costs of Promptus on the video platform side.\"}", "{\"title\": \"Response to Reviewer Jrjx\", \"comment\": \"We thank the reviewer for the valuable feedbacks. A revision version has been uploaded, where modifications are highlighted in red. Below is our response to your questions and concerns. The original comments are copied followed by our answers.\\n\\n> W1: ...it\\u2019s unclear how the model handles abrupt scene cuts within videos...\\n\\nWe agree with your concerns, as adjacent frames in the prompt space are no longer close after the scene changes, making interpolation not work. 
To address this, Promptus will continuously detect scene changes and treat the new scenes as new videos, as described in Section A.1. To highlight this, we added a reference to this part in Section 3.3 of the main text.\\n\\n> W2: ...How the model maintains accurate content representation at these lower bitrates. Why this simplification not lead to slight misalignments in generated frames?...\\n\\nThank you for raising this question. Low-bitrate prompts are also obtained through end-to-end gradient descent fitting, ensuring that the generated images are as consistent as possible with the ground truth images. Thus, Promptus makes the most of the bitrate, achieving the best consistency at low bitrates.\\n\\n> W4: ...lacks direct comparisons with the computational complexity and runtime of other state-of-the-art (SOTA) benchmarks...\\n\\nThanks for the valuable advice. According to Table 1, during decoding, Promptus introduces almost no additional overhead (only some simple linear computations), with the majority of the overhead coming from SD itself. As a pipeline, Promptus can be compatible with different SD models. We believe that with the development of the SD community, more lightweight SD models will emerge, benefiting Promptus. In future work, we will evaluate complexity and runtime on more SOTA benchmarks.\\n\\n> W5: ...omitting comparisons with other recent SOTA benchmarks in video compression and generation...\\n\\nThanks for the suggestions! We added a comparison with H.266 in Section 4.3 Compression Efficiency and Section B Performance on Real-world Traces. This is a state-of-the-art video codec that is widely compared with other state-of-the-art works, making it a suitable reference.\\n\\n> Q1: Has the team explored other interpolation methods?\\n\\nThanks for the insightful advice. We added subjective experiments on different interpolation methods in Section D and Figure 15. \\n\\nFor prompt interpolation, we explored other one-dimensional interpolation methods, such as cubic interpolation, but they did not differ much from linear interpolation. To maintain the simplicity, we finally chose linear interpolation.\\n\\nWe further compared interpolation at the prompt level, latent level, and pixel level (in response to Reviewer W5gp's suggestions). The results demonstrated the superiority of prompt interpolation, as shown in Figure 15.\\n\\n> Q2: Figure 10 show noticeable color discrepancies between the generated and original videos. Was the model trained in RGB or YUV color space?\\n\\nThank you for raising this concern. Promptus is trained in the RGB space. However, the color discrepancies in Figure 10 primarily stem from the low bitrate of the prompt. We added subjective examples to Figure 5, which shows that when the rank (bitrate) is low, the lamp\\u2019s color in Figure 5(d) is inconsistent with the ground truth. When the rank increases, the lamp\\u2019s color in Figure 5(e) is corrected. This is because when the bitrate is low, the representational capacity of the prompt decreases, making it unable to accurately describe all the details in the image, resulting in color discrepancies.\\n\\n> Q3: Why does Promptus struggle to preserve certain details?\\n\\nIt is an interesting question. This is because SD itself has varying abilities for generating different elements. For specific elements, such as text, SD itself struggles to produce them. 
Fortunately, Promptus can successfully fit these elements through end-to-end gradient descent, as shown in Figures 5 and 16, although this requires more iterations.\"}", "{\"title\": \"Thanks for the reply\", \"comment\": \"We thank the reviewer for the insightful feedback and for increasing the score! We will continue to expand Promptus.\"}" ] }
Bmzv2Gch9v
SmartPretrain: Model-Agnostic and Dataset-Agnostic Representation Learning for Motion Prediction
[ "Yang Zhou", "Hao Shao", "Letian Wang", "Steven L. Waslander", "Hongsheng Li", "Yu Liu" ]
Predicting the future motion of surrounding agents is essential for autonomous vehicles (AVs) to operate safely in dynamic, human-robot-mixed environments. However, the scarcity of large-scale driving datasets has hindered the development of robust and generalizable motion prediction models, limiting their ability to capture complex interactions and road geometries. Inspired by recent advances in natural language processing (NLP) and computer vision (CV), self-supervised learning (SSL) has gained significant attention in the motion prediction community for learning rich and transferable scene representations. Nonetheless, existing pre-training methods for motion prediction have largely focused on specific model architectures and a single dataset, limiting their scalability and generalizability. To address these challenges, we propose SmartPretrain, a general and scalable SSL framework for motion prediction that is both model-agnostic and dataset-agnostic. Our approach integrates contrastive and reconstructive SSL, leveraging the strengths of both generative and discriminative paradigms to effectively represent spatiotemporal evolution and interactions without imposing architectural constraints. Additionally, SmartPretrain employs a dataset-agnostic scenario sampling strategy that integrates multiple datasets, enhancing data volume, diversity, and robustness. Extensive experiments on multiple datasets demonstrate that SmartPretrain consistently improves the performance of state-of-the-art prediction models across datasets, data splits, and main metrics. For instance, SmartPretrain significantly reduces the MissRate of Forecast-MAE by 10.6\%. These results highlight SmartPretrain's effectiveness as a unified, scalable solution for motion prediction, breaking free from the limitations of the small-data regime.
[ "Motion Prediction", "Trajectory Prediction", "Autonomous Driving", "Self-Supervised Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=Bmzv2Gch9v
https://openreview.net/forum?id=Bmzv2Gch9v
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xCpSNXeAyo", "uLk7MrUdkG", "uHXA18zsjF", "p9l61Blu3L", "nxsn4PU5hP", "n0CYRDkIU4", "mPwcZqp0fg", "gD2b05EcIq", "UaZUpM0Rkk", "RHoRIlLGI1", "PXDTL9oMdc", "LUP6h0Wp3t", "KRzYb9HqIC", "J7fcNaRzcM", "Iv5Sng8Jc8", "G0a0psBP0C", "F5uS1MtaVv", "Dq9cxKevv0", "BHDSDoe0EJ", "BEAa5y4j3r", "7WK19tM4TN", "5AtdQ6iAo3" ], "note_type": [ "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730091061733, 1737523626169, 1732224321065, 1730677935939, 1732224805522, 1732224584082, 1732224763428, 1732224484989, 1732686682014, 1732224190255, 1730656040360, 1732694244533, 1732514018693, 1732224706418, 1732224624870, 1734971218752, 1732308185972, 1730450544552, 1732693082539, 1733210194924, 1733174834661, 1732224375357 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_TuBk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_SCK1" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_ND2Q" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_wPEn" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Area_Chair_ZSWd" ], [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_TuBk" ], [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_ND2Q" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ], [ "ICLR.cc/2025/Conference/Submission4222/Reviewer_SCK1" ], [ "ICLR.cc/2025/Conference/Submission4222/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper offers a universal pipeline to pre-train on real-world motion data for trajectory prediction tasks. Combining popular self-supervised pre-training methods like contrastive learning and reconstruction learning, and with specific designs on motion data domain, this work successfully proposes a pre-training pipeline to learn general representations of *trajectories* in motion data, which can work regardless of the baseline motion prediction models or the motion datasets used. Extensive experiments have been performed on various commonly-used datasets (Argoverse 1/2, Waymo Open Motion Dataset) and state-of-the-art baselines (HiVT/QCNet etc.) to show the effectiveness of the pre-training pipeline proposed. A series of ablation studies clearly ablate the effectiveness of each module of the pre-training pipeline as well as the pre-training data involved.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The pipeline proposed in this work is model-agnostic, i.e., it can be easily extended to any encoder-decoder style motion prediction model.\\n2. 
This work successfully combines different real-world datasets like Argoverse and Waymo, so it can significantly enlarge the available motion data that can be used for a specific motion prediction task, where data scarcity is a significant problem.\\n3. Extensive experiments are conducted to prove its general effectiveness across different baseline models and datasets.\\n4. Ablation studies are well designed to clearly show i) the effectiveness of each part of the pipeline ii) the influence of pre-training data.\", \"weaknesses\": \"1. The downstream motion prediction settings, though already very diverse, seem not being able to cover all necessary cases. For example, no methods fine-tuned on Waymo Open Motion Dataset (WOMD) are presented.\\n2. The pre-training performance should be illustrated to prove that pre-training tasks can be done successfully. For example, can you show some examples of reconstructed trajectories?\\n3. Direct data mixing in Data-scaled Pre-training is a natural choice, but might not be optimal. For example, WOMD has significant domain gap compared to Argoverse. In this case, a biased weight might be helpful in pre-training stage to lower the influence of WOMD-Argoverse domain gaps.\\n4. An ablation on how to utilize the additional data in the pre-training stage could be added to make Table 3 even more convincing. For example, in Transfer Pre-training and Data-scaled Pre-training, what would happen if the additional data is used to pre-train on the baseline model directly, or even directly to augment the training set for the baseline model?\\n5. Some minor questions:\\ni) Why to use L1 loss in TRL instead of L2, while the latter is the base for metrics used (MR, FDE, ADE)?\\nii) The quantity of data that is complete / incomplete might be presented to help readers understand the quantity of additional data introduced through this work.\\n6. The prediction metrics lack error bars. This is not a weakness, but a point that can be even improved. I understand that the motion prediction metrics can be unstable sometimes, but adding error bars onto the most important results would significantly improve the reliabilities of the pipeline proposed.\", \"questions\": \"Please see the Weaknesses. I am looking forward to the authors' rebuttal and discussions on those issues.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> Clarifications on which parts of each network are used for pretraining would be beneficial (refer to question 1 below).\\n>\\n> In Fig 2, the figure labels the component as \\\"model\\\" for pretraining. However, since Section 3.1 (problem formulation) indicates that the contrastive loss is calculated on the encoded embeddings, should this component be referred to as the \\\"encoder\\\" instead? In the experiments with HiVT, HPNet, QCNet, and Forecast-MAE, are only the encoders of these models pretrained? For models like HPNet, which do not clearly differentiate between encoder and decoder architectures, how do you determine which parts of the model to use for pretraining to obtain latent embeddings? How might this selection influence the results?\\n\\nThanks for the insightful and very detailed observation! As you point out, the choice of which part of the network is used for pre-training is related to the architecture of the model. 
We\\u2019ll introduce them with three categories:\\n\\n1. For models with clear encoder and decoder architecture (HiVT, Forecast-MAE), we use the agent embeddings before the decoder, and only pre-train the encoder.\\n2. For models with refinement modules (QCNet, HPNet), the model consists of 1) an initial prediction stage with standard encoder-decoder architecture and 2) a refinement prediction stage. We only pre-train the encoder in the initial stage, since the training of the refinement module necessitates predicted trajectory while our pre-training tasks do not provide these predictions.\\n3. Models with special designs: HPNet incorporates a specially designed historical prediction mechanism, enabling predictions not only from the current time step but also from historical time steps. During pre-training, we explored two approaches: 1) the standard approach: using agent embeddings only from the current time step for the SSL task; 2) the HPNet-adapted approach: using agent embeddings from all historical time steps for the SSL task, and averaging the loss from all historic time steps. Interestingly, we observed similar performance between these two approaches. This outcome is likely attributed to our temporal sampling strategy, which effectively captures and integrates temporal information.\\n\\nIn summary, \\u201cmodel encoder\\u201d is indeed a more accurate name than \\u201cmodel\\u201d, we have modified this in Fig.2 of our revised paper.\\n\\n> line 95-97, I find this claim unclear and potentially misleading. I disagree that MAE pretraining is inflexible; on the contrary, I think the masking pretraining is quite versatile. The masking techniques used in the papers you referenced - Rmp, Traj-mae, Forecast-mae, Sept - appear very similar, suggesting the masking concept is not limited and can be readily applied across various models.\\n\\nThanks for pointing out this. Our statement in Line 95-97 may have not been accurate. We do agree that MAE pre-training is flexible, when they are applied on agent trajectory reconstruction. The point we want to emphasize is that the MAE approach based on map reconstruction is not general, since 1) many works focus on aggregating agent embeddings and provide explicit access to them, while explicit map embeddings are not always available (for example, HPNet, QCNet and some other GNN-based models); 2) different works may take different map representations, such as a vectorized map and rasterized map, thus the map reconstruction pretraining strategy need to be designed for each representation and could be less general. We have updated the claim and made it more clear in line 97-100 of our revised paper.\\n\\n> Line 104, abbreviation CL is not explained before.\\n\\nThanks for pointing out. By CL we refer to contrastive learning. We have added the full name before CL to clarify it in line 102 of our revised paper.\", \"title\": \"Response to Reviewer SCK1 (2/3)\"}", "{\"summary\": \"The work proposes a pretraining self-supervised learning framework that can be applied to many models, and trained on different datasets for motion prediction. The pretraining pipeline leverages momentum contrast and generates contrastive pairs by augmenting the same traffic scene with non-overlapping time horizon clips, for the contrastive loss of embeddings and trajectory reconstruction loss. 
It demonstrates better performance with the pretraining pipeline in two different datasets and various models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. Good performance is achieved on the AV, AV2 datasets, with different models.\\n3. The exploration of the concept of model-agnostic and dataset-agnostic is very good.\", \"weaknesses\": \"1.\\tMany details are missing, which may hinder reproducibility. For instance, in the pretraining phase, it is unclear how the values for t and t' are selected for each dataset to avoid overlapping for the experiments. The horizons of the sub-scenario as input and reconstruction are also not specified; it would be helpful to know if these are consistent with the motion prediction settings (either input or output horizons?) used during fine-tuning. Additionally, the default lambda value for the loss function is not provided. Clarifications on which parts of each network are used for pretraining would be beneficial (refer to question 1 below). Will the code be made available as open-source?\\n2.\\tComputational cost is not shown and compared.\", \"questions\": \"1.\\tIn Fig 2, the figure labels the component as \\\"model\\\" for pretraining. However, since Section 3.1 (problem formulation) indicates that the contrastive loss is calculated on the encoded embeddings, should this component be referred to as the \\\"encoder\\\" instead? In the experiments with HiVT, HPNet, QCNet, and Forecast-MAE, are only the encoders of these models pretrained? For models like HPNet, which do not clearly differentiate between encoder and decoder architectures, how do you determine which parts of the model to use for pretraining to obtain latent embeddings? How might this selection influence the results?\\n2.\\tline 95-97, I find this claim unclear and potentially misleading. I disagree that MAE pretraining is inflexible; on the contrary, I think the masking pretraining is quite versatile. The masking techniques used in the papers you referenced - Rmp, Traj-mae, Forecast-mae, Sept - appear very similar, suggesting the masking concept is not limited and can be readily applied across various models.\\n3.\\tLine 104, abbreviation CL is not explained before.\\n4.\\tLine 266 mentions that only complete trajectories are used in pretraining, excluding incomplete ones. During pretraining phase, do you reconstruct single-agent trajectory or multi-agent? The same question applies to the prediction phase. Could you specify what percentage of trajectories remain after filtering for each dataset?\\n5.\\tLine 296, for eq.(1), why the case of i=j is not excluded in the second term of the denominator, as in this case, it equals to the positive pairs (numerator part) that try to maximize, so it this seems to conflict with the intent.\\n6.\\tLine 485, different reconstruction strategies (categories and reconstruction target of Table 5) are confusing. It is unclear how different options for reconstructing trajectories starting at t' are justified or implemented. What do you mean by historical information for the sub-scenarios of t'? Could you clarify these strategies?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Some minor questions: i) Why to use L1 loss in TRL instead of L2, while the latter is the base for metrics used (MR, FDE, ADE)? 
ii) The quantity of data that is complete / incomplete might be presented to help readers understand the quantity of additional data introduced through this work.\\n\\nThanks for the suggestion. As common in the literature, we consider L1 loss instead of L2 loss due to practical considerations related to: \\n\\n1. Robustness to outliers: L2 loss penalizes large errors more heavily due to the squaring term, making it sensitive to outliers and noises, while L1 loss could be relatively more robust.\\n2. Balanced training signal: In the early stage of training, where the reconstruction errors could be high and stochastic, L1 loss could provide a more stable learning signal. In the later stage of training, where the reconstruction errors are usually small, L1 loss could provide a stronger learning signal, since the square operation in L2 loss would further reduce its loss value.\\n\\nDuring rebuttal, we also conducted one experiment using L2 loss during pre-training, to study their differences. As shown in the table below, L2 reconstruction results in less improvement compared with L1 reconstruction.\\n\\n| Pre-Training Setup | minFDE | minADE | MR |\\n| ------------------ | ------ | ------ | ----- |\\n| None | 0.969 | 0.661 | 0.092 |\\n| L1 reconstruction | 0.940 | 0.647 | 0.088 |\\n| L2 Reconstruction | 0.948 | 0.654 | 0.089 |\\n\\nAs for the number of complete data in these datasets, sure, the percentage of after-filtering complete trajectories, over all vehicle trajectories is 35%, 27% and 25% for Argo, Argo2 and WOMD respectively.\\n\\n> The prediction metrics lack error bars. This is not a weakness, but a point that can be even improved. I understand that the motion prediction metrics can be unstable sometimes, but adding error bars onto the most important results would significantly improve the reliabilities of the pipeline proposed.\\n\\nThanks for pointing out this. Indeed, most motion prediction methods in the literature have no error bars as the training cost is relatively high. For example, the training of QCNet and HPNet takes about 2 days. We sincerely value your proposal and agree that adding error bars will improve the reliabilities of our proposed pipeline. So we chose a fast training setup (HiVT pre-training) and repeated our pipeline's pre-training and fine-tuning three times with different random seeds. The results are listed in the following table.\\n\\n| Random Seed | minFDE | minADE | MR |\\n| ----------- | -------------- | -------------- | ----------- |\\n| 2023 | 0.939 | 0.647 | 0.088 |\\n| 2024 | 0.940 | 0.649 | 0.088 |\\n| 2025 | 0.939 | 0.646 | 0.088 |\\n| mean / std | 0.939 / 5.7e-4 | 0.646 / 1.5e-3 | 0.088 / 0.0 |\\n\\nThe performance is relatively stable, demonstrating that our proposed pipeline learns robust features for fine-tuning.\", \"title\": \"Response to Reviewer TuBk (3/3)\"}", "{\"comment\": \"> With only one SSL pre-training baseline and a single dataset in Table 2, it may be challenging to substantiate the proposed method\\u2019s advantages over other SSL approaches.\\n\\nThank you for the question. As mentioned in line 403-404 of our paper, to ensure a fair and meaningful comparison, we focused on SSL methods with open-source code. However, to the best of our knowledge, at the time of this paper\\u2019s submission, Forecast-MAE was the only open-sourced SSL method available for Argo and Argo 2. 
Consequently, we included only one comparison in Table 2, where our method demonstrated a significantly larger improvement over Forecast-MAE\\u2019s pre-training method. Beyond the performance boost, we would also like to highlight that, while prior SSL methods lack generality and can only be applied exclusively to a single model/dataset, our SSL pretraining strategy is designed to be flexibly applicable across models and datasets, showing broader applicability.\\n\\nBesides, we believe that the seemingly limited comparisons with other SSL approaches are not a key weakness of our work, but rather highlight the current gaps and the underexplored nature of this research area, which calls for more contributions from the community. To help address this, we will open-source our work, with the hope of further accelerating progress in this field.\\n\\n> If possible, additional visualization results from the authors would be highly valuable.\\n\\nThanks for this suggestion. Yes, we've added more intuitive visualization results of fine-tuning in Appendix A.1 of the revised paper. As also suggested by reviewer `TuBk`, we've added some visualization results of reconstructed trajectories of pre-training in Appendix A.2.\\n\\n> The pre-training needs 32 Nvidia A100 40GB GPUs for 128 epochs, which takes abundant computational resources. However, the improvements are not that significant. In short, the contributions of the papers are limited, especially the technical part. I think the paper cannot meet the standard of ICLR conference,\\n\\nThanks for this careful review. We've realized the training cost descriptions have not been comprehensive in our original paper. The need of 32 Nvidia A100 40GB GPUs is only applicable when we conduct pretraining on all datasets together, which is our maximum training cost setting: all datasets will contribute to around 900k data and prolonged training cost, thus we use more GPUs to accelerate training. As for single-dataset pre-training, we use 8 GPUs. We have updated the claim and made it more clear in line 367-369 of our revised paper. As in Fig. 3, pre-training with 32 epochs can already have an effective performance boost compared with no pre-training (minFDE 0.950 v.s. 0.969), and further increasing training epochs leads to diminishing returns. We pre-train 128 epochs just to explore the model limit when we scale up the training compute for motion prediction.\\n\\nBesides, similar to data scaling in CV and NLP (e.g., ImageNet and the 400B-token datasets for GPT training), the training cost is an inevitable part of exploring scaling laws. To mitigate the need for repeated training, we will share our pre-trained model weights learned from various data sources. We believe our detailed experiments with diverse models and datasets will provide valuable research insights to the community, such as the design of a general pre-training framework for motion prediction and strategies for data mixing in the trajectory domain. With these techniques rarely explored before, and combined with our open-sourced code, we aim to address the \\\"rarely researched and worth researching field\\\", \\\"pave the way toward a foundation model for motion forecasting\\\" and tackle the issue of \\\"data scarcity is a significant problem\\\", as acknowledged by you, Reviewers `ND2Q` and `TuBk`.\", \"title\": \"Response to Reviewer wPEn (2/2)\"}", "{\"comment\": \"> An ablation on how to utilize the additional data in the pre-training stage could be added to make Table 3 even more convincing. 
For example, in Transfer Pre-training and Data-scaled Pre-training, what would happen if the additional data is used to pre-train on the baseline model directly, or even directly to augment the training set for the baseline model?\\n\\nThanks for sharing this interesting idea. As suggested, using Argo2 as additional data to pre-train HiVT and Argo as the downstream target dataset, we explored two pre-training settings:\\n\\n1. We pre-train the model on Argo 2 with the standard motion prediction task, and then fine-tune it to Argo.\\n2. We directly train the model on Argo and Argo 2 with the standard motion prediction task.\\n\\nA minor design choice is that, Argo 2 has a longer trajectory horizon than Argo. When pre-training on Argo 2, we could either randomly sample trajectory segments from the full trajectory, or use a fixed time window. For more comprehensive exploration, we explored both. For the fixed time window choice, considering Argo2 data has 110 waypoints and Argo1 requires 50 waypoints (20 as inputs and 30 as outputs), we use Argo2\\u2019s original current timestep, and collect 20 historic waypoints as input and 30 future waypoints as output. \\n\\n\\nWe show the results from the first approach (random window) in the table below.\\n| Pre-Training Dataset | Fine-Tuning Dataset | minFDE | minADE | MR |\\n| -------------------- | ------------------- | ------ | ------ | ----- |\\n| \\\\ | Argo | 0.969 | 0.661 | 0.092 |\\n| Argo2 | Argo | 1.077 | 0.701 | 0.112 |\\n| \\\\ | Argo+Argo2 | 3.359 | 2.092 | 0.636 |\\n\\nWe also show the results from the second approach (fixed window) in the table below.\\n| Pre-Training Dataset | Fine-Tuning Dataset | minFDE | minADE | MR |\\n| -------------------- | ------------------- | ------ | ------ | ----- |\\n| \\\\ | Argo | 0.969 | 0.661 | 0.092 |\\n| Argo2 (fixed) | Argo | 1.078 | 0.697 | 0.112 |\\n| \\\\ | Argo+Argo2 (fixed) | 1.214 | 0.762 | 0.133 |\\n\\nInterestingly, for both approaches, poor performance is observed when we directly use motion prediction as the pretraining task, or directly train the model from a mix of the two datasets (especially when we random sample from the additional dataset). It could be presumably due to: 1) the features learned by motion prediction are less transferable or robust, compared to the features learned from SSL tasks; 2) the trajectory distribution between different datasets is quite different, and could be pronounced when pre-training is performed on the standard prediction task. \\n\\nWe believe these interesting findings will contribute valuable insights to the field. We sincerely thank the reviewer for this constructive idea, and we will incorporate these results into Table 3 and re-organize the corresponding section in our final paper to present the findings more clearly and cohesively.\", \"title\": \"Response to Reviewer TuBk (2/3)\"}", "{\"comment\": \"Dear reviewer `wPEn`, we sincerely appreciate the careful review and valuable feedback on our paper! We have addressed each of your concerns as follows.\\n\\n> While the method introduces a novel paradigm for trajectory prediction, the pretrain-finetune approach has been widely adopted across various domains like NLP and CV for years, making it less valuable. Consequently, the contribution of SSL for model training is incremental.\\n\\nThank you for highlighting this concern. 
We completely agree that the pretrain-finetune approach has been widely adopted in NLP and CV for years, with numerous renowned works establishing its effectiveness and significance. However, this approach has been much less explored in the motion prediction domain, which presents unique challenges compared to the NLP and CV domains. Unlike the relatively uniform data formats in CV (images) and NLP (tokens), where pixels and text provide straightforward representations, motion prediction relies on diverse and multi-modal data sources such as maps and motion trajectories. Map representations alone exhibit significant variability (e.g., rasterized maps versus vectorized maps), and different datasets often use distinct formats for motion data, further compounding the complexity of this domain. This requires specific domain knowledge of motion prediction tasks to design effective SSL techniques, as demonstrated by recent efforts such as SEPT (ICLR'24) and Forecast-MAE (ICCV'23).\\n\\nIn this context, our SSL task design incorporates a novel contrastive learning objective, which aligns the same agent's embeddings across different time windows. While this approach has not been introduced in previous motion prediction methods, we want to note that our major focus and primary contribution lie in enabling general pre-training across models and datasets for motion prediction, which has not been achieved by prior works. Our main goal is to introduce the first general SSL framework that can be universally applied to various motion prediction models, which is why we designed our pretext tasks in an agent-centric manner. Furthermore, we are among the first to perform data-scaled pretrain-finetune for motion prediction. Through model-agnosticism and dataset-agnosticism, we aim to present an early exploration of the 'scaling laws' in the motion prediction domain, an area that has been significantly underexplored.\\n\\nBesides, the code will be open-sourced, and we here provide an initial and rough preview of it through this anonymous [URL](https://anonymous.4open.science/r/5f404fda8de3e3278e2f794f80bffed0036c827b) for preview. By making our implementation publicly available, we aim to foster transparency and reproducibility, and provide a foundation for further research and development in the motion prediction domain.\\n\\n> The technical contribution of the dataset sampling strategy appears limited: aspects like standardizing representations, ensuring data quality, and maximizing volume and diversity are fundamental considerations when integrating different data sources.\\n\\nThanks for the question. First, we fully agree that standardizing representations, ensuring data quality, and maximizing volume and diversity are essential practices for integrating different data sources. However, similar as mentioned in our previous response, unlike well-established domains such as NLP and CV, motion prediction presents unique challenges that require specific domain knowledge to address effectively. For example, while the CV community has well-established and straightforward data-mixing practices for standardizing representations (e.g., lighting and coloring), ensuring data quality (e.g., removing corrupted images), and diversifying data distributions (e.g., geographic diversity), clear and comprehensive solutions for achieving these goals remain elusive in the motion prediction domain. 
Due to these complexities, previous SSL approaches in motion prediction (e.g., Forecast-MAE, SEPT, TrajMAE, PreTram, and others) have not ventured into data scaling or mixing. We are among the first to investigate data scaling and mixing in the trajectory domain and propose practical techniques to achieve it. We will also open-source our code to promote transparency and collaboration. With the comprehensive studies of different data combinations, we hope our insight from mixing datasets pre-training inspires more researchers in the community to value data scaling/mixing in pre-training of motion prediction, ultimately leading to more thoughtful and effective utilization of motion prediction datasets.\", \"title\": \"Response to Reviewer wPEn (1/2)\"}", "{\"comment\": \"Thank you for your clarification. Please incorporate these changes into the final version to enhance the paper's clarity. I will update my score to 8.\"}", "{\"comment\": \"Dear reviewer `SCK1`, we sincerely appreciate the detailed assessment and valuable feedback on our paper! Here, we provide responses and explanations to your comments and suggestions.\\n\\n> in the pretraining phase, it is unclear how the values for t and t' are selected for each dataset to avoid overlapping for the experiments.\\n\\nThanks for the careful review! We avoid overlapping of t and t\\u2019 by enforcing different sampling ranges when we sample them. For example, in Argo dataset, the trajectory horizon is 50, with 20 as input and 30 as output, where we need to sample sub-scenarios with 20 timesteps. During sampling, we sample t within the range [0, 10] and t' within the range [t+20, 30] so that the two sub-scenarios have no overlapping timesteps. The same strategy, with varied sub-scenario lengths, is applied to other datasets such as Argo 2 and WOMD, to ensure no overlaps.\\n\\n> The horizons of the sub-scenario as input and reconstruction are also not specified; it would be helpful to know if these are consistent with the motion prediction settings (either input or output horizons?) used during fine-tuning.\\n\\nThanks for the question. As we briefly mentioned in lines 251 and 252 in our original paper (now they appear in lines 255 and 256 in our revised paper), to enable better alignment between pre-training tasks and the actual downstream prediction task, both the horizon of the input and reconstruction sub-scenario is designed to be consistent with the input horizon of the target downstream dataset.\\n\\n> Additionally, the default lambda value for the loss function is not provided.\\n\\nThanks for pointing out! The default lambda value is set to 1. We've added this information to line 330 in our revised paper.\\n\\n> Will the code be made available as open-source?\\n\\nCertainly! It is a great pleasure to share our work with the community and contribute to advancing the foundation model in motion prediction. As part of our commitment to openness and collaboration, we plan to open-source our code upon the acceptance of the paper. To facilitate early access and feedback, we created an anonymous code repository during the rebuttal stage. This repository, though a rough and initial version for a preview, includes both pre-training and fine-tuning code, as well as our model checkpoints for pre-training and fine-tuning. You can access it through this [URL](https://anonymous.4open.science/r/5f404fda8de3e3278e2f794f80bffed0036c827b).\\n\\n> Computational cost is not shown and compared.\\n\\nThanks for this constructive suggestion. 
The computation cost during pre-training highly depends on the datasets used. Argo 1, Argo 2, and WOMD provide 205k, 200k, and 487k training scenarios respectively, and all datasets add up to 900k scenarios. Pre-training our model on a single dataset typically takes 1~2 days for 128 epochs with 8 GPUs, and pre-training on multiple datasets can have prolonged training time. Note that for model parameters, our pretraining only introduced a few new MLPs to the original model.\\n\\nRegarding comparison to other SSL methods, as mentioned in our paper, to the best of our knowledge, only Forecast-MAE was open-sourced at the time of paper submission. In Forecast-MAE, the pre-training takes 1~2 days for 60 epochs with 4GPUs. As shown in the table below, we compare our SSL strategy with Forecast-MAE. When using the same number of pre-training epochs, our method outperforms Forecast-MAE, and extending the training epochs further enhances our performance, demonstrating the effectiveness of our method.\\n\\n| Pre-Training Method | Pre-Training Epochs | Backbone Model | minFDE | minADE | MR |\\n| ------------------- | ------------------- | -------------- | ------ | ------ | ----- |\\n| \\\\ | \\\\ | Forecast-MAE | 1.436 | 0.811 | 0.189 |\\n| Forecast-MAE | 60 | Forecast-MAE | 1.409 | 0.801 | 0.178 |\\n| SmartPretrain | 60 | Forecast-MAE | 1.394 | 0.796 | 0.174 |\\n| SmartPretrain | 128 | Forecast-MAE | 1.372 | 0.786 | 0.169 |\", \"title\": \"Response to Reviewer SCK1 (1/3)\"}", "{\"summary\": \"In this paper, the authors present SmartPretrain, a novel self-supervised learning (SSL) framework that is model-agnostic and dataset-agnostic. This framework aims to overcome the challenges associated with the scarcity of large-scale driving datasets for motion prediction and the reliance of existing SSL pre-training methods on specific model structures. SmartPretrain incorporates both contrastive and reconstructive SSL approaches and features a dataset-agnostic scenario sampling strategy that combines multiple datasets. Extensive experiments validate the effectiveness of SmartPretrain in motion prediction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.The field is worthy researching, and the motivations behind the method is clear.\\n2.The paper is well-organized.\\n3.The paper proposes to pre-train from composition of different sources data, which is rarely researched in motion prediction areas before.\", \"weaknesses\": \"1. While the method introduces a novel paradigm for trajectory prediction, the pretrain-finetune approach has been widely adopted across various domains like NLP and CV for years, making it less valuable. Consequently, the contribution of SSL for model training is incremental.\\n2. The technical contribution of the dataset sampling strategy appears limited: aspects like standardizing representations, ensuring data quality, and maximizing volume and diversity are fundamental considerations when integrating different data sources.\\n3. With only one SSL pre-training baseline and a single dataset in Table 2, it may be challenging to substantiate the proposed method\\u2019s advantages over other SSL approaches.\\n4. If possible, additional visualization results from the authors would be highly valuable.\\n5. The pre-training needs 32 Nvidia A100 40GB GPUs for 128 epochs, which takes abundant computational resources. 
However, the improvements are not that significant.\\nIn short, the contributions of the papers are limited, especially the technical part. I think the paper cannot meet the standard of ICLR conference,\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revised Manuscript Submission\", \"comment\": \"Dear AC and Reviewers,\\n\\nWe would like to sincerely appreciate the time and effort you have invested in reviewing our submission. As the \\u201cuploading a revised PDF\\u201d phase is drawing to a close (November 27th), we\\u2019ve uploaded a revised version of our manuscript. Two minor changes are made: 1) we updated a few claims (marked as blue in the main text) based on the feedback from all Reviewers in our responses; 2) we removed Fig. 4 (visualization results) from the main text and incorporated it with additional visualization results in Appendix A.1 (following the suggestions of Reviewer `wPEn` and `TuBk`) to ensure the main text adheres to the 10-page limit.\\n\\nWe are continuing to polish our manuscript after this deadline to incorporate all new results and discussions with reviewers. Thank you once again for your thoughtful review and consideration.\\nLooking forward to your further responses and comments. We are more than happy to provide any further details or explanations. \\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer `TuBk`,\\n\\nThank you for your feedback and for raising your score to 8 accept! We really appreciate your contributive suggestions to significantly improve our work. We fully agree that these additional results would further help to demonstrate the effectiveness of the pre-training, and further promote more explorations toward scaling law in motion prediction community. We will certainly emphasize and organize these points in the final version. Thank you once again for your valuable insights and recommendations!\"}", "{\"comment\": \"Dear reviewer `TuBk`, we sincerely appreciate the thorough assessment and contributive suggestions on our paper! We address each of your questions as follows.\\n\\n> The downstream motion prediction settings, though already very diverse, seem not being able to cover all necessary cases. For example, no methods fine-tuned on Waymo Open Motion Dataset (WOMD) are presented.\\n\\nThanks for pointing this out! In this paper, we present an early attempt to scale multiple trajectory datasets for pre-training in motion prediction. However, training with scaled data has proven to be both GPU-intensive and time-consuming. Upon closer examination, the Argo 1, Argo 2, and WOMD provide 250k, 200k, and 487k training scenarios, respectively. Among these, the WOMD stands out due to its larger size, a greater number of vehicles per data sample, and the need for a more complex data loader. Additionally, models designed for WOMD (e.g., MTR) are typically larger and more resource-intensive than those for Argo or Argo 2.\\n\\nGiven our limited computational resources, we focused on Argo and Argo 2 for this paper and did not include results on WOMD at the time of submission. However, we fully agree that fine-tuning on WOMD is crucial for evaluating cross-dataset pre-training in motion prediction. While these experiments are time-consuming to run, we are currently making every effort to obtain these results and will add them to our paper as soon as they become available. 
Thank you for this excellent suggestion to help make our paper more complete!\\n\\n> The pre-training performance should be illustrated to prove that pre-training tasks can be done successfully. For example, can you show some examples of reconstructed trajectories?\\n\\nThanks for this suggestion. We've added some visualization results of reconstructed trajectories in Appendix A.2 of the revised manuscript. As also suggested by reviewer 2, we've added some more intuitive visualization results of fine-tuning in Appendix A.1.\\n\\n> Direct data mixing in Data-scaled Pre-training is a natural choice, but might not be optimal. For example, WOMD has significant domain gap compared to Argoverse. In this case, a biased weight might be helpful in pre-training stage to lower the influence of WOMD-Argoverse domain gaps.\\n\\nThanks for this insightful comment. Yes, we've done some experiments in our early experiments when exploring data scaling and balancing. Specifically, we use a weight of 40% to WOMD and it results in about 200k data which is about 1:1 with Argo. As shown below, we explored pretraining with WOMD, and pretraining with mixing of the two datasets.\\n\\n\\n| Pre-Trainng Datasets | Fine-Tuning Dataset | minFDE | minADE | MR |\\n| -------------------- | ------------------- | ------ | ------ | ----- |\\n| \\\\ | Argo | 0.969 | 0.661 | 0.092 |\\n| WOMD_0.4 | Argo | 0.950 | 0.653 | 0.088 |\\n| WOMD_1.0 | Argo | 0.946 | 0.652 | 0.089 |\\n| Argo+WOMD_0.4 | Argo | 0.937 | 0.647 | 0.087 |\\n| Argo+WOMD_1.0 | Argo | 0.935 | 0.645 | 0.086 |\\n\\nAs shown in the table, in both pre-training settings, the fine-tuning performance slightly drops when we only use 0.4 of the WOMD data, presumably due to the increased diversity of the data.\", \"title\": \"Response to Reviewer TuBk (1/3)\"}", "{\"comment\": \"Dear reviewer `ND2Q`, we sincerely appreciate the thoughtful review and precious feedback on our paper! We have carefully addressed your concerns as outlined below.\\n\\n> In the experiment section (Table 1), it is noted that both HPNet and Forecast-MAE were not pretrained on all three datasets, reportedly due to \\\"compute constraints.\\\" This reasoning should be clarified further to help readers understand the specific limitations or challenges involved.\\n\\nThanks for pointing this out! In this paper, we present an early attempt to scale multiple trajectory datasets for pre-training in motion prediction, drawing inspiration from the successes in NLP and CV. However, training with scaled data has proven to be both GPU-intensive and time-consuming. Furthermore, exploring the influence of different dataset combinations could easily double the number of required experiments, significantly increasing the computational cost.\\n\\nGiven these constraints and the limited time before the paper submission, we focused our experiments on data-scaled pre-training with one model per dataset, ultimately selecting HiVT for Argo and QCNet for Argo2. In Table 3, we present the effects of different dataset scaling strategies, such as same-dataset pre-training, cross-dataset pre-training, and pre-training on all datasets. We believe these experiments provide valuable signals and insights to the community. 
Additionally, as recommended by the reviewer `TuBk`, we are now exploring more pre-training settings on WOMD, which we believe will further enhance the completeness and potential impact of the proposed method.\\n\\n> The reconstruction task is commonly employed in motion forecasting pretraining frameworks, with two main approaches: predicting masked tokens (as in Forecast-MAE) or predicting masked tail trajectories (as in SEPT[2]). The proposed method follows a strategy similar to the latter, which has been shown to outperform the token prediction approach in [2]. It would be beneficial to emphasize the main distinctions of the proposed method from this established approach to further highlight its contributions.\\n\\nThanks for pointing out this and it's a valuable question! The main distinction of our reconstruction task from SEPT and Forecast-MAE is that our input and reconstruction target are not fixed, and are temporally varied depending on the subs-scenario sampling (random t and t\\u2019). Therefore our reconstruction task is more challenging due to the more randomness introduced to input and output trajectories. It contributes to learning more informative and transferable features.\\n\\nBesides, we will open-source our code upon acceptance of our paper to further contribute to the community, and here we provide a rough and initial preview for our code and checkpoint in this anonymous [URL](https://anonymous.4open.science/r/5f404fda8de3e3278e2f794f80bffed0036c827b).\", \"title\": \"Response to Reviewer ND2Q\"}", "{\"metareview\": \"This paper proposes a pretraining framework for trajectory prediction tasks using real-world datasets. Key ideas include self-supervised pre-training methods (contrastive and reconstruction learning). Experiments is performed on multiple datasets including Argoverse 1/2, Waymo Open Motion Dataset and a series of ablation studies demonstrate the approach.\\n\\nMost of the reviewers are positive about the paper. The most critical review points out that the methods have been used in other domains such as NLP, however application to this domain is novel. Consequently, in light of extensive experimentation, while being incrementally innovating my recommendation is accept as a poster.\", \"additional_comments_on_reviewer_discussion\": \"Majority of the reviews are positive and the most critical review did not show strong conviction towards arguing against accepting the paper.\"}", "{\"title\": \"Great job!\", \"comment\": \"I would like to thanks the authors for providing detailed experiments and explanations on all my questions and concerns. The rebuttals are convincing to me. I would raise my evaluation to accept to recognize the improved reliability of the proposed method, considering the evidences in the rebuttal experiments.\\n\\nI would sincerely hope that the authors could re-organize these additional experiments into the manuscript, during the camera-ready phase if accepted. I believe these would further help to convince readers on the effectiveness of the pre-training. And again, I highly appreciate the author's thorough and solid evidences provided.\"}", "{\"summary\": \"This paper introduces a self-supervised learning framework for motion forecasting that is both model-agnostic and dataset-agnostic. The approach unifies data samples from various motion forecasting datasets, such as WOMD, AV1, and AV2, making it feasible to pretrain models on large-scale, multi-source data. 
The framework incorporates both contrastive learning and a reconstruction task, achieved through Trajectory Contrastive Learning (TCL) and Trajectory Reconstruction Learning (TRL). Experimental results demonstrate significant performance improvements across multiple architectures and datasets, validating the effectiveness of the proposed methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-organized and easy to follow. The experimental results are thorough, covering various datasets and methods, and providing strong evidence for the method's effectiveness.\\n\\n2. While contrastive learning and reconstruction tasks are common in self-supervised learning frameworks for motion forecasting, this submission introduces some innovative strategies that add value.\\n\\n3. The approach to unifying data representation from diverse sources could pave the way toward a foundation model for motion forecasting.\", \"weaknesses\": \"While the novelty of this submission may be somewhat limited and most techniques are already verified in many previous works, it does not present any clear weaknesses. For specific considerations, please refer to the questions section.\", \"questions\": \"1. In the experiment section (Table 1), it is noted that both HPNet and Forecast-MAE were not pretrained on all three datasets, reportedly due to \\\"compute constraints.\\\" This reasoning should be clarified further to help readers understand the specific limitations or challenges involved.\\n\\n2. The reconstruction task is commonly employed in motion forecasting pretraining frameworks, with two main approaches: predicting masked tokens (as in Forecast-MAE) or predicting masked tail trajectories (as in SEPT[2]). The proposed method follows a strategy similar to the latter, which has been shown to outperform the token prediction approach in [2]. It would be beneficial to emphasize the main distinctions of the proposed method from this established approach to further highlight its contributions.\\n\\n[1] Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with Masked Autoencoders\\n\\n[2] SEPT: Towards Efficient Scene Representation Learning for Motion Prediction\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer `ND2Q`,\\n\\nThank you for your feedback and for raising your score to 8 (accept)! We really appreciate your valuable suggestions to significantly improve our work. We will certainly emphasize and incorporate these changes in the final version. Thank you once again for your valuable insights and recommendations!\"}", "{\"comment\": \"Dear Reviewer `SCK1`:\\n\\nThank you for your feedback and for maintaining your current score. We address each of your new comments as follows:\\n\\n> Interestingly, row 4 (complementary trajectory of the input sub-scenario, basically motion prediction of t) and row 5 (trajectory of the other sub-scenario t' input, one of the core things of this paper) show relatively equivalent good performance in the ablation results. What could this imply? Does it suggest that pretraining with motion prediction tasks alone is sufficient for achieving strong performance on a single dataset?\\n\\nThanks for the question. All results shown in Table 5 are pre-trained with our contrastive learning task as well, and we have adapted different reconstruction targets based on it. 
We present the performance of only doing the reconstruction learning task in Table 4, which indicates: 1) the reconstruction task alone can effectively improve prediction accuracy in isolation, and 2) combining both tasks yields the largest improvement.\\n\\nAlso, as inspired by Reviewer `TuBk`, we've added an experiment using the standard motion prediction task as the pre-training task. The results are relatively poor compared with the SSL task (seen in our responses to Reviewer `TuBk`).\\n\\n> The sentence you provided in the rebuttal makes some sense. However, the description in line 96 is not accurate. MAE is a general framework, and it can be applied to tasks like masking and reconstructing trajectories only, as shown in methods like Rmp. I recommend revising this line.\\n\\nThanks for the careful review. We will revise it to make our claim more accurate in the final version. Specifically, we will separate the MAE approach with/without map reconstruction and address the \\u201cmasking and reconstructing trajectories only\\u201d method like Rmp.\\n\\n> I hope this important statistic and the details of how non-overlapping t and t' trajectories are sampled for each dataset, can be included in the appendix of the final version.\\n\\nThank you for this suggestion. We agree that these details are important to fully understanding the method design and will add them to the appendix in the final version.\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thank you very much for your efforts in providing clarifications and conducting additional experiments. Your responses address most of my concerns and questions. I have reviewed all the reviewers' comments and your replies. I have just a few comments below. I will maintain my current score.\\n\\n$\\\\ $\\n\\n> (Clarification on Table 5) The last two rows represent the category \\\"reconstruction with predictive information\\\", which means we don't include trajectories of the sub-scenario t in our reconstruction targets, but instead, aim to predict the remaining trajectories. In row 4, we reconstruct the complementary trajectory of the input sub-scenario (20 points as input and complementary 30 points for reconstruction in Argo). Row 5 aims to reconstruct the trajectory of the other sub-scenario (sub-scenario t').\\n\\nThanks for the detailed explanations for Table 5, I do think a clearer elaboration in the final version is needed. I understand the table much better now, for Argo dataset, for instance, input is first [t, t+20], the reconstructed output from rows 2 to 5, represent to [t, t+20], [t, t+50], [t+20, t+50], [t', t'+20], respectively.\\n\\nInterestingly, row 4 (complementary trajectory of the input sub-scenario, basically motion prediction of t) and row 5 (trajectory of the other sub-scenario t' input, one of the core things of this paper) show relatively equivalent good performance in the ablation results. What could this imply? Does it suggest that pretraining with motion prediction tasks alone is sufficient for achieving strong performance on a single dataset?\\n\\n\\n> (regarding the statement in lines 95-97) The point we want to emphasize is that the MAE approach based on map reconstruction is not general.\\n\\n> \\\"MAE approaches demand that each trajectory and map segment must have an explicit feature representation to enable reconstructive pre-training\\\" (line 96, the revised version)\\n\\nThe sentence you provided in the rebuttal makes some sense. However, the description in line 96 is not accurate. 
MAE is a general framework, and it can be applied to tasks like masking and reconstructing trajectories only, as shown in methods like Rmp. I recommend revising this line.\\n\\n > The percentage of after-filtering complete trajectories, over all vehicle trajectories is 35%, 27% and 25% for Argo, Argo2 and WOMD respectively.\\n\\nI hope this important statistic and the details of how non-overlapping t and t' trajectories are sampled for each dataset, can be included in the appendix of the final version.\"}", "{\"comment\": \"> Line 266 mentions that only complete trajectories are used in pretraining, excluding incomplete ones. During pretraining phase, do you reconstruct single-agent trajectory or multi-agent? The same question applies to the prediction phase. Could you specify what percentage of trajectories remain after filtering for each dataset?\\n\\nThanks for the insightful question. Regarding single-agent or multi-agent pre-training, we follow the backbone model's original training setting. Specifically, multi-agent training has become popular in recent literature, since it can be seen as a data augmentation strategy to enhance data diversity and model performance. All four backbone models considered in our experiment adopt multi-agent training, thus our pre-training did the same. Regarding the downstream prediction training phase, again we follow the backbone models\\u2019 original setting and adopt multi-agent training.\\n\\nThe percentage of after-filtering complete trajectories, over all vehicle trajectories is 35%, 27% and 25% for Argo, Argo2 and WOMD respectively.\\n\\n> Line 296, for eq.(1), why the case of i=j is not excluded in the second term of the denominator, as in this case, it equals to the positive pairs (numerator part) that try to maximize, so it this seems to conflict with the intent.\\n\\nThanks for the detailed question! We follow classical contrastive learning methods (i.e., SimCLR, MoCo), to consider all data pairs, including the positive pair, in the denominator, so that the denominator serves as a normalization term. This ensures that the loss function is mathematically consistent and effectively balances the contributions of positive and negative pairs. We\\u2019re also happy to provide more discussion/information regarding the design of Eq.1, if needed.\\n\\n> Line 485, different reconstruction strategies (categories and reconstruction target of Table 5) are confusing. It is unclear how different options for reconstructing trajectories starting at t' are justified or implemented. What do you mean by historical information for the sub-scenarios of t'? Could you clarify these strategies?\\n\\nThanks for pointing this out and we are happy to clarify these ablation strategies. Table 5 aims to ablate the influence of different reconstruction targets, on the downstream prediction performance. In Table 5, the first row represents the variant where we do not conduct reconstructive pretraining, and only contrastive pretraining is considered. The second and third rows belong to one category \\\"reconstruction with historical information\\\", which means the trajectories of the input sub-scenario t are included in the reconstructed trajectories. For example, the reconstruction target of row 2 is set as exactly the trajectories of the sub-scenario t, which forms a self-reconstruction task. Row 3 reconstructs the trajectories of the entire scenario (e.g., 20 points as input to reconstruct all 50 points in Argo). 
The last two rows represent the category \\\"reconstruction with predictive information\\\", which means we don't include trajectories of the sub-scenario t in our reconstruction targets, but instead, aim to predict the remaining trajectories. In row 4, we reconstruct the complementary trajectory of the input sub-scenario (20 points as input and complementary 30 points for reconstruction in Argo). Row 5 aims to reconstruct the trajectory of the other sub-scenario (sub-scenario t').\\n\\nThe results of Table 5 indicate that the last two reconstruction strategies show the biggest performance boost, and we choose row 5 as our final actual model variant.\", \"title\": \"Response to Reviewer SCK1 (3/3)\"}" ] }
BmYzoPppij
LLMCO2: Advancing Accurate Carbon Footprint Prediction for LLM Inferences
[ "Zhenxiao Fu", "Fan Chen", "Shan Zhou", "Haitong Li", "Lei Jiang" ]
Throughout its lifecycle, a large language model (LLM) generates a substantially larger carbon footprint during inference than training. LLM inference requests vary in batch size, prompt length, and token generation number, while cloud providers employ different GPU types and quantities to meet diverse service-level objectives for accuracy and latency. It is crucial for both users and cloud providers to have a tool that quickly and accurately estimates the carbon impact of LLM inferences based on a combination of inference request and hardware configurations before execution. Estimating the carbon footprint of LLM inferences is more complex than training due to lower and highly variable model FLOPS utilization, rendering previous equation-based models inaccurate. Additionally, existing machine learning (ML) prediction methods either lack accuracy or demand extensive training data, as they inadequately handle the distinct prefill and decode phases, overlook hardware-specific features, and inefficiently sample uncommon inference configurations. We introduce LLMCO2, a graph neural network (GNN)-based model that greatly improves the accuracy of LLM inference carbon footprint predictions compared to previous methods.
[ "carbon footprint", "LLM inferences", "energy prediction" ]
https://openreview.net/pdf?id=BmYzoPppij
https://openreview.net/forum?id=BmYzoPppij
ICLR.cc/2025/Conference
2025
{ "note_id": [ "fS6NO54qdW", "dYLqo9FS9P", "SKgH8Tq7W9", "Aw7ZJYMkYG" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1732641505330, 1730519721248, 1730687079960, 1730464668564 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3319/Authors" ], [ "ICLR.cc/2025/Conference/Submission3319/Reviewer_LWKb" ], [ "ICLR.cc/2025/Conference/Submission3319/Reviewer_cevv" ], [ "ICLR.cc/2025/Conference/Submission3319/Reviewer_zUnY" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents LLMCO2, a graph neural network that provides the inference energy consumption cost of LLMs based on the computation graph and deployed hardware of said LLM. They account for the prefill- and decode-phase of the LLM, utilize hardware characteristics, and keep tensor parallelism in mind, which limits the mean absolute error percentage compared to previous state-of-the-art to ~ 20% w.r.t. ground truth.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"S1: The authors found a research gap that the previous SOTA did not address.\", \"s2\": \"This is an important topic to work on, and I am glad it is being tackled from a solid basis in empirical research (performing a roofline analysis, considering multiple models, etc.).\", \"s3\": \"An interesting approach to use the computation graph as input to a GNN. I feel there is a lot of value in this following this further in this line of research.\", \"weaknesses\": \"W1: I seriously doubt the credibility of the papers cited in this line of research, like Faiz et al., which is an important building block in the argumentation of this paper (see first and second paragraph). They cite Strubell et al. 2019, which was famously (or at least I thought) debunked by Patterson et al. (https://arxiv.org/pdf/2104.10350) as providing an 88x higher estimate of the energy consumption compared to what was really used, highlighting how hard it is to perform this research in practice. Patterson et al. was also not cited, which, from my understanding, is one of the few papers of people with actual end-to-end access to all production metrics, making this more trustworthy than most other papers that estimate costs. I take specific issue with general statements \\\"(...) with a single epoch requiring three times the FLOPs of an inference\\\", \\\"(...) the period required for inference emissions to match training emissions is rapidly decreasing.\\\" and \\\"For instance, training the Google T5 LLM generates 40% more carbon emissions than a round-trip flight between San Francisco and New York (Faiz et al., 2024).\\\" While the first two statements would have to have a large caveat of the specific use-case attached to it and potentially not being representative at the current time, the last statement is, to my understanding, entirely wrong. Patterson et al. states that T5 training took 46.7 tCO2e (Table 4), which is 26% of a roundtrip between NYC and SF (180 tCO2e). The authors argue that this took 140%, an error by a factor of 140/26 ~= 5x.\", \"w2\": \"I question the addition of empirical data from older hardware towards the training dataset of LLMCO2. 
Inference for frontier models like OpenAI's (which was used in the motivation) likely does not happen on older GPU architectures due to their missing support of sparsity and reduced FLOP and memory bandwidth per $ performance, making that analysis void. When arguing that inference is costly at scale, one needs to analyze the actual costs of said scale and not use older hardware for that comparison.\", \"w3\": \"The authors cite the Azure LLM trace as a basis for their decision to use a <2 batch size and argue that this is representative. The same paper (Patel et al. '24) also says this only counts for the prefill (Sec 3.D) but not for decoding, which scales for that particular model-hardware setup until a batch size of 64 (only limited by memory in that specific instance, making the results even more questionable as this is dependent on the context lengths of the requests due to them affecting the KV-cache). This is not addressed by LLMCO2, making its prediction very likely not applicable to other providers and use cases. Additionally, Patel et al. argue that the largest latency (hence, time and energy consumption at ~50% energy consumption) comes from the decoding phase, so they split these two phases to run on different hardware setups. However, the authors do not consider this use case (at least to my understanding) when generating the dataset to train LLMCO2, discounting their own motivation of \\\"Disregard for prevalent configurations\\\", while Patel et al. was used to argue this exact motivation.\", \"w4\": \"The idle time of the nodes was not considered, which is also likely to be a large chunk of energy spent if services are not utilized to capacity. Arguably, this can be omitted by assuming that on-demand up-/downscaling of hardware works efficiently. However, this was never stated. Generally, the assumptions of when LLMCO2 will likely provide accurate results and when it does not are not stated concretely enough.\", \"w5\": \"While I appreciate the empirical-based approach of this paper, it is lacking on multiple fronts. No SOTA deployment methods are discussed (while citing Patel et al., which proposes one instance when splitting prefill and decoding phase on different hardware generations), and continuous batching was barely addressed in how it affects energy consumption. No sparsity, no speculative decoding or streaming, no TensorRT, no NVIDIA Triton, no vLLM, no torch.compile, and a very limited definition of workloads. Just as an example, if speculative decoding improves the decoding phase by 2x, its carbon emissions will be approximately halved (due to the smaller model being multiple magnitudes faster and cheaper to run, making it a negligible cost). This is a very common technique and is likely to be used by most inference providers, which is not addressed at all.\", \"w6\": \"Peak FLOP is wrong for the H100 in Table 2 (see the H100 datasheet https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet). FP32 is stated as 989 TFLOP, while it is 67 TFLOP (presumably, the authors mistyped TF32, which would also be wrong as this is the performance with sparsity). FP16 performance is stated as 1979 TFLOP, but this is 989 TFLOP, as this is the performance with sparsity enabled. Same thing for INT8. Same issue for all precisions for the A100. This makes me question the other results from this paper if such an important consideration was overlooked.\", \"w7\": \"The second paragraph states, \\\"(...) 
the period required for inference emissions to match training emissions is rapidly decreasing.\\\" While I agree that this can be the case, the way this is argued is not conclusive. We see a trend that compute utilization is doubling every 6 months (https://epochai.org/trends#compute). To argue the point of inference costs becoming closer to the training costs, the same analysis would need to happen for inference (presumably with AI usage numbers, which are likely to be kept private at large). However, due to the issues from W5, the trace used in this paper would not be representative of the real world, making it probably impossible to argue outside of specific deployments.\", \"w8\": \"Eq. 1, \\\"energy_per_operation x PUE x carb_intensity.\\\" I doubt that this is the correct way to estimate energy consumption. This excludes the energy for the nodes itself, including networking, PC internals, and cooling. A DGX H100 node uses roughly 10.2kW, while 8xH100 use 8x700W=5.6kW, making your resulting estimate off by a factor of 2x (https://docs.nvidia.com/dgx/dgxh100-user-guide/introduction-to-dgxh100.html)\", \"w9\": \"What were the exact dataset splits? The dataset and evaluation sections seem to suggest that all evaluated models were part of the training dataset, making me afraid of an overfit happening.\\n\\nW10 (Summary and final notes): While the basic premise of the paper is interesting to use a computational-graph analysis and using a GNN to process it, the application of this model would lead to the following problems:\\n- A misguided understanding of how energy costs come about from LLM inference due to very limited real-world application scenarios\\n- Potentially misleading results about being energy-efficient in theory or inference being much more grave w.r.t. energy-consumption than training. It is important to note that I am not averse to thinking this might be the case, but how it is stated here is definitely wrong. I fear this work being misunderstood and misused, similar to Strubell et al. (and how prior cited work like Faiz et al. was used similarly by the authors).\", \"minor_issues\": [\"Figures 9, 10, and 11 are squished and are hardly readable.\", \"Figure 10 was referenced when the authors wanted to reference Figure 11 in \\\" Figure 10 shows the operational carbon footprint of Bloom-7b1 (...)\\\".\", \"Figure 11: It is hard to understand what training is and what is inference due to the legend's labels.\"], \"questions\": \"Q1: Given the discrepancies in the cited data (as outlined in W1, W2, and W3), how do you plan to reassess and strengthen the motivation for your research?\", \"q2\": \"Regarding the issues raised in W4, W6, W7, and W8, could you elaborate on your decision-making process for including or excluding certain factors in your model? What were the trade-offs you considered, and how do you plan to address these concerns in future iterations of your work?\", \"q3\": \"Why were real-world deployment types outlined in W5 not included? The results from your model are not representative if even one of the techniques is used, potentially making the model void outside of the exact specifications under which it was developed in this paper. My current understanding points me towards a mathematical model based on empirical measurements rather than a regression-based predictor for energy consumption. 
Even if the mathematical model might not be perfectly accurate, I am fairly confident in estimating a tight lower and upper bound of energy consumption with any of these techniques used.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces LLMCO$_2$, a GNN-based pipeline to estimate the carbon footprint of LLM inference.\\nLLMCO$_2$, separate LLM's prefill and decode stage during inference, and uses focused sampling targeting common inference configurations. The system demonstrates significant improvement in prediction accuracy compared to existing methods across various LLM architectures and GPU configurations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Addresses the critical issue of carbon footprint estimation for LLM inferences\", \"This paper separate modeling of prefill/decode phases and provides more accurate estimate\", \"Use GNN to predict the carbon footprint and shows promising accuracy.\"], \"weaknesses\": [\"This is good work. However, this paper is more suitable for an HPC-related conference; I didn't see much relation to the submitted track of `alignment, fairness, safety, privacy, and societal considerations'.\", \"While the paper presents several innovations, it looks like combining existing methods such as GNNs, the Roofline model, and active learning. It may lack of novelty.\"], \"questions\": [\"Could you provide more details about the exact training setup of the GNN, specifically: what is the target label during training, how is it measured and at what granularity, and how do you handle the separation of prefill and decode phases in your ground truth measurements?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a graph neural network-based model, called LLMCO2, which aims to improve the accuracy of carbon footprint prediction in the LLM inference process.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The organization of this paper is good.\\n2. It can improve the accuracy of LLM inference carbon footprint predictions compared to previous methods.\", \"weaknesses\": \"1. The paper proposes a GNN-based model to predict the carbon footprint of LLM inferences. The integration of graph embedding, data sampling, and the Roofline model does not introduce fundamentally new concepts but rather repurposes established techniques in a novel application context. This approach lacks substantial innovation. The primary contribution is an improvement in prediction accuracy over previous models. However, these improvements are incremental, and the research lacks a clear breakthrough in model efficiency or in introducing a new predictive paradigm.\\n\\n2. While the model aims to predict the carbon footprint, it does not adequately address the broader trade-offs in energy efficiency versus accuracy. The model's usage of multiple GPUs and its impact on real-world energy savings are not thoroughly discussed.\", \"questions\": \"The paper\\u2019s focused data sampling strategy is likely to cause biases in the model\\u2019s predictions. 
The sampling method omits rarely encountered configurations, which might negatively affect the model's robustness when faced with less common but real-world scenarios, potentially leading to inaccuracies in edge cases.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BmG88rONaU
Test-time Adaptation for Cross-modal Retrieval with Query Shift
[ "Haobin Li", "Peng Hu", "Qianjun Zhang", "Xi Peng", "XitingLiu", "Mouxing Yang" ]
The success of most existing cross-modal retrieval methods heavily relies on the assumption that the given queries follow the same distribution as the source domain. However, such an assumption is easily violated in real-world scenarios due to the complexity and diversity of queries, thus leading to the query shift problem. Specifically, query shift refers to the online query stream originating from a domain that follows a different distribution from the source one. In this paper, we observe that query shift would not only diminish the uniformity (namely, within-modality scatter) of the query modality but also amplify the gap between query and gallery modalities. Based on the observations, we propose a novel method dubbed Test-time adaptation for Cross-modal Retrieval (TCR). In brief, TCR employs a novel module to refine the query predictions (namely, retrieval results of the query) and a joint objective to prevent query shift from disturbing the common space, thus achieving online adaptation for the cross-modal retrieval models with query shift. Extensive experiments demonstrate the effectiveness of the proposed TCR against query shift. Code is available at https://github.com/XLearning-SCU/2025-ICLR-TCR.
[ "Test-time adaptation", "Cross-modal retrieval", "Query shift" ]
Accept (Spotlight)
https://openreview.net/pdf?id=BmG88rONaU
https://openreview.net/forum?id=BmG88rONaU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uxzLFvZGl3", "uxymSEr41o", "u3W7V5TM8L", "terCzAA1Vp", "teRDr5hSVm", "sKBJ4Mwc39", "rFAzh2zook", "mRZ9B5WoW6", "mE8guV7hHG", "k45IR9X7MA", "hxVWDDXfVf", "fqm2dlXigo", "Wd2zQPPXUe", "VUIKK2FpxF", "U5S8sMqVM7", "U21XPvHQ56", "TFIv0OgAt3", "S2zcNIO9fu", "Oo0qrPfYew", "Ln7DhYhl87", "FnUP37m4nn", "CHJFd5fKc0", "9Tpxj0sRAf", "9JE6qth1qe", "7DcjWzbPwn", "69DvrGuIgT", "4jaaBwo4gZ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732185473982, 1732172279770, 1732172070945, 1732172463749, 1732176529251, 1730656771783, 1732172376510, 1732172676973, 1732172663660, 1732172577690, 1732636935414, 1732415569231, 1730182239257, 1732171964119, 1732172612658, 1732172145879, 1732172507707, 1730805665239, 1732427253718, 1732172023150, 1734397745813, 1732427350007, 1737524071831, 1730100745051, 1732679390623, 1732172184480, 1732172125058 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_t1ei" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_E6xm" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_E6xm" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_jDwD" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_t1ei" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_Wks8" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Area_Chair_PRz7" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10708/Reviewer_jDwD" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ], [ "ICLR.cc/2025/Conference/Submission10708/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your prompt response and for upgrading your score! We deeply appreciate the time and effort you dedicated to reviewing our work. Your constructive feedback has been invaluable in helping us refine and improve the paper.\"}", "{\"comment\": \"| Mixed Corruption Types | TR@1 Level 1 | TR@1 Level 2 | TR@1 Level 3 | TR@1 Level 4 | TR@1 Level 5 | IR@1 Level 1 | IR@1 Level 2 | Avg. 
|\\n| ---------------------- | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | -------- |\\n| BLIP ViT-B/16 | 68.8 | 64.5 | 61.4 | 54.5 | 44.9 | 42.5 | 41.1 | 53.9 |\\n| Tent | 70.0 | 67.0 | 64.4 | 56.4 | 33.2 | 42.2 | 38.5 | 53.1 |\\n| EATA | 71.7 | 68.2 | 64.7 | 58.9 | 48.0 | 43.2 | 41.9 | 56.7 |\\n| DeYO | 71.5 | 68.4 | 65.1 | 59.9 | 48.3 | 42.9 | 41.8 | 56.8 |\\n| Ours | **73.3** | **70.4** | **66.9** | **61.5** | **53.6** | **43.8** | **42.3** | **58.8** |\\n\\n**The performance superiority of TCR over all baselines under both Mixed Severity and Mixed Corruption Types settings demonstrates its robustness against non-i.i.d query shift.**\\n\\n> Q2: Regarding the emergence of query shift, **I am curious whether temporal issues, such as temporal shifts or concept drift discussed in [1-3], are present in real-world scenarios**. Could the authors provide relevant discussion on this aspect?\\n> [1] Evolving standardization for continual domain generalization over temporal drift. *NIPS 2023*. \\n> [2] Temporal domain generalization with drift-aware dynamic neural networks. *arXiv preprint arXiv:2205.10664* (2022).\\n> [3] Online Boosting Adaptive Learning under Concept Drift for Multistream Classification. AAAI 2024.\\n\\n**A2**: Thanks for your constructive comments. We completely agree with your insightful opinion that the temporal issues would lead to query shift. **We have cited these related works and established some connections with them in Appendix E of the revised manuscript.** To be specific, as discussed in [D], the underlying distributions of data are distinct at different times, leading to **concept drift**. For instance, in weather forecasting, data is collected across diverse distributions (e.g., sunny, frost, snow). It is noteworthy that we have evaluated TCR under query shift caused by various weather conditions (e.g., frost, snow, fog). **The corresponding results from COCO-C (Table 1 in the manuscript) and Flickr-C (Table 7 in the manuscript) benchmarks demonstrate the robustness of TCR against concept drift to some extent**. For your convenience, we attach the corresponding numerical results (regarding TR@1) in the following tables.\\n\\n| COCO-C Benchmark | Snow | Frost | Fog | Bright | Avg. |\\n| ---------------------- | -------- | --------- | -------- | ---------- | -------- |\\n| BLIP ViT-B/16 | 32.3 | 52.2 | 57.0 | 66.8 | 52.1 |\\n| Tent | 31.9 | 48.7 | 56.3 | 66.5 | 50.9 |\\n| EATA | 45.6 | 56.7 | 62.5 | 71.4 | 59.0 |\\n| SAR | 38.0 | 56.2 | 59.1 | 70.6 | 56.0 |\\n| READ | 39.9 | 49.9 | 58.4 | 70.3 | 54.6 |\\n| DeYO | 37.5 | 59.7 | 66.4 | 71.2 | 58.7 |\\n| Ours | **56.5** | **64.1** | **71.0** | **73.4** | **66.3** |\\n| **Flickr-C Benchmark** | **Snow** | **Frost** | **Fog** | **Bright** | **Avg.** |\\n| BLIP ViT-B/16 | 66.4 | 80.4 | 79.5 | 85.5 | 78.0 |\\n| Tent | 67.2 | 80.9 | 79.6 | 86.8 | 78.6 |\\n| EATA | 72.0 | 83.7 | 82.5 | 87.9 | 81.5 |\\n| SAR | 71.9 | 83.1 | 82.2 | 87.9 | 81.3 |\\n| READ | 71.7 | 83.8 | 81.9 | 87.7 | 81.3 |\\n| DeYO | 73.1 | 84.1 | 83.2 | 88.6 | 82.3 |\\n| Ours | **78.2** | **85.2** | **85.7** | **89.5** | **84.7** |\"}", "{\"comment\": \">Q.3: This paper introduces several hyperparameters, such as the temperature parameter (\\u03c4) for controlling the trade-off between smoothness and sharpness, and others for balancing the different loss terms. **How sensitive is TCR to these hyperparameters, and how easy is it to tune them for new domains**? 
Some more results from these ablations studies will be very beneficial.\\n\\n**A.3**: Thanks for your comments. There are two hyperparameters in the paper, i.e., the temperature $\\\\tau$ for controlling the trade-off between smoothness and sharpness, the trade-off parameter $t$ for controlling the intra-modality uniformity. For parameter sensitivity analysis, we have conducted ablation studies about $\\\\tau$ and $t$ in Fig. 4(a) and Fig. 4(b), respectively. For your convenience, we attach the corresponding numerical results of Fig. 4(a) in the following tables.\\n\\n| $\\\\tau$ | 1e-5 | 1e-4 | 1e-3 | 5e-3 | 0.01 | 0.02 | 0.05 | 0.1 |\\n| ------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\\n| Base2COCO | 52.4 | 53.0 | 52.9 | 55.7 | 56.4 | 57.8 | 57.0 | 55.2 |\\n| ICFG2CUHK | 32.9 | 33.9 | 34.5 | 34.6 | 35.8 | 36.2 | 35.9 | 34.7 |\\n| Flickr-C | 58.8 | 62.3 | 64.0 | 66.0 | 67.3 | 69.1 | 65.2 | 64.7 |\\n| Base2Fashion | 21.1 | 21.5 | 23.1 | 25.0 | 25.8 | 26.4 | 24.7 | 13.7 |\\n\\n**The results denote that TCR demonstrates stable performance within the range of [0.001,0.05] and achieves the best performance when $\\\\tau=0.02$.**\\n\\nIn response to your concern, **we conduct more ablations studies on the trade-off parameter $t$ under \\\"Flickr-C\\\", \\\"ICFG2CUHK\\\", \\\"Base2Fashion\\\" settings**. The results are summarized in Fig. 4(b) within the updated manuscript. For your convenience, we attach the corresponding numerical results (regarding Recall@1) in the following tables.\\n\\n| t | 0.1 | 1.0 | 2.0 | 5.0 | 10.0 | 20.0 | 100.0 |\\n| ------------ | ---- | ---- | ---- | ---- | ---- | ---- | ----- |\\n| Base2COCO | 58.4 | 58.4 | 58.8 | 58.9 | 59.0 | 58.5 | 58.0 |\\n| ICFG2CUHK | 37.0 | 37.1 | 37.3 | 37.3 | 37.3 | 37.3 | 37.0 |\\n| Flickr-C | 67.8 | 68.2 | 68.2 | 68.4 | 68.4 | 68.2 | 67.9 |\\n| Base2Fashion | 26.9 | 27.0 | 27.0 | 27.1 | 27.4 | 27.1 | 27.0 |\\n\\n**The results indicate that TCR is not sensitive to the choice of the parameter $t$.**\\n\\nBesides, we apologize for the omission of the sensitivity analysis for these hyperparameters. In the revised manuscript, we have supplemented the detailed parameter analysis. For your convenience, we attach the added statement as follows.\\n\\nAs shown in Fig. 4(a), we observe that: i) selecting an appropriate temperature for the existing TTA approach across various datasets is challenging; ii) even a low temperature (e.g., $1e-4$) is a better setting across all datasets, the performance degrades as a low temperature tends to make model overfitting on noisy query prediction. In contrast, the query prediction refinement module not only stabilizes the temperature setting for all the datasets but also prevents the model from either underfitting or overfitting by excluding some irrelevant samples in the gallery. Besides, TCR demonstrates stable performance within the range of [0.001,0.05] and achieves the best performance when $\\\\tau=0.02$. As depicted in Fig. 4(b), one could observe that TCR is not sensitive to the choice of $t$.\"}", "{\"comment\": \"| TR/Gallery Shift Types | OCR | CI | CR | CS | CD | SR | RI | RS | RD | IP | Formal | Casual | Passive | Active | Backtrans | Avg. 
|\\n| ------------------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ------ | ------ | ------- | ------ | --------- | ---- |\\n| BLIP ViT-B/16 (ACC & R@1) | 49.7 | 23.1 | 20.1 | 34.5 | 22.8 | 59.7 | 64.8 | 65.2 | 66.9 | 73.1 | 73.2 | 72.5 | 71.5 | 73.6 | 71.1 | 56.1 |\\n| TCR (ACC) | 55.5 | 26.8 | 23.5 | 39.5 | 27.5 | 66.2 | 70.9 | 70.4 | 71.3 | 77.1 | 76.5 | 76.2 | 75.1 | 77.3 | 75.4 | 60.6 |\\n| TCR (R@1) | 55.5 | 26.8 | 23.5 | 39.0 | 27.7 | 65.9 | 70.9 | 70.6 | 71.6 | 77.2 | 76.7 | 76.2 | 75.2 | 77.3 | 75.3 | 60.6 |\\n\\n- Both Query and Gallery Shift: In this setting, we choose the OCR / Gaussian corruptions as the query shift for image / text retrieval, respectively. For the baseline BLIP ViT-B/16, the IR@1 / TR@1 with OCR / Gaussian corruptions is 31.4% / 43.4%. \\n\\n| IR/Gallery Shift Types | Gauss. | Shot | Impul. | Speckle | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |\\n| ------------------------- | ------ | ---- | ------ | ------- | ------ | ----- | ------ | ---- | ---- | ----- | ---- | ----- | ------ | ------- | ----- | ---- | ---- |\\n| BLIP ViT-B/16 (ACC & R@1) | 18.5 | 19.8 | 18.6 | 23.4 | 20.1 | 28.8 | 19.0 | 8.2 | 18.9 | 22.5 | 25.5 | 27.6 | 17.1 | 18.6 | 10.4 | 26.0 | 20.2 |\\n| TCR (ACC) | 20.6 | 21.8 | 20.6 | 26.0 | 22.6 | 31.6 | 21.5 | 9.3 | 21.6 | 25.5 | 28.8 | 30.5 | 18.9 | 21.8 | 12.1 | 28.4 | 22.6 |\\n| TCR (R@1) | 20.6 | 21.8 | 20.6 | 26.0 | 22.6 | 31.7 | 21.5 | 9.4 | 21.6 | 25.6 | 28.8 | 30.5 | 19.0 | 21.8 | 12.2 | 28.4 | 22.6 |\\n\\n| TR/Gallery Shift Types | OCR | CI | CR | CS | CD | SR | RI | RS | RD | IP | Formal | Casual | Passive | Active | Backtrans | Avg. |\\n| ------------------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ------ | ------ | ------- | ------ | --------- | ---- |\\n| BLIP ViT-B/16 (ACC & R@1) | 27.2 | 13.2 | 11.1 | 18.4 | 27.2 | 32.5 | 36.3 | 37.2 | 38.9 | 42.2 | 42.8 | 42.7 | 41.0 | 43.2 | 41.0 | 33.0 |\\n| TCR (ACC) | 35.7 | 16.0 | 15.2 | 24.7 | 35.7 | 42.1 | 47.1 | 46.0 | 48.2 | 53.0 | 53.8 | 53.0 | 51.9 | 53.1 | 51.7 | 41.8 |\\n| TCR (R@1) | 35.2 | 16.5 | 15.3 | 24.6 | 35.2 | 42.2 | 47.7 | 45.4 | 48.3 | 52.8 | 52.4 | 52.7 | 52.1 | 53.5 | 52.1 | 41.7 |\\n\\nFrom the results, one could observe that **gallery shift degrades both retrieval performance and nearest neighbor selection accuracy**, whether in only gallery shift or both query and gallery shift settings. However, **the proposed TCR improves the retrieval performance under gallery shift, with the selected nearest neighbors more likely to be correct**. Besides, even under gallery shift setting, TCR could enhance retrieval performance surpassing the baseline performance without gallery shift. For example, in the both query and gallery shift setting, the text retrieval performance of TCR under RI (47.7%), RS (45.4%), Formal (52.4%), and Passive (52.1%) gallery shift exceeds the baseline performance without gallery shift (43.4%). It\\u2019s worth noting that in real-world scenarios, data with only gallery shift is rare, as the data in the gallery is often extensive and curated. In contrast, the queries of the users are more diverse, which might lead to the distribution shift challenge.\"}", "{\"comment\": \"Thank you very much for your responses. I have carefully reviewed each of them. You have addressed all my concerns, and I think this is a very interesting and meaningful work that provides valuable insights into cross-modal retrieval research. 
Therefore, I will increase my score.\"}", "{\"summary\": \"The paper presents a Test-time adaptation for Cross-modal Retrieval (TCR) method to address query shift, which is a critical and understudied problem in cross-modal retrieval tasks. Query shift occurs when the distribution of online query streams differs from the source domain, leading to performance degradation in existing models. TCR introduces a query prediction refinement module and a joint objective function to refine query predictions and prevent query shift from disturbing the common space. It improves the existing test-time adaptation (TTA) methods with the capacity to manipulate both the modality uniformity and modality gap. Overall speaking, this paper is well-organized and of practical value.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The proposed TCR method addresses an important problem and is supported by strong experimental results.\\n\\nIt provides extensive experiments demonstrating the effectiveness of TCR against query shift. The comparisons with existing TTA methods show convincing improvements, with is a strong validation of the ablation study .\", \"weaknesses\": \"In Section 4.2, it is said that \\u201cWe compare TCR with five SOTA TTA methods (Tent (Wang et al., 2021), EATA(Niu et al.,2022), SAR(Niu et al.,2023), READ(Yang et al.,2024), and DeYO...\\u201d. These methods should be introduced in Section 2.2 of the related work part.\\n\\nLine 212, ,where Q and G denotes as query modality and gallery modality for clarity in the following. Change to \\u201cdenote\\u201d\\u201d\\n\\nTables 1 and 2 appear too early. They should not be on Page 7 but on the page where they are referred for the first time,\", \"questions\": \"see above comment\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"As noted in [E] and [F], the collected data would continuously vary over time, resulting in **temporal shift**. For example, changes in lighting conditions throughout the day could impact the distribution of the collected data. For evaluation, we have conducted experiments under the CUHK2ICFG setting (Table 4 in the manuscript). Specifically, the ICFG-PEDES dataset is gathered at different times of the day (i.e., morning, noon, and afternoon), while the CUHK-PEDES dataset is derived from short-duration surveillance videos. Therefore, compared to CUHK-PEDES, the data in the ICFG-PEDES dataset exhibit distribution shifts due to time changes, such as illumination variation. The corresponding results from the CUHK2ICFG setting demonstrate that TCR could achieve robustness against temporal shift to some extent. For your convenience, we attach the corresponding numerical results (regarding IR@1) in the following tables.\\n\\n| Method | CUHK2ICFG |\\n| ------------- | --------- |\\n| CLIP ViT-B/16 | 41.0 |\\n| Tent | 41.9 |\\n| EATA | 42.2 |\\n| SAR | 42.2 |\\n| READ | 42.3 |\\n| DeYO | 42.2 |\\n| Ours | **42.4** |\\n\\nThe three works mentioned are primarily designed for the classification task and focus on addressing distribution shifts caused by temporal issues. In contrast, TCR aims to tackle the query shift challenge in the cross-modal retrieval task. **Notably, any distribution shifts in the query modality would lead to query shift, not limited to temporal issues**. 
For instance, personalized issues such as writing habits and styles, as well as real-world corruptions like noise and blur, would lead to query shift.\\n> Q3: In Section 3.2.1 on candidate selection, it would be valuable to address two points: first, **whether gallery shift affects the outcomes of nearest neighbor selection**; and second, **how the number of selected candidates impacts the results**. Additional experiments should be conducted to clarify these aspects.\\n\\n**A3**: Thanks for your comments. In the submission, we have conducted experiments under the Query-Gallery Shift setting (Table 3 in the manuscript), which demonstrates that TCR could improve retrieval performance even when the gallery modality occurs distribution shift. To address your concerns, we conduct more experiments during the rebuttal and present the results and analysis as follows. \\n\\n**Whether gallery shift affects the outcomes of nearest neighbor selection.** In response to your insightful suggestion, we conduct additional experiments to investigate whether gallery shift would affect the outcomes of nearest neighbor selection. Specifically, **we carry out experiments on the COCO-C benchmark under two gallery shift settings**, i.e., **only gallery shift** setting, **both query and gallery shift** setting. The corresponding results are depicted in Tables 16-17 within the revised manuscript. For your convenience, we attach the numerical results regarding Recall@1 and neighbor ACC (i.e., the cross-modal nearest neighbor of the query is correct) in the following tables. Notably, for the baseline BLIP ViT-B/16, the neighbor ACC and R@1 are the same since both are computed using cosine similarity for ranking.\\n\\n- Only Gallery Shift: In this setting, there is no distribution shift in the query modality. For the baseline BLIP ViT-B/16, the IR@1 and TR@1 without any query or gallery shift are 57.1% and 74.0%, respectively. \\n\\n| IR/Gallery Shift Types | Gauss. | Shot | Impul. | Speckle | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |\\n| ------------------------- | ------ | ---- | ------ | ------- | ------ | ----- | ------ | ---- | ---- | ----- | ---- | ----- | ------ | ------- | ----- | ---- | ---- |\\n| BLIP ViT-B/16 (ACC & R@1) | 35.4 | 37.4 | 35.9 | 44.4 | 38.5 | 53.2 | 35.8 | 15.2 | 36.5 | 43.0 | 47.7 | 51.7 | 32.5 | 36.2 | 20.3 | 48.9 | 38.3 |\\n| TCR (ACC) | 36.3 | 38.0 | 36.6 | 45.2 | 39.1 | 53.7 | 37.4 | 16.4 | 38.4 | 44.2 | 49.2 | 52.5 | 33.1 | 38.5 | 21.8 | 49.5 | 39.4 |\\n| TCR (R@1) | 36.5 | 38.6 | 37.1 | 45.2 | 39.9 | 53.8 | 37.4 | 16.8 | 38.3 | 44.5 | 49.3 | 52.4 | 33.5 | 38.5 | 21.8 | 49.6 | 39.6 |\"}", "{\"comment\": \"**More explanation on the baselines.** In response to your constructive feedback, we provide more detailed introdution of the baselines, , which can be found in Appendix B.3 due to space limitations. For your convenience, we attach the added statement as follows.\\n\\nTest-time Adaptation (TTA) aims to reconcile the distribution shifts in an online manner. Towards achieving this goal, Fully TTA ([B]) has been proposed, which fine-tunes the BatchNorm layers by minimizing entropy during the test phase. EATA ([C]) employs a Fisher regularizer to limit excessive model parameter changes and filter out high-entropy samples via selection strategy. 
SAR [K] removes high-gradient samples and promotes flat minimum weights, enhancing robustness against more challenging TTA scenarios such as mixed domain shifts, single-sample adaptation, and imbalanced label shifts. READ ([L]) proposes a noise-robust adaptation loss and reliable fusion module to tackle the reliability bias challenge in the multi-modal setting. DeYO ([A]) reveals the unreliability of treating entropy as the confidence metric and establishes a novel metric by measuring the difference between predictions before and after applying an object-destructive transformation.\\n\\n**Reference:**\\n\\n[A] Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, and Sungroh Yoon. Entropy is not enough for test-time adaptation. In ICLR, 2024.\\n\\n[B] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In ICLR, 2021.\\n\\n[C] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Ef\\ufb01cient test-time model adaptation without forgetting. In ICML, 2022.\\n\\n[D] Jielin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, and Mu Li. *Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift*. Journal of Data-centric Machine Learning Research, 2023.\\n\\n[E] Wei Li, Rui Zhao, Tong Xiao, and Xiaogang Wang. DeepReID: Deep Filter Pairing Neural Network for Person Re-Identification. In CVPR, 2014.\\n\\n[F] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jiahao Bu, and Qi Tian. Person Re-identification Meets Image Search. In arXiv, 2015.\\n\\n[G] Tong Xiao, Shuang Li, Bochao Wang, Liang Lin, and Xiaogang Wang. Joint Detection and Identification Feature Learning for Person Search. In arXiv, 2016.\\n\\n[H] Douglas Gray, Shane Brennan, and Hai Tao. Evaluating appearance models for recognition, reacquisition, and tracking. In Proc. IEEE International Workshop on Performance Evaluation for Tracking and Surveillance (PETS), 2007.\\n\\n[I] Wei Li, Rui Zhao, and Xiaogang Wang. Human Reidentification with Transferred Metric Learning. In ACCV, 2012.\\n\\n[J] Longhui Wei, Shiliang Zhang, Wen Gao, and Qi Tian. Person Transfer GAN to Bridge Domain Gap for Person Re-Identification. In CVPR, 2018.\\n\\n[K] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. In ICLR, 2023.\\n\\n[L] Mouxing Yang, Yunfan Li, Changqing Zhang, Peng Hu, and Xi Peng. Test-time Adaption against Multi-modal Reliability Bias. In ICLR, 2024.\"}", "{\"comment\": \">Q3: The authors are advised to provide **more explanation on the baselines** and to **include a few actual examples of query shifts** to help readers intuitively feel the task.\\n\\n**A3**: Thanks for your valuable suggestions. In Appendix B.1, **we have provided examples of query shift on the COCO-C dataset, which includes 16 types of image corruption and 15 types of text corruption.** \\n\\n**More examples of query shifts.** In response to your insightful suggestion, we offer more examples of query shifts in the ReID domain. **The examples are depicted in Fig. 6 within the revised manuscript**. 
For your convenience, we summarize the difference about the distribution shifts between the CUHK-PEDES dataset and ICFG-PEDES datasets:\\n\\nCUHK-PEDES is a dataset designed for text-to-image person re-identification, and the test set consists of 3,074 images and 6,156 textual descriptions associated with 1,000 identities. The images are sourced from five re-identification datasets, CUHK03 [E], Market-1501 [F], SSM [G], VIPER [H], and CUHK01 [I]. These images mainly capture outdoor scenes in diverse public spaces, such as markets and campuses. The textual descriptions often contain details not directly relevant to identity (e.g., actions and backgrounds), with an average of 23.5 words per description.\\n\\nICFG-PEDES is a large-scale text-to-image person re-identification dataset, and the test set contains 19,848 image-text pairs of 1,000 identities. The images are sourced from the MSMT17 dataset [J] and depict scenes within a campus environment, with a mix of indoor and outdoor settings. Textual descriptions are more identity-focused and fine-grained, averaging 37.2 words per description.\\n\\nNotably, images in the ICFG-PEDES dataset are collected over multiple days at different times (morning, noon, and afternoon), which introduces considerable illumination variation. In contrast, images in the CUHK-PEDES dataset are sourced from short-duration surveillance videos, leading to minimal lighting variation.\"}", "{\"comment\": \">Q6: It is crucial to **provide access to the data processing methods and code** to ensure the reproducibility of the experimental results.\\n\\n**A6**: Following the setting in [G], we have introduced 31 types of corruption to establish the Query Shift benchmarks (i.e., COCO-C and Flickr-C datasets). **The details of the benchmarks are provided in Appendix B.1** and the data processing code is available at **https://github.com/Jielin-Qiu/MM_Robustness**. Specifically, for image corruptions, we employ `image_perturbation/perturb_COCO_IP.py` to construct image corruptions for the COCO dataset and `image_perturbation/perturb_Flickr30K_IP.py` for the Flickr dataset. For text corruptions, we utilize `text_perturbation/perturb_COCO_TP.py` to construct text corruptions for the COCO dataset and `text_perturbation/perturb_Flickr30K_TP.py` for the Flickr dataset.\\n\\n>Q7: I recommend that the authors discuss **the limitations of the proposed method and outline specific future research directions**. This would provide readers with additional insights and considerations for further exploration.\\n\\n**A7**: Thanks for your valuable suggestions. In response to your concern, we summarize the following limitations and potential directions for future work of TCR.\\n\\n**Limitations.** The proposed TCR might have the following two limitations. On the one hand, although TCR achieves significant performance improvement in the cross-modal retrieval task, it remains uncertain whether TCR could achieve similar success in other cross-modal tasks, such as image captioning and visual question answering (VQA). On the other hand, the robustness of TCR against more challenging TTA scenarios (e.g., single-sample adaptation and continuous adaptation) is worth further investigation.\\n\\n**Future research**. In the future, we plan to extend TCR to more applications and more challenging scenarios. Specifically, we would like to extend TCR for a broader range of cross-modal tasks, such as image captioning and visual question answering (VQA). 
Besides, although we have demonstrated that TCR could achieve robustness against the query shift evaluated in the paper, further work is needed to verify whether TCR could address more challenging scenarios, such as temporal shift and concept drift.\\n\\n**Reference:**\\n\\n[A] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In ICLR, 2021.\\n\\n[B] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Ef\\ufb01cient test-time model adaptation without forgetting. In ICML, 2022.\\n\\n[C] Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, and Sungroh Yoon. Entropy is not enough for test-time adaptation. In ICLR, 2024.\\n\\n[D] En Yu, Jie Lu, Bin Zhang, and Guangquan Zhang. Online Boosting Adaptive Learning under Concept Drift for Multistream Classification. In AAAI, 2024.\\n\\n[E] Guangji Bai, Chen Ling, and Liang Zhao. Temporal domain generalization with drift-aware dynamic neural networks. In arXiv, 2022.\\n\\n[F] Mixue Xie, Shuang Li, Longhui Yuan, Chi Harold Liu, and Zehui Dai. Evolving standardization for continual domain generalization over temporal drift. In NIPS, 2023.\\n\\n[G] Jielin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, and Mu Li. *Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift*. Journal of Data-centric Machine Learning Research, 2023.\"}", "{\"comment\": \"Thanks for the authors's feedback.\\n\\nCompared methods in the experiments should be most relevant to the proposed work, but even their names did not appear in the related work part in the main text. I can understand due to page limit their details can be moved to appendix, but they should be at least mentioned in related work.\"}", "{\"comment\": \"Thanks for the response, I choose to keep my score.\"}", "{\"summary\": \"This paper introduces a novel setting, cross-modal retrieval under query shift. To address this challenge, it introduces a test-time adaptation method called TCR, which includes a query prediction refinement module to produce retrieval-optimized predictions for incoming queries. Additionally, it employs a joint objective function for online adaptation, effectively handling the query shift and noise.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The research question, cross-modal retrieval under query shift, is challenging and holds significant practical relevance.\\n2. Although this method builds on the principles of TTA, it also reveals TTA\\u2019s limitations in cross-modal retrieval and effectively overcomes these challenges.\\n3. Extensive experiments demonstrate the effectiveness of the proposed TCR method.\\n4. The paper is well-organized and well-written, enhancing the clarity and impact of its findings.\", \"weaknesses\": \"1. This research setting is limited by the assumption that each query batch contains i.i.d. samples. However, in real scenarios, query shift may occur unpredictably, introducing non-i.i.d. data within the same batch. This raises concerns about the method\\u2019s applicability under such conditions.\\n2. Regarding the emergence of query shift, I am curious whether temporal issues, such as temporal shifts or concept drift discussed in [1-3], are present in real-world scenarios. Could the authors provide relevant discussion on this aspect? 
\\n [1] Evolving standardization for continual domain generalization over temporal drift.\\u00a0*NIPS 2023*.\\n [2] Temporal domain generalization with drift-aware dynamic neural networks.\\u00a0*arXiv preprint arXiv:2205.10664*\\u00a0(2022)\\n [3]Online Boosting Adaptive Learning under Concept Drift for Multistream Classification, AAAI 2024\\n3. In Section 3.2.1 on candidate selection, it would be valuable to address two points: first, whether gallery shift affects the outcomes of nearest neighbor selection; and second, how the number of selected candidates impacts the results. Additional experiments should be conducted to clarify these aspects.\\n4. In Section 3.2.2, given the shift between the source and target domains, it is unclear why source-domain-like data can be directly selected based on centers. Could the authors provide further analysis and explanation on this approach?\\n5. In section 3.5the definition of S(x_{i}^Q) in Equation (11) lacks corresponding theoretical analysis.\\n6. In the experiments, the authors employed various methods to generate image or text query shifts. I believe the results may depend on the specific shift generation techniques used. Therefore, it is crucial to provide access to the data processing methods and code to ensure the reproducibility of the experimental results.\\n7. I recommend that the authors discuss the limitations of the proposed method and outline specific future research directions. This would provide readers with additional insights and considerations for further exploration.\", \"questions\": \"discussed in Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your constructive reviews and suggestions. In the following, we will answer your questions one by one.\\n\\n> Q1: Given the variety of potential shifts in real-world data (e.g., subtle cultural variations, extreme distortions, rare domain-specific content), **how does TCR perform across these different types of shifts**?\\n\\n**A1**: Thanks for your constructive comments. In the submission, we have reported the experiment results on a variety of real-world query shift, including noise, blur, and weather (e.g., frost, snow, and fog) distortions. Besides, we have evaluated TCR on the rare domains, such as ReID domain, e-commerce domain and natural image domain. Notably, we have carried out the experiments to verify that TCR could handle the maximum severity level of the corruptions (Tables 1-2 and Tables 7-8). In other words, **TCR is able to address the most extreme distortions, which are highlighted and widely acknowledged in previous works** ([A] [B] [C]).\\n\\nIn response to your insightful suggestion, **we conduct additional experiments in the even rarer remote sensing domain**. Specifically, we choose the BLIP as the source model and perform zero-shot retrieval on the remote sensing datasets RSICD ([D]) and RSITMD ([E]). To verify the effectiveness of TCR, we choose the typical TTA method Tent ([A]), the SOTA TTA methods EATA ([F]) and DeYO ([G]) as the baselines for comparisons. Results are summarized in Table 20 within the revised manuscript. 
For your convenience, we attach the corresponding results in the following tables.\\n\\n| Base2RSICD | TR@1 | IR@1 |\\n| ------------- | ------- | ------- |\\n| BLIP ViT-B/16 | 6.4 | 6.8 |\\n| Tent | 5.7 | 5.4 |\\n| EATA | 6.9 | 6.7 |\\n| DeYO | 6.4 | 6.5 |\\n| Ours | **8.5** | **7.1** |\\n\\n| Base2RSITMD | TR@1 | IR@1 |\\n| ------------- | ------- | -------- |\\n| BLIP ViT-B/16 | 7.6 | 10.4 |\\n| Tent | 7.9 | 9.3 |\\n| EATA | 8.0 | 10.4 |\\n| DeYO | 7.7 | 10.0 |\\n| Ours | **8.4** | **10.7** |\\n\\n**The results indicate that TCR could also achieve the best performance in even rarer remote sensing domain.**\\n\\n>Q.2: **Could the model's performance degrade if it encounters shifts it was not explicitly evaluated against**? A thorough breakdown of the model\\u2019s robustness to a diverse set of query shifts would strengthen the understanding of its general applicability.\\n\\n**A.2**: Thanks for your comments. In this paper, we have evaluated TCR on the Flickr and COCO datasets with image and text corruptions, which simulate real-world query shift. It is worth noting that we have introduced **a total of 130 perturbations across various severity levels**, comprising 80 for the image modality and 50 for the text modality. Specifically, the image modality includes 16 types of corruption, each with 5 levels of severity, while the text modality comprises 15 corruption types with 7/2/1 severity levels for character-level/word-level/sentence-level corruptions. Besides, we conduct experiments on the datasets with real-world query shift, including the ReID, e-commerce, and natural image domains. \\n\\nIn Appendix D.3 of the manuscript, we have reported the experiment results on the 130 perturbations in Fig. 7 and Fig. 8. Specifically, we carry out the experiments on the COCO-C benchmark and report the average performance across various severity levels for each corruption. For your convenience, we have included a summary of these results (regarding Recall@1) in the following tables.\"}", "{\"comment\": \"Thanks for the detailed comments. In the following, we will answer your questions one by one.\\n\\n>Q1: The TCR method proposed in the paper performs model adaptation at test time, which may increase additional computational costs. It is recommended that the authors **analyze the computational complexity of the model and the additional cost incurred**.\\n\\n**A1**: Thanks for your comments. In response to your insightful suggestion, we conduct additional experiments to analyze the efficiency of TCR. To this end, we choose the pre-trained model BLIP as the source model and perform zero-shot retrieval on the COCO dataset. We measure the GPU time during the test-time adaptation phase. Results are summarized in Table 19 within the revised manuscript. For your convenience, we attach the corresponding results in the following table.\\n\\n| Method | TR | IR | Avg. |\\n| ------ | ------------- | ------------- | ------------- |\\n| Tent | 285.5 seconds | 189.7 seconds | 237.6 seconds |\\n| EATA | 276.3 seconds | 190.4 seconds | 233.3 seconds |\\n| DeYO | 391.6 seconds | 254.2 seconds | 322.9 seconds |\\n| Ours | 291.1 seconds | 193.6 seconds | 242.4 seconds |\\n\\nNote that the learnable parameters of all the methods are the same for a fair comparison. The results underscore that TCR achieves adaptation more efficiently than the augmentation-based method DeYO[A]. 
Compared to the vanilla Tent[B] and EATA[C] (only low-entropy samples are employed for optimization), TCR requires only a negligible additional time cost, primarily due to the nearest neighbor selection in the query prediction refinement module. \\n\\n>Q2: Are the COCO-C and Flickr-C datasets constructed by the authors themselves? It seems that the paper does not explain whether the results of the baseline methods for comparison were obtained by the authors' own experiments or cited from their respective articles. If they were obtained through their own experiments, **it should be clarified whether such comparisons are fair (whether they were trained on the new baselines), which is quite confusing for readers**.\\n\\n**A2**: We apologize for the confusion arising from the initial presentation. Following the setting in [D], we have introduced 31 types of corruptions to establish the Query Shift benchmarks (i.e., COCO-C and Flickr-C datasets). The details of the benchmarks are provided in Appendix B.1 and the data processing code is available at **https://github.com/Jielin-Qiu/MM_Robustness**. Specifically, for image corruptions, we employ `image_perturbation/perturb_COCO_IP.py` to construct image corruptions for the COCO dataset and `image_perturbation/perturb_Flickr30K_IP.py` for the Flickr dataset. For text corruptions, we utilize `text_perturbation/perturb_COCO_TP.py` to construct text corruptions for the COCO dataset and `text_perturbation/perturb_Flickr30K_TP.py` for the Flickr dataset.\\n\\nIn the paper, we aim to achieve online adaptation for cross-modal retrieval models under query shift, which is a novel challenge. Unfortunately, existing test-time adaptation (TTA) methods overlook the query shift in cross-modal settings and do not address this challenge. Moreover, **most existing TTA methods are specifically designed for the recognition task and cannot be directly employed for the cross-modal retrieval task**. To solve the problem, we **propose a simple baseline** so that the recognition-oriented TTA methods could be employed for the cross-modal retrieval task. **For a fair comparison, the optimizer, learning rate, and training parameters across all the methods are the same**. Besides, we carefully set the temperature for baselines on various datasets, as detailed in Appendix B.2. For your convenience, we attach the corresponding content as follows.\\n\\nTo guarantee the performance of the baselines, **we select the optimal temperature (Eq. 1) for the TTA baselines upon each dataset**. According to Fig. 4(a), the temperature is fixed as $0.01$ for COCO-C, Flickr-C, COCO, Flickr, and Nocaps datatsets, $0.001$ for Fashion-Gen dataset, and $0.0001$ for CUHK-PEDE and ICFG-PEDES datasets.\\n\\nThe code will be released upon acceptance of the paper.\"}", "{\"comment\": \"Thanks for your valuable reviews. We would like to address your concerns one by one in the following.\\n\\n> Q1: In Section 4.2, it is said that \\u201cWe compare TCR with five SOTA TTA methods (Tent (Wang et al., 2021), EATA(Niu et al.,2022), SAR(Niu et al.,2023), READ(Yang et al.,2024), and DeYO...\\u201d. **These methods should be introduced in Section 2.2 of the related work part**.\\n\\n**A1**: Thanks for your valuable suggestions. We apologize for the missing details on the baselines. In response to your constructive feedback, we provide more detailed introdution of the baselines, which can be found in Appendix B.3 due to space limitations. 
For your convenience, we attach the added statement as follows.\\n\\nTest-time Adaptation (TTA) aims to reconcile the distribution shifts in an online manner. Towards achieving this goal, fully TTA ([A]) has been proposed, which fine-tunes the BatchNorm layers by minimizing entropy during the test phase. EATA ([B]) employs a Fisher regularizer to limit excessive model parameter changes and filter out high-entropy samples via the selection strategy. SAR [C] removes high-gradient samples and promotes flat minimum weights, enhancing robustness against more challenging TTA scenarios such as mixed domain shifts, single-sample adaptation, and imbalanced label shifts. READ ([D]) proposes a noise-robust adaptation loss and reliable fusion module to tackle the reliability bias challenge in the multi-modal setting. DeYO ([E]) reveals the unreliability of treating entropy as the confidence metric and establishes a novel metric by measuring the difference between predictions before and after applying an object-destructive transformation.\\n\\n>Q2: Line 212, ,where Q and G denotes as query modality and gallery modality for clarity in the following. Change to \\u201cdenote\\u201d\\n\\n**A2**: Thanks for your careful reading. We apologize for the typos and have revised them in the updated manuscript.\\n\\n>Q3: Tables 1 and 2 appear too early. They should not be on Page 7 but on the page where they are referred for the first time.\\n\\n**A3**: Thanks for your valuable suggestions. We apologize for the misplacement of Table 1 and Table 2 in the submission and have revised them in the updated manuscript.\\n\\n**Reference:**\\n\\n[A] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In ICLR, 2021.\\n\\n[B] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. In ICLR, 2023.\\n\\n[C] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Ef\\ufb01cient test-time model adaptation without forgetting. In ICML, 2022.\\n\\n[D] Mouxing Yang, Yunfan Li, Changqing Zhang, Peng Hu, and Xi Peng. Test-time Adaption against Multi-modal Reliability Bias. In ICLR, 2024.\\n\\n[E] Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, and Sungroh Yoon. Entropy is not enough for test-time adaptation. In ICLR, 2024.\"}", "{\"comment\": \"**How the number of selected candidates impacts the results.** We conduct more experiments to investigate how the number of selected candidates affects performance. To this end, we directly perform zero-shot retrieval experiment on the COCO dataset with pre-trained BLIP as the source model. In the paper, we retrieve the most similar sample from the gallery set for each query, thus the number of selected candidates is equal to the batch size $B$. In the additional experiment, we vary the number of selected candidates at different values, i.e., $[0.2B, 0.5B, B, 2B, 5B, 10B, 50B]$. Specifically, assume that the number of selected candidates is $\\\\lambda B$, where $\\\\lambda$ is an integer. When $\\\\lambda < 1$, we randomly select $\\\\lambda B$ candidates from the original $B$ selected candidates. When $\\\\lambda \\\\geq 1$, we retrieve the most similar $\\\\lambda$ candidates from the gallery set for each query, forming a new set of $\\\\lambda B$ selected candidates. The results are depicted in Table 18 within the revised manuscript. 
For your convenience, we attach the corresponding numerical results in the following tables.\\n\\n| Number | 0.2B | 0.5B | B (Default) | 2B | 5B | 10B | 50B |\\n| -------------------------------------- | ---- | ---- | ----------- | ---- | ---- | ---- | ---- |\\n| TR@1 | 67.2 | 68.5 | 68.9 | 65.3 | 64.9 | 64.7 | 64.6 |\\n| IR@1 | 47.3 | 48.3 | 48.9 | 48.3 | 48.2 | 48.0 | 47.5 |\\n\\nFrom the results, one could observe that **enlarging the number of selected candidates would significantly degrade the performance**. Such a phenomenon indicates that an excessively large number of candidates may lead to the underfitting issue, which **highlights the necessity and effectiveness of the query refinement module**.\\n\\n>Q4: In Section 3.2.2, given the shift between the source and target domains, **it is unclear why source-domain-like data can be directly selected based on centers**. Could the authors provide further analysis and explanation on this approach?\\n\\n**A4**: We appreciate your feedback. As illustrated in Fig. 1(c), we observe that distribution shift would diminish the modality uniformity, defined as the average distance between all samples and the modality center (Eq. 14 in the manuscript). For your convenience, we have attached the corresponding equation below.\\n$$\\n \\\\text{Uniformity}=\\\\frac{1}{N^{Q}}\\\\sum_{i=1}^{N^{Q}}\\\\|\\\\mathbf{z}_{i}^{Q}-\\\\overline{\\\\mathbf{Z}}^{Q}\\\\|.\\n$$\\nIn other words, the distribution shift would narrow the distance between samples and their modality centers. Therefore, we conclude that data from the source domain should exhibit higher modality uniformity. Based on the conclusion, we select samples farther from their modality centers as the source-domain-like data, since these samples enjoy higher modality uniformity.\\n>Q5: In section 3.5\\uff0cthe definition of $S(x_{i}^Q)$ in Equation (11) **lacks corresponding theoretical analysis**.\\n\\n**A5**: We apologize for the initial oversight regarding the theoretical analysis in Section 3.5. The proposed noise-robust adaptation loss (Eq. 11) aims to **achieve robustness against heavy noise by excluding high-entropy query predictions from adaptation and assigning higher weights to query predictions with lower uncertainties**. Specifically, for a given query sample $x_{i}^{Q}$, let its entropy be denoted as $E(x_{i}^Q)$. \\n\\nWhen $E(x_i^Q) \\\\geq E_m$, the weight $S(x_i^Q)$ is defined as\\n$$\\nS(x_i^Q) = 0.\\n$$\\nIn this case, $x_{i}^{Q}$ is excluded from optimization. Such a design prevents high-entropy (i.e., noisy) query predictions from degrading performance, as their gradients produced by entropy loss might be biased and unreliable.\\n\\nWhen $0 \\\\leq E(x_i^Q) < E_m$, the weight $S(x_i^Q)$ is positive and defined as\\n$$\\nS(x_i^Q) = 1 - \\\\frac{E(x_i^Q)}{E_m}.\\n$$\\n\\nThe weight is inversely proportional to entropy, i.e., the weight decreases as entropy increases. Formally,\\n$$\\nS(x_i^Q) \\\\propto \\\\frac{1}{E(x_i^Q)}.\\n$$\\nThe adaptive weighting strategy enjoys two advantages. On the one hand, query predictions with lower entropy (i.e., reliable predictions) are assigned with higher weights, thus guiding the optimization. 
On the other hand, query predictions with higher entropy (i.e., uncertain predictions) are assigned with lower weights, thereby preventing overfitting on noisy query predictions.\"}", "{\"summary\": \"The paper addresses the challenge of cross-modal retrieval in scenarios where the query data distribution deviates from the source domain, a phenomenon known as \\\"query shift.\\\" This deviation often leads to a performance decline in cross-modal retrieval systems. The authors propose a novel approach called TCR: Test-time adaptation for Cross-modal Retrieval, which adapts cross-modal retrieval models during inference to account for query shift. The proposed method includes a query prediction refinement module and a joint objective function to prevent the disturbances caused by the query shift, enhancing the uniformity within the query modality and minimizing the gap between query and gallery modalities. The model is designed to operate effectively in real time by adapting to changing online queries. The approach was tested on six popular image-text datasets and demonstrated superior performance against existing test-time adaptation (TTA) techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper tackles the underexplored problem of query shift in cross-modal retrieval, providing a comprehensive analysis of its effects on retrieval performance. The method's unique combination of query prediction refinement and multiple loss functions sets it apart from traditional TTA approaches.\\n2) The authors conducted extensive experiments across six datasets and compared their method against several state-of-the-art TTA models. The amount of experiments is fair and convincing. \\n3) The paper proposes a joint objective consisting of three loss functions\\u2014uniformity learning, gap minimization, and noise-robust adaptation\\u2014that each address specific challenges introduced by query shift. This is a novel design for this problem.\", \"weaknesses\": \"1) The authors provide only limited discussion regarding the sensitivity of the various hyperparameters involved, such as the temperature and trade-off parameters. A more detailed analysis would improve understanding of the model's adaptability to different scenarios.\\n2) The approach heavily relies on pre-trained models and assumes the existence of a well-aligned common space. In cases where the source domain model lacks robust representations, the effectiveness of TCR may be diminished. This could limit the generalizability of the approach to pre-trained models of different quality. More discussions and insights need to be given in order to make the paper more readable.\", \"questions\": \"1) The proposed TCR method aims to enhance retrieval robustness under query shift by manipulating modality uniformity and the modality gap. Given the variety of potential shifts in real-world data (e.g., subtle cultural variations, extreme distortions, rare domain-specific content), how does TCR perform across these different types of shifts?\\n\\n2) Could the model's performance degrade if it encounters shifts it was not explicitly evaluated against? 
A thorough breakdown of the model\\u2019s robustness to a diverse set of query shifts would strengthen the understanding of its general applicability.\\n\\n3) This paper introduces several hyperparameters, such as the temperature parameter (\\u03c4) for controlling the trade-off between smoothness and sharpness, and others for balancing the different loss terms. How sensitive is TCR to these hyperparameters, and how easy is it to tune them for new domains? Some more results from these ablations studies will be very beneficial.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We greatly appreciate the time and effort you invested in reviewing our work. Your feedback has been helpful in improving the paper. We would be happy to discuss further if needed.\"}", "{\"comment\": \"| BLIP ViT-B/16 | Gauss. | Shot | Impul. | Speckle | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |\\n| -------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| Tent | 64.3 | 64.4 | 58.8 | 70.2 | 50.3 | 74.0 | 45.8 | 20.6 | 29.8 | 63.4 | 68.6 | 73.9 | 59.7 | 66.9 | 42.7 | 72.5 | 57.9 |\\n| EATA | 64.1 | 65.5 | 62.1 | 70.3 | 64.2 | 74.9 | 62.6 | 26.9 | 55.5 | 64.4 | 70.8 | 74.6 | 66.4 | 67.5 | 52.5 | 72.8 | 63.4 |\\n| SAR | 63.9 | 65.3 | 61.8 | 69.8 | 60.6 | 74.2 | 58.2 | 22.9 | 47.7 | 64.0 | 69.3 | 74.0 | 63.0 | 67.5 | 51.7 | 72.2 | 61.6 |\\n| READ | 64.6 | 63.9 | 59.1 | 69.0 | 58.5 | 74.0 | 61.0 | 22.2 | 49.9 | 62.0 | 69.0 | 73.6 | 63.0 | 65.8 | 48.6 | 71.9 | 61.0 |\\n| DeYO | 65.4 | 66.3 | 64.2 | 70.2 | 62.9 | 74.6 | 61.2 | 22.5 | 52.8 | 65.5 | 71.9 | 74.3 | 66.0 | 67.7 | 50.7 | 72.7 | 63.1 |\\n| Ours | **67.6** | **67.8** | **67.1** | **71.2** | **67.8** | **75.8** | **66.2** | **46.4** | **62.1** | **68.6** | **74.0** | **75.6** | **71.1** | **71.8** | **59.3** | **73.2** | **67.8** |\\n\\n| BLIP ViT-B/16 | OCR | CI | CR | CS | CD | SR | RI | RS | RD | IP | Formal | Casual | Passive | Active | Backtrans | Avg. 
|\\n| -------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | --------- | -------- |\\n| Tent | 42.0 | 27.1 | 24.5 | 31.1 | 27.8 | 45.5 | 52.8 | 51.3 | 51.1 | 57.0 | 56.6 | 56.4 | 55.3 | 57.2 | 54.0 | 46.0 |\\n| EATA | 43.3 | 30.5 | 29.0 | 34.6 | 30.7 | 46.2 | 53.2 | 51.6 | 51.3 | 56.6 | 56.6 | 56.3 | 55.6 | 57.2 | 54.2 | 47.1 |\\n| SAR | 42.0 | 29.7 | 28.3 | 34.0 | 30.1 | 45.0 | 51.8 | 50.5 | 50.9 | 56.8 | 56.5 | 56.2 | 54.9 | 56.8 | 54.2 | 46.5 |\\n| READ | 42.9 | 30.0 | 28.6 | 34.2 | 30.2 | 45.8 | 53.2 | 51.7 | 51.4 | 56.7 | 56.5 | 56.2 | 55.2 | 57.0 | 53.9 | 46.9 |\\n| DeYO | 43.4 | 29.6 | 28.2 | 34.5 | 30.4 | 46.2 | 53.6 | 51.7 | 51.3 | 56.7 | 56.6 | 56.3 | 55.6 | 57.1 | 54.2 | 47.0 |\\n| Ours | **44.0** | **31.7** | **30.3** | **35.2** | **31.5** | **46.6** | **54.0** | **52.0** | **51.6** | **57.3** | **57.1** | **56.8** | **56.0** | **57.3** | **54.7** | **47.7** |\\n\\n**The results demonstrate that TCR outperforms all baselines across various severities and corruptions, showcasing its robustness against different distribution shifts.**\\n\\nBesides, we conduct more experiments on the remote sensing datasets RSICD ([D]) and RSITMD ([E]), which might encounter the query shift not explicitly evaluated in the paper. For your convenience, we have included a summary of these results in the following tables.\\n\\n| Base2RSICD | TR@1 | IR@1 |\\n| ------------- | ------- | ------- |\\n| BLIP ViT-B/16 | 6.4 | 6.8 |\\n| Tent | 5.7 | 5.4 |\\n| EATA | 6.9 | 6.7 |\\n| DeYO | 6.4 | 6.5 |\\n| Ours | **8.5** | **7.1** |\\n\\n| Base2RSITMD | TR@1 | IR@1 |\\n| ------------- | ------- | -------- |\\n| BLIP ViT-B/16 | 7.6 | 10.4 |\\n| Tent | 7.9 | 9.3 |\\n| EATA | 8.0 | 10.4 |\\n| DeYO | 7.7 | 10.0 |\\n| Ours | **8.4** | **10.7** |\\n\\nThe results underscore that TCR achieves superior performance on the RSICD, RSITMD datasets from the remote sensing domain, further validating its effectiveness on unevaluated query shifts in the paper.\"}", "{\"metareview\": \"This paper tackles the challenging 'query shift' problem in cross-modal retrieval, in which the distribution of query data deviates from the source domain. They formulate their algorithm as a test-time adaptation for cross-modal retrieval, which includes a query prediction refinement module to refine the query predictions and a joint objective function to prevent the disturbances caused by the query shift. Sufficient experimental results on multiple datasets show the effectiveness of the proposed method in such different cases.\\n\\nIn the first round, the main concerns lie in the types of query shifts, robustness to a diverse set of query shifts, parameters, and other minor issues. After rebuttal, all these concerns were eliminated, and all the reviewers voted to accept this work.\\n\\nAfter carefully reading the main document and supplementary docs, the AC has the following comments about this paper: \\n***Advantages*** \\nThis paper addresses the query shift problem in cross-modal retrieval during test time. The problem itself is practical due to the diverse and unpredictable nature of open-world queries. Through extensive experimental analyses, the authors identify two key challenges posed by query shift and propose an effective method to address them. The manuscript and appendix include thorough empirical studies validating the significance of the addressed problem and the effectiveness of the proposed approach. 
\\n***Future Direction*** \\nAlthough the authors pointed out the query shift issue in cross-modal retrieval, how to effectively handle more potential scenarios contaminated with query shift should be further discussed and highlighted. \\n\\nIn summary, due to novel problem definition, technical context and experimental results, this paper is a good work to be presented in ICLR 2025.\", \"additional_comments_on_reviewer_discussion\": \"All four reviewers recommend acceptance, with three giving clear acceptance and one providing a borderline acceptance. The reviewers recognize the contributions of this work. Overall, I believe this paper provides valuable insights for the test-time adaptation (TTA) and cross-modal retrieval communities and strongly recommend its acceptance.\"}", "{\"comment\": \"Dear reviewer Wks8,\\n\\nWe would like to know if our response has addressed your concerns and questions. If you have any further concerns or suggestions for the paper or our rebuttal, please let us know. We would be happy to engage in further discussion and manuscript improvement.\\n\\nThank you again for the time and effort you dedicated to reviewing this work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"summary\": \"This paper introduces a novel method named TCR for addressing the query shift problem in cross-modal retrieval. TCR employs a test-time adaptation approach that leverages a multi-scale adaptive convolutional neural network and a hybrid transformer module to refine query predictions and adapt to shifts in query distribution without additional training data. The method is designed to enhance the uniformity of the query modality and reduce the gap between query and gallery modalities, thereby improving retrieval performance. The study demonstrates TCR's effectiveness on image-text retrieval tasks using standard benchmarks and various corruption types.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper proposes a novel test-time adaptation method (TCR) to address the query shift problem in cross-modal retrieval. This method achieves robustness against query shift by adjusting query predictions and designing a joint objective function, which is an interesting and potentially influential direction for research.\\n2.\\tThe authors have conducted extensive experiments on multiple datasets, including COCO-C and Flickr-C, to verify the effectiveness of the proposed method. The experiments cover comparisons across different model types and sizes, as well as varying severity levels of query shift, demonstrating the robustness of the method.\\n3.\\tThe paper not only introduces a new method but also provides an in-depth analysis of the impact of query shift on cross-modal retrieval, revealing how query shift can reduce the uniformity of the query modality and increase the gap between the query and gallery modalities. These theoretical analyses offer valuable insights for future research.\", \"weaknesses\": \"1.\\tThe TCR method proposed in the paper performs model adaptation at test time, which may increase additional computational costs. It is recommended that the authors analyze the computational complexity of the model and the additional cost incurred.\\n2.\\tAre the COCO-C and Flickr-C datasets constructed by the authors themselves? 
It seems that the paper does not explain whether the results of the baseline methods for comparison were obtained by the authors' own experiments or cited from their respective articles. If they were obtained through their own experiments, it should be clarified whether such comparisons are fair (whether they were trained on the new baselines), which is quite confusing for readers.\\n3.\\tThe authors are advised to provide more explanation on the baselines and to include a few actual examples of query shifts to help readers intuitively feel the task.\", \"questions\": \"Please address my concerns proposed in Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable suggestions. The five baselines compared in the manuscript (Tent, EATA, SAR, READ, and DeYO) are the most relevant works to the proposed TCR. Specifically, Tent is the most classic TTA method; EATA and SAR are robust TTA methods designed to handle noisy predictions; DeYO is the state-of-the-art method for unimodal recognition, while READ is the state-of-the-art for multimodal recognition. Since the proposed TCR might be one of the first TTA works for cross-modal retrieval, we select the above baselines for comparison.\\n\\nIn response to your insightful suggestion, we have cited these methods in Related Work (Sec. 2.2 in the manuscript). For your convenience, we attach the revised content as follows.\\n\\nTo avoid the reduplicated training cost of the source model, Fully Test-Time Adaptation paradigm (Tent) has been proposed, which could be coarsely divided into the following three categories: i) online TTA methods (e.g., DeYO), which continually update the normalization layers by resorting to the unsupervised objectives, such as entropy minimization or its variants. ii) robust TTA methods (e.g., EATA, SAR), which strive to improve the robustness against noisy predictions, mixed distribution shifts, label shifts, and so on. iii) TTA beyond recognition, which focuses on the tasks including but not limited to image restoration, multimodal recognition (e.g., READ), and multimodal segmentation.\\n\\nWe would be happy to discuss further if needed.\"}", "{\"comment\": \"Thanks for the insightful reviews. We will answer your questions one by one in the following.\\n\\n> **Q1**: This research setting is limited by the assumption that each query batch contains i.i.d. samples. However, in real scenarios, **query shift may occur unpredictably, introducing non-i.i.d. data within the same batch**. This raises concerns about the method\\u2019s applicability under such conditions.\\n\\n**A1**: In order to address your concerns, we conduct more experiments on the COCO-C benchmark, **investigating the robustness of the proposed TCR under non-i.i.d. settings** (i.e., **Mixed Severity Levels** and **Mixed Corruption Types**). Specifically,\\n\\n- Mixed Severity Levels: For each corruption, we create the test pairs by selecting $1/m$ of the data from each severity level, resulting in a total of $N$ test pairs, where $m$ is the number of severity levels and m=5 / 7 / 2 for the image / character-level / word-level corruptions.\\n\\n- Mixed Corruption Types: For the text retrieval, we construct the test pairs by selecting 1/16 of the data from each image corruption (1 through 16), resulting in a total of $N$ test pairs. 
For the image retrieval, we create test pairs by selecting 1/15 of the data from each text corruption (1 through 15), resulting in a total of $N$ test pairs.\\n\\nTo verify the effectiveness of TCR under the Mixed Severity and Mixed Corruption Types settings, we choose the typical TTA method Tent ([A]) and the SOTA TTA methods EATA ([B]), DeYO ([C]) as baselines for comparisons. In the experiment, we carry out the experiments on the COCO-C benchmark, and the corresponding results are depicted in Tables 13-15 within the revised manuscript. For your convenience, we attach the corresponding numerical results (regarding Recall@1) in the following tables.\\n\\n| Mixed Severity Levels | Gauss. | Shot | Impul. | Speckle | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |\\n| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| BLIP ViT-B/16 | 61.8 | 61.8 | 59.7 | 66.4 | 58.5 | 70.7 | 56.7 | 22.5 | 42.6 | 60.4 | 66.0 | 70.8 | 61.0 | 61.2 | 47.8 | 69.8 | 58.6 |\\n| Tent | 65.3 | 64.9 | 59.9 | 69.4 | 31.6 | 74.1 | 35.7 | 1.9 | 10.7 | 63.3 | 70.4 | 73.8 | 64.4 | 65.8 | 47.8 | 71.5 | 54.4 |\\n| EATA | 64.9 | 65.6 | 64.6 | 70.0 | 62.0 | 74.3 | 61.7 | 28.1 | 55.3 | 63.9 | 71.1 | 74.4 | 65.5 | 66.0 | 53.7 | 72.7 | 63.4 |\\n| DeYO | 64.0 | 66.0 | 63.0 | 69.8 | 64.6 | 74.6 | 63.0 | 5.8 | 56.1 | 65.7 | 71.4 | 74.5 | 65.7 | 67.8 | 52.5 | 72.7 | 62.3 |\\n| Ours | **67.2** | **68.1** | **66.6** | **70.7** | **67.0** | **75.8** | **65.8** | **45.7** | **61.2** | **68.4** | **74.2** | **75.2** | **70.4** | **70.4** | **58.6** | **73.5** | **67.4** |\\n\\n| Mixed Severity Levels | OCR | CI | CR | CS | CD | SR | RI | RS | RD | IP | Avg. |\\n| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| BLIP ViT-B/16 | 42.1 | 29.7 | 28.0 | 33.9 | 30.0 | 44.8 | 51.7 | 50.5 | 50.8 | 56.8 | 41.8 |\\n| Tent | 42.4 | 28.5 | 23.6 | 33.8 | 26.9 | 45.5 | 52.4 | 51.4 | 50.9 | 57.0 | 41.2 |\\n| EATA | 43.4 | 30.8 | 29.4 | 34.8 | 30.7 | 46.0 | 53.2 | 51.8 | 51.4 | 57.6 | 42.9 |\\n| DeYO | 43.4 | 30.7 | 29.3 | 35.0 | 30.9 | 46.2 | 53.4 | 51.9 | 51.4 | 57.7 | 43.0 |\\n| Ours | **44.4** | **32.2** | **30.6** | **35.7** | **31.7** | **46.3** | **53.8** | **52.1** | **51.5** | **57.4** | **43.6** |\\n\\nNote that for the Mixed Corruption Types setting, there are five levels of the mixed corruptions in text retrieval, corresponding to the image corruptions with five severity levels. For image retrieval, the severity levels of character-level / word-level / sentence-level text corruptions are 7 / 2 / 1. Thus, we select the two highest severity levels for character-level and word-level corruptions, and combine them with sentence-level corruptions, resulting in two levels of the mixed corruptions.\"}", "{\"comment\": \">Q4: The approach heavily relies on pre-trained models and assumes the existence of a well-aligned common space. **In cases where the source domain model lacks robust representations, the effectiveness of TCR may be diminished**. This could limit the generalizability of the approach to pre-trained models of different quality. More discussions and insights need to be given in order to make the paper more readable.\\n\\n**A4**: Thanks for your comments. 
We acknowledge that TCR relies on the well-aligned common space, which is essential for achieving good intra-modality uniformity and inter-modality gap. In other words, if the source model is suboptimal, the performance improvements of TCR might be less pronounced. However, **we highlight that TCR outperforms all baselines and achieves a stable performance improvement across various pre-trained model types and sizes**, including BLIP ViT-B/16, BLIP ViT-L/16, CLIP ViT-B/16, and CLIP ViT-B/32. **Indeed, any test-time adaptation paradigm heavily depends on the high-quality source model** [A] [B] [F], i.e., the high-quality source model would provide more reliable predictions that support performance improvement.\\n\\n**Reference:**\\n\\n[A] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In ICLR, 2021.\\n\\n[B] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. In ICLR, 2023.\\n\\n[C] Jielin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, and Mu Li. *Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift*. Journal of Data-centric Machine Learning Research, 2023.\\n\\n[D] Xiaoqiang Lu, Binqiang Wang, Xiangtao Zheng, and Xuelong Li. *Exploring models and data for remote sensing image caption generation*. IEEE Transactions on Geoscience and Remote Sensing, 2017.\\n\\n[E] Zhiqiang Yuan, Wenkai Zhang, Kun Fu, Xuan Li, Chubo Deng, Hongqi Wang, and Xian Sun. *Exploring a fine-grained multiscale method for cross-modal remote sensing image retrieval*. IEEE Transactions on Geoscience and Remote Sensing, 2021.\\n\\n[F] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Ef\\ufb01cient test-time model adaptation without forgetting. In ICML, 2022.\\n\\n[G] Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, and Sungroh Yoon. Entropy is not enough for test-time adaptation. In ICLR, 2024.\"}" ] }
BlzBcWYmdB
Cross-modal Mitigation of Spurious Correlation for Prompt-tuning in VLMs with Causally Motivated Logic Alignment
[ "Xueyang Tang", "Song Guo", "Xiaosong Ma", "Haoxi Li", "Jie ZHANG", "Yue Yu" ]
Recent studies have shown that pre-trained vision-language models can effectively adapt to diverse downstream tasks through parameter-efficient prompt tuning. Unfortunately, the tuned models can exploit spurious correlations during prediction, resulting in a failure to generalize to out-of-distribution test data, especially when the tuning dataset exhibits bias. How to achieve cross-modal mitigation of spurious correlations during prompt tuning of vision-language models remains an open question. In this paper, the challenging problem is tackled by leveraging the stable relationship between necessary and sufficient causal features and the corresponding label. On the one hand, we constrain the learning process of prompt by reinforcing the necessary and sufficient connection between the textual labels and textual features. On the other hand, the probability of necessity and sufficiency between the textual features and the filtered visual features is measured and maximized to enhance cross-modal feature alignment. By iteratively optimizing these two objectives, we can achieve cross-modal mitigation of spurious correlations because the logic equivalence between textual labels and visual features is bolstered. The theoretical analysis on generalization error indicates that our method can achieve a tighter generalization error bound than existing approaches. We evaluate the proposed method on several commonly adopted out-of-distribution datasets, and the empirical results demonstrate the superiority of our method over the state-of-the-art competitors.
[ "Vision-Language Models", "Prompt Tuning", "Spurious Correlations", "Out-of-Distribution Generalization", "Causality", "Probability of Necessity and Sufficiency" ]
Reject
https://openreview.net/pdf?id=BlzBcWYmdB
https://openreview.net/forum?id=BlzBcWYmdB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vddGvbvTdm", "toVoaIKzvs", "s2DhnqcdD4", "pVlmkzU6FQ", "n2HeF05okA", "jsqxwhUEfN", "ikCSMTOXL8", "gf1hSLLvwc", "gVgAHN1rbu", "fwmsWTAaLG", "fqn5FVNYxt", "fkL5nx1vZ4", "bAd50gvNaT", "Z89GTVLog9", "TMoNjAVX46", "SoutFlL96m", "QnlrNLydjT", "OJTZFrqpHH", "I4x7BdIRoZ", "CMPWcMicr1", "AVxD3erTVr", "9pC8osIS3z", "4KYdZ7ptxJ", "0G4qGHeTRs" ], "note_type": [ "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730275925427, 1732620662109, 1732590483195, 1734937072897, 1732797268386, 1732930137371, 1730721787991, 1733173105592, 1737524301244, 1733173003478, 1732630385894, 1732703294126, 1732929913235, 1730265963674, 1733174639051, 1732867893682, 1732601809703, 1732604937403, 1732626729972, 1732926046648, 1732695594567, 1730714055852, 1732592424482, 1732926468361 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14151/Reviewer_zmpK" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Area_Chair_tVNG" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Reviewer_fk2Q" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Reviewer_Rh7Y" ], [ "ICLR.cc/2025/Conference/Submission14151/Area_Chair_tVNG" ], [ "ICLR.cc/2025/Conference/Submission14151/Reviewer_ZvzB" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Reviewer_zmpK" ], [ "ICLR.cc/2025/Conference/Submission14151/Reviewer_ZvzB" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ], [ "ICLR.cc/2025/Conference/Submission14151/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces the novel LogicAI-PT framework to mitigate learning of spurious correlations in prompt tuning of CLIPs. It models the PNS (probability of necessity and sufficiency) by introducing intervention $\\\\bar{Q}$ and $\\\\bar{\\\\Phi}$. The author provide extensive introduction of the methodology and background and the experimental results are significant.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method is quite simple and effective regarding the significant improvement of multiple benchmarks. The idea of tackling spurious correlation problem from a sufficiency-necessity view is intuitive and is implemented based on thorough proof.\", \"weaknesses\": \"1. The proposed method seems to be universally applicable to many tasks rather than only prompting of VLM as classifier. 
Causal Representation Learning baselines adapting from other tasks can largely consolidate the motivation of this paper.\\n2. Ablation study of the proposed method is not enough. What is the effect of different $\\alpha, \\beta$? #419 indicates that the authors choose different value combinations for different benchmarks. The robustness regarding different hyper-parameters can largely affect the applicability of this method.\", \"questions\": \"1. What kind of knowledge do $\\bar{Q}$ and $\\bar{\\Phi}$ learn during the training process? I am curious about more quantitative and qualitative results on these additionally introduced intervention modules.\\n2. Could this method transfer to other VLMs such as EVA-CLIP and to single-tower VLMs such as BEiT?\", \"flag_for_ethics_review\": [\"'No ethics review needed.'\"], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer zmpK (Part 1)\", \"comment\": \"Thank you very much for your valuable comments and feedback, which significantly contributes to improving the quality of this paper. Detailed responses to your concerns and questions are listed below.\\n\\n>**W1: Causal Representation Learning baselines adapting from other tasks can largely consolidate the motivation of this paper.**\\n\\n**Answer:** We adapt two representative causal representation learning methods from invariant learning: IRM [1] and IB-IRM [2]. They mitigate spurious correlations by ensuring the invariance of the conditional probability of the label $Y$ given the causal representation across varied training environments. The evaluation is conducted using ResNet-50 as the backbone model. The experimental results on four datasets are listed as follows:\\n\\n| Dataset | Waterbird | | CelebA | | ImageNet | | PACS | |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| Test Acc (%) | Worst-case | Average | Worst-case | Average | Worst-case | Average | Worst-case | Average |\\n| ERM | $54.7$ | $84.1$ | $26.7$ | $78.2$ | $80.5$ | $88.5$ | $80.0$ | $92.6$ | \\n| IRM [1] | $64.7$ | $83.9$ | $67.1$ | $86.2$ | $87.9$ | $93.6$ | $80.7$ | $\\mathbf{93.8}$ |\\n| IB-IRM [2] | $65.3$ | $84.3$ | $67.9$ | $85.8$ | $88.3$ | $93.9$ | $81.2$ | $93.4$ |\\n| LogicAl-PT | $\\mathbf{67.5}$ | $86.2$ | $\\mathbf{69.9}$ | $\\mathbf{87.3}$ | $\\mathbf{90.2}$ | $\\mathbf{95.1}$ | $\\mathbf{82.4}$ | $93.7$ | \\n\\n[1] Arjovsky, Martin, et al. \\\"Invariant risk minimization.\\\" arXiv preprint arXiv:1907.02893 (2019).\\n\\n[2] Ahuja, Kartik, et al. \\\"Invariance principle meets information bottleneck for out-of-distribution generalization.\\\" Advances in Neural Information Processing Systems 34 (2021): 3438-3450.\\n\\n**Analysis:** We can find that LogicAl-PT outperforms the typical causally invariant representation learning methods. The underlying reason stems from the advantage of 'sufficient and necessary' causal representation over traditional causal representation, which also forms the motivation for proposing LogicAl-PT for prompt tuning of VLMs. Prevalent causal representation learning methods primarily aim to mitigate non-causal spurious correlations. In contrast, the concept of 'sufficiency and necessity' goes further by excluding not only non-causal spurious correlations but also causal relationships that are 'sufficient but not necessary' or 'necessary but not sufficient'. 
We provide specific examples to clarify these types of relationships and explain why only 'sufficient and necessary' relations remain stable across diverse data distributions in Figure 4 on page 14 in Appendix A.\\n\\n&nbsp;\\n\\n>**W2: Ablation study of the proposed method is not enough. What is the effect of different $\\alpha$, $\\beta$?**\\n\\n**Answer:** We add experiments to evaluate the effects of two significant hyper-parameters in the proposed objective (i.e., $\\alpha$ and $\\beta$) on model performance. Since the results on other datasets present a similar tendency to those on ImageNet-1K, we herein focus on ImageNet-1K, with ResNet-50 as the backbone model. When evaluating the effect of $\\alpha$, we fix $\\beta=1.0$. When evaluating the effect of $\\beta$, we fix $\\alpha=20.0$. The experimental results are shown in the following two tables:\\n\\n| $\\alpha$ | $0.0$ | $1.0$ | $10.0$ | $20.0$ | $30.0$ | $50.0$ |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: |\\n| worst-case (%) | $78.6$ | $80.9$ | $86.1$ | $90.2$ | $88.7$ | $79.5$ |\\n| average (%) | $87.2$ | $89.4$ | $93.5$ | $95.1$ | $94.0$ | $87.9$ |\\n\\n| $\\beta$ | $0.0$ | $0.10$ | $1.00$ | $10.0$ | $20.0$ | $30.0$ |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: |\\n| worst-case (%) | $88.6$ | $89.7$ | $90.2$ | $89.3$ | $87.2$ | $85.5$ |\\n| average (%) | $94.3$ | $94.8$ | $95.1$ | $94.5$ | $93.2$ | $91.9$ |\\n\\n**Analysis:** When $\\alpha=0.0$, the model is tuned with only textual logic alignment; when $\\beta=0.0$, the model is tuned with only cross-modal logic alignment. We can find that the performance of LogicAl-PT is more sensitive to the selection of $\\alpha$ than to the selection of $\\beta$. To effectively mitigate spurious correlations in VLMs, careful tuning of $\\alpha$ is essential. Regarding $\\beta$, a small value is safer in practice, as a large $\\beta$ may compromise the discriminative capability of the extracted features.\"}", "{\"title\": \"Response to Reviewer Rh7Y (Part 1)\", \"comment\": \"Thank you very much for your valuable comments and feedback, which significantly contributes to improving the quality of this paper. Detailed responses to your concerns and questions are listed below.\\n\\n>**W1: Confused symbols: $\\Phi$ denotes the filter in Cross-modal logic alignment but visual representation space in Textual logic alignment.**\\n\\n**Answer:** Thanks for your suggestions. To distinguish the filter from the representation space, we use a different symbol ($\\mathbf{h}$) to denote the filter in the revised paper. Meanwhile, $\\Phi_t$ and $\\Phi_v$ continue to represent the textual and visual representation spaces, respectively. 
Accordingly, the corresponding symbols in Figure 1 (on page 5) have also been updated.\\n\\n&nbsp;\\n\\n>**W2: Timid evaluation section.**\\n\\n**Answer:** To comprehensively validate the effectiveness of the proposed method, we have expanded the evaluation section (Section 5) by adding the following contents into the main text of the revised paper:\\n\\n**1)** Two state-of-the-art prompt tuning methods as **baselines** (PromptSRC [1] and DePT [2]), in Section 5.2 (on page 8);\\n\\n**2)** Experiments on another **backbone model (i.e., ViT-B/32)**, in Section 5.2 (on page 8);\\n\\n**3)** **Visualization experiments** to evaluate whether the proposed LogicAl-PT effectively mitigates cross-modal spurious correlations and enhances logical alignment between visual representation and text label, in Section 5.3 (on page 8-9);\\n\\n**4)** Visual explanation to illustrate **the necessity of the proposed textual logic alignment**, in Section 5.4 (on page 9);\\n\\n**5)** **Ablation study** on the sensitivity of hyper-parameters, in Section 5.4 (on page 10).\\n\\n[1] Khattak, Muhammad Uzair, et al. \\\"Self-regulating prompts: Foundational model adaptation without forgetting.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Zhang, Ji, et al. \\\"Dept: Decoupled prompt tuning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n&nbsp;\\n\\n>**W3: The motivation is not clear since the authors employ PNS without explanation; It would be helpful to include a better explanation of what the \\\"spurious correlation in vision-language models\\\" is exactly.** \\n\\n**Answer:** To better **clarify the motivation** for using logic alignment to integrate cross-modal mitigation of spurious correlations and cross-modal feature alignment, we summarize four **specific examples in Figure 4 (on page 14 of Appendix A)** to address the following questions:\\n\\n**1)** What the \\\"**spurious correlation** in vision-language models\\\" is exactly.\\n\\n**2)** Apart from mitigation of spurious correlations, **why corss-modal logic alignment (i.e., sufficiency and necessity) is also necessary** for enhancing out-of-distribution generalization performance in vision-language models?\\n\\n&nbsp;\\n\\n>**W4: Too much content from the existing papers and it could be removed by referring these papers.**\\n\\n**Answer:** Very good suggestion. We have removed the redundant parts in Section 3.2 (PNS) and Section 3.3 (PNS modeling). Additional details about PNS have been moved to Appendix C for further reference.\\n\\n&nbsp;\\n\\n>**W5: Section 4.1: The NSC feature shown in Figure 1 is not explained by the authors themselves how to make use and take advantage of this.**\\n\\n**Answer:** Sorry, we missed providing the explanation for the NSC feature in Figure 1. \\\"NSC\\\" represents \\\"necessary and sufficient cause''. Specifically, the NSC features in textual and visual modalities are given by $f([Q,CLASS])$ and $h(g(X))$, respectively. 
The interventions in textual and visual modalities are given by $f([\\\\bar{Q},CLASS])$ and $\\\\bar{h}(g(X))$, respectively.\\n\\nAt the training stage, \\\"NSC\\\" features are optimized by adjusting the learnable prompt $Q$ and filter $h$ using the proposed objective (10), as stated on line 290, page 6.\\n\\nAt the inference stage, predictions are made using the cosine similarity between textual and visual \\\"NSC\\\" features.\\n\\n**Advantage:** By optimizing the textual and visual \\\"NSC\\\" features through the proposed objective (10), the optimal textual \\\"NSC\\\" features become logically aligned with both text labels and the optimal visual \\\"NSC\\\" features. Leveraging these features for prediction effectively eliminates spurious correlations and improves out-of-distribution generalization performance.\"}", "{\"metareview\": \"This paper aims to tackle the issue of cross-modal spurious correlation for parameter-efficient prompt tuning of VLMs. This work pointed out that cross-modal mitigation of spurious correlations during prompt tuning of vision-language models remains an open question, and further proposed the logic of logic alignment and a practical framework to calculate the probability of necessity and sufficiency (PNS) between the textual label and textual representations.\\n\\nThis paper recevied diverse ratings, i.e., 6, 6, 5, 3. The AC has read reviewers' comments, authors' responses, and the revised version. The idea of calculating the probability of necessity and sufficiency and analyzing the cross-modal spurious correlation for the aspect of causal inference is novel. The main reasons for reject are that (1) the paper has been revised significantly to include more experimental results and quantitive analysis compared to the original version, which indicates that the submission is not fully ready for publication, (2) although the performance comparisons with PromptSRC and DePT have been included in the revision, the performance gap between the proposed method, PromptSRC and DePT are marginal, and the reasons on why the gap is marginal is not discussed. (3) The presentation of the paper is not good, which is not easy to follow the idea and contributions. Therefore, the AC does not recommend the current submission as accept.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised the concern on (1) a lack of quantitative analysis on the contribution of PNS, and (2) the workload of the original submitted paper may be questionable.\"}", "{\"title\": \"Common Responses\", \"comment\": \"We sincerely thank all the reviewers for their efforts and valuable feedback, which have greatly contributed to improving the quality of our work. In particular, we appreciate the reviewers' recognition of our:\\n\\n**(1) novel and intuitive methodology design** (by reviewer **Rh7Y**, **zmpK**); \\n\\n**(2) extensive and thorough theoretical analysis** (by Reviewer **fk2Q**, **zmpK**, **Rh7Y**); \\n\\n**(3) significant performance improvement** on multiple benchmarks (by Reviewer **ZvzB**, **zmpK**).\\n\\nTo address the reviewers' concerns and questions, we have provided detailed explanations for unclear content and conducted extensive experiments to further validate the superiority of our method. 
Specifically, **we summarize the updated content included in the revised version of the paper as follows:**\\n\\n>**Clarifications:**\\n\\n**(1)** Updated illustrations and descriptions of the overall framework in **Figure 1, on page 5**.\\n\\n**(2)** Added a detailed illustration to analyze the necessity and superiority of logic alignment in VLMs in **Appendix A, on page 14**.\\n\\n>**Evaluations:**\\n\\n**(1) Visualization experiments and analysis** that demonstrate the proposed LogicAl-PT effectively mitigates cross-modal spurious correlations and enhances logical alignment between visual representation and text label, in **Section 5.3 (on page 8-9)**.\\n\\n**(2)** Added two state-of-the-art prompt tuning methods as **additional baselines** (i.e., PromptSRC and DePT), in **Section 5.2 (on page 8)**.\\n\\n**(3)** Experiments on **another backbone model** (i.e., ViT-B/32), in **Section 5.2 (on page 8)**.\\n\\n**(4)** **Visual explanation** to illustrate the necessity of the proposed textual logic alignment, in **Section 5.4 (on page 9-10)**.\\n\\n**(5)** **Ablation study** on the sensitivity of hyper-parameters, in **Section 5.4 (on page 10)**.\\n\\n**(6)** Experimental comparison with **causal representation learning baselines** adapted from single-modal scenarios, **on line 1126-1151, page 21-22, in Appendix D**.\\n\\n**(7)** Evaluation of computational overhead to verify the **computational efficiency** of our method, **on line 1107-1124, page 21, in Appendix D**.\"}", "{\"title\": \"Looking forward to the reviewer's responses\", \"comment\": \"Dear Reviewer,\\n\\nThank you very much for taking the time to provide constructive and valuable comments on this work, which have significantly contributed to improving the quality of the paper.\\n\\nAs the discussion period nears its conclusion, we would like to know if there are any additional clarifications or experiments we can provide. We look forward to your feedback and kindly invite you to update your score if your concerns have been adequately addressed.\\n\\nThank you once again for your time!\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"summary\": \"The paper presents a novel framework, LogicAl-PT, that addresses the challenge of cross-modal mitigation of spurious correlations in prompt tuning of vision-language models. The authors introduce a new concept, logic alignment, which integrates the mitigation of spurious correlations with cross-modal alignment of representations, and demonstrates its effectiveness through theoretical analysis and empirical results on various out-of-distribution datasets. LogicAI-PT earns competitive performance compared with traditional prompt-tuning methods for CLIP model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Extensive theoretical analysis is provided to verify the proposed concept, PNS and PNS risk modeling.\", \"weaknesses\": \"- In contrast to the detailed theoretical analyses, the empirical verifications are fairly absent in this paper. Take the most recent competitor Coopood as an example, this paper presents much fewer empirical analyses, i.e. only 2 tables in the experiment section for verification. 
More ablation studies about the hyper-parameter chosen, visual results about the improvement on spurious correlations should be provided.\\n\\n-Some more recent prompt tuning methods[a][b] should be discussed and compared with.\\n\\n- Experiments on more architectures like ViT except for ResNet-50 should be done as well.\\n\\n[a] Self-regulating Prompts: Foundational Model Adaptation without Forgetting, https://arxiv.org/abs/2307.06948)\\n[b] DePT: Decoupled Prompt Tuning, https://arxiv.org/abs/2309.07439\", \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to the reviewer's responses\", \"comment\": \"Dear Reviewer Rh7Y,\\n\\nAs the Reviewer-Author discussion phase **concludes at midnight today (23:59, Dec 2nd, AoE)**, we kindly ask if we have adequately addressed your questions and concerns. Further discussions are welcome if you have any additional concerns or questions.\\n\\nThank you once again for your time!\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Looking forward to the reviewer's responses\", \"comment\": \"Dear Reviewer fk2Q,\\n\\nAs the Reviewer-Author discussion phase **concludes at midnight today (23:59, Dec 2nd, AoE)**, we kindly ask if we have adequately addressed your questions and concerns. Further discussions are welcome if you have any additional concerns or questions.\\n\\nThank you once again for your time!\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer ZvzB\", \"comment\": \"Thank you very much for your valuable feedback and suggestions, which significantly contributes to improving the quality of this paper. Detailed responses to your concerns and questions are listed below.\\n\\n>**W1: The proposed method shows superior performance compared with other benchmarks. What is the computational efficiency compare to simpler methods?**\\n\\n**Answer:** Thank you for your appreciation of our work. Regarding computational efficiency, we analyze and compare the computational overhead of our method with several existing methods. The results are presented in the following table.\\n\\n| Method | Params | Params+%CLIP | FLOPS | FLOPS+%CoOp |\\n| :--- | :---: | :---: | :---: | :---: |\\n| CoOp | 2048 | 0.004% | 354.50G | - |\\n| ERM | 0.514M | 1.05% | 354.53G | 0.01% |\\n| CoOPood | 1.026M | 2.10% | 354.56G | 0.02% |\\n| LogicAl-PT | 1.028M | 2.10% | 354.57G | 0.02%. | \\n\\n**Analysis:** We can see the the overall parameters and Floating Point Operations (FLOPS) of our LogicAl-PT are only 2.1% and 0.02% higher than those of CLIP and CoOp, respectively. Compared with the improvement in out-of-distribution generalization performance, our method LogicAl-PT exhibits considerable computational efficiency in terms of the number of parameters and FLOPS.\"}", "{\"title\": \"Thanks for the re-evaluation\", \"comment\": \"Thank you very much for your efforts and response. We are pleased to have addressed your concerns, and truly appreciate your re-evaluation of our work and the decision to raise the score. Your valuable feedback has greatly facilitated the improvement of our work. 
Further discussions are welcome if you have any additional concerns or questions.\"}", "{\"title\": \"Response to Reviewer Rh7Y (Part 3)\", \"comment\": \">**Q1: Usefulness of the PNS term.**\\n\\n**Answer:** \\nTo verify that the tuned models developed by our method LogicAl-PT exploit the necessary and sufficient features rather than spurious features, we sample some data instances to generate **visual explanations** for the selected model using Grad-CAM [3]. The commonly used Grad-CAM can produce a localization map which highlights the important regions in the input image that a deep learning model depends on for predicting the label.\\n\\n**The visualization results are displayed in Figure 2 (on page 9), and the detailed analysis on the visualization results is provided on line 410-431, page 8.** In summary, visualization results demonstrate the proposed LogicAl-PT can effectively exploit the 'sufficient and necessary' features and mitigate the unstable spurious features, including non-causal spurious features, 'sufficient but not necessary' features and 'necessary but not sufficient' features. This explains why LogicAl-PT achieves superior out-of-distribution generalization performance, delivering more consistent results across diverse data distributions compared to its competitors.\\n\\n[3] Selvaraju, Ramprasaath R., et al. \\\"Grad-cam: Visual explanations from deep networks via gradient-based localization.\\\" Proceedings of the IEEE international conference on computer vision. 2017.\\n\\n&nbsp;\\n\\n>**Q2: Usefulness of the textual logic alignment.**\\n\\n**Answer:** \\n\\n**1) Qualitative Analysis:** Since the textual representations (corresponding to variable $\\\\Phi_t$) are the class-wise mapping from the text labels, the sufficiency of variable $Y$ for variable $\\\\Phi_t$ (i.e., $Y\\\\Rightarrow \\\\Phi_t$) is naturally guaranteed while the reverse $Y\\\\Leftarrow \\\\Phi_t$ is not ensured. In other words, textual representations ($\\\\Phi_t$) must be necessary causes for variable $Y$, but they don't have to be sufficient causes for variable $Y$. Therefore, textual logic alignment is proposed to enhance the sufficiency of text representations ($\\\\Phi_t$) for label $Y$. Accordingly, when cross-modal logic alignment (i.e., $\\\\Phi_t \\\\Leftrightarrow \\\\Phi_v$) is achieved, combining textual logic alignment can mitigate the visual features that are not sufficient for variable $Y$.\\n\\n**2) Experimental Validation:** To investigate the actual role that textual logic alignment serves, we visualize the features which is utilized by the model tuned without textual logic alignment (w/o TLA), i.e., $\\\\beta=0$. In particular, when we set $\\\\beta=0$, $\\\\alpha$ is tuned to its optimal value, i.e., the cross-modal logic alignment ($\\\\Phi_t \\\\Leftrightarrow \\\\Phi_v$) is enhanced. **The visualization results are displayed in Figure 3 on page 10**. From the visualization results, we find that adding textual logic alignment mitigates the visual features which are not sufficient for predicting $Y$. **Therefore, the above qualitative analysis is validated by the visualization results.**\\n\\nDetailed visualization results and analysis are provided on line 483-513, page 10.\\n\\n&nbsp;\\n\\n>**Q3: How to extractive NSC features in textual and visual modality.**\\n\\n**Answer:** Sorry, we missed providing the explanation for the NSC feature in Figure 1. \\\"NSC\\\" represents \\\"necessary and sufficient cause''. 
Specifically, the NSC features in textual and visual modalities are given by $f([Q,CLASS])$ and $h(g(X))$, respectively. The interventions in textual and visual modalities are given by $f([\\\\bar{Q},CLASS])$ and $\\\\bar{h}(g(X))$, respectively.\\n\\nAt the training stage, \\\"NSC\\\" features are optimized by adjusting the learnable prompt $Q$ and filter $h$ using the proposed objective (10), as stated on line 290, page 6.\\n\\nAt the inference stage, predictions are made using the cosine similarity between textual and visual \\\"NSC\\\" features.\\n\\nWe have updated illustrations and descriptions of the overall framework in **Figure 1, on page 5**, in the revised paper.\\n\\n&nbsp;\\n\\nThank you again for your valuable feedback. Further discussions are always welcome if you have any additional concerns or questions.\"}", "{\"summary\": \"This article introduces the concept of logical alignment to address the cross-modal mitigation problem of spurious correlation for prompt adjustment in visual language models. To achieve this, the authors maximize the probability of necessity and sufficiency corresponding to cross-modal and textual logical alignment. Theoretical analysis proves that the proposed method has a tighter generalization error bound compared to existing approaches. Performance is analyzed across a few different test data distributions, and components of the method are ablated.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The generalization error bond of the presented method is provided with theory analysis in Appendix A.\\n\\nThe novelty of the method is how to integrate the probability of necessity and sufficiency in multi-modal learning.\", \"weaknesses\": \"The paper is not easy to follow. This is due to the symbols being confused, e.g., $\\\\Phi$ denotes the filter in Cross-modal logic alignment but visual representation space in Textual logic alignment.\\nWhile the first half of the paper explains the idea and motivation well, creating a rightful sense of expectation of the result, the section on the results somewhat comes short of delivering the findings with a bang. After reading the first half I was excited to read the next pages to find \\\"Where are those indeed integrated areas for boosting the expected performance \\\", and tingling with an expectation of learning something new. But then, for some reason, the Overall Performance and Ablation Study sections are very timid and just present dry numbers for each of the tests that were planned. \\n\\n1. It would be helpful to include a better explanation of what the \\\"spurious correlation in vision-language models\\\" is exactly. Maybe a picture. \\n2.The paper borrowed too much content from the existing papers and it could be removed by referring these papers. Even so, the motivation is not clear since the authors employ PNS without explanation.\\n\\n3. Page 6, Section 4.1: The NSC feature shown Figure 1 is not explained by the authors themselves how to make use and take advantage of this. \\n\\n4. Compared to CoOPood, it seems that the proposed method exploits PNS terms instead of mutual information to align cross-modal representations. I am wondering how effective the PNS term is in cross-modal mitigation of spurious correlation. \\n5. Why was it necessary to do textual logic alignment?\", \"questions\": \"I would like to hear the authors' discussion regarding the three weakness that I highlikeed above:\\n\\n1. Usefulness of the PNS term. \\n\\n2. 
Usefulness of the textual logic alignment. \\n\\n3. How to extractive NSC features in textual and visual modality.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Dec 2 - last day for reviewers' questions\", \"comment\": \"Dear Reviewer fk2Q and Rh7Y,\\n\\nThis is a kind reminder that December 2 is the last day for reviewers to ask questions to authors. As the paper received diverse ratings and your initial ratings are negative, could you check the authors' responses by today and see whether the responses addressed your concerns? Your constructive and timely communications are strong contributions to the reviewing process.\\n\\nThank you,\\n\\nAC\"}", "{\"comment\": \"Thanks! This is helpful.\"}", "{\"title\": \"Response to Reviewer fk2Q (Part 1)\", \"comment\": \"Thank you very much for your valuable feedback and suggestions, which significantly contributes to improving the quality of this paper. Detailed responses to your concerns and questions are listed below.\\n\\n>**W1: Ablation studies about the hyper-parameter chosen should be provided..**\\n\\n**Answer:** We add experiments to evaluate the effects of two significant hyper-parameters in the proposed objective (i.e., $\\\\alpha$ and $\\\\beta$) on model performance. Since the results on other datasets present the similar tendency as on ImageNet-1K, we herein focus on ImageNet-1K, with ResNet-50 as backbone model. When evaluating the effect of $\\\\alpha$, we fix $\\\\beta=1.0$ . When evaluating the effect of $\\\\alpha$, we fix $\\\\alpha=20.0$. The experimental results are shown in the following two tables:\\n\\n| $\\\\alpha$ | $0.0$ | $1.0$ | $10.0$ | $20.0$ | $30.0$ | $50.0$ |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: |\\n| worst-case (%) | $78.6$ | $80.9$ | $86.1$ | $90.2$ | $88.7$ | $79.5$ |\\n| average (%) | $87.2$ | $89.4$ | $93.5$ | $95.1$ | $94.0$ | $87.9$ |\\n\\n| $\\\\beta$ | $0.0$ | $0.10$ | $1.00$ | $10.0$ | $20.0$ | $30.0$ |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: |\\n| worst-case (%) | $88.6$ | $89.7$ | $90.2$ | $89.3$ | $87.2$ | $85.5$ |\\n| average (%) | $94.3$ | $94.8$ | $95.1$ | $94.5$ | $93.2$ | $91.9$ |\\n\\n**Analysis:** When $\\\\alpha=0.0$, model is tuned with only textual logic alignment; when $\\\\beta=0.0$, models is tuned with only cross-modal logic alignment. We can find the performance of LogicAl-PT is more sensitive to the selection of $\\\\alpha$ than the selection of $\\\\beta$. To effectively mitigate spurious correlations in VLMs, careful tuning of $\\\\alpha$ is essential. Regarding $\\\\beta$, a small value is safer in practice, as a large $\\\\beta$ may compromise the discriminative capability of the extracted features.\\n\\n&nbsp;\\n\\n>**W2: Visual results about the improvement on spurious correlations should be provided.**\\n\\n**Answer:** We add visualization experiment in the updated paper.\\n\\n**Setup:** For the purpose of verifying that the tuned models developed by our method LogicAl-PT exploit the necessary and sufficient features rather than spurious features, we sample some data instances to generate visual explanations for the selected model using Grad-CAM [1]. The commonly used Grad-CAM can produce a localization map which highlights the important regions in the input image that a deep learning model depends on for predicting the label. \\n\\n**Results: The detailed visualization results are displayed in Figure 2 (on page 9)**. 
The pivotal features employed by various prompt tuning methods are highlighted in red. **The visualization results reveal that the proposed LogicAl-PT demonstrates three notable advantages** over existing prompt-tuning methods: **1) LogicAl-PT can effectively eliminate the non-causal spurious features** that are associated with the label (i.e., 'background' in WaterBird dataset and 'baby' in ImageNet-1K dataset). **2) LogicAl-PT can mitigate the 'sufficient but not necessary' features** that demonstrate inconsistent presence across different data instances. For example, the shape of feet is a 'sufficient but not necessary' feature for classifying the picture of a bird as 'waterbird' or 'landbird' because its feet can retract or remain hidden when the bird is lying down or in flight. **3) LogicAl-PT can mitigate the 'necessary but not sufficient' features** which can impact the classification performance when the distribution of these 'necessary but not sufficient' features varies. For example, the wings of birds are 'necessary but not sufficient' features for distinguishing 'waterbird' from 'landbird'.\\n\\n**Analysis:** In summary, visualization results demonstrate the proposed LogicAl-PT can effectively exploit the 'sufficient and necessary' features and mitigate the unstable features, including non-causal spurious features, 'sufficient but not necessary' features and 'necessary but not sufficient' features. **This explains why LogicAl-PT achieves superior out-of-distribution generalization performance**, delivering more consistent results across diverse data distributions compared to its competitors.\\n\\n[1] Selvaraju, Ramprasaath R., et al. \\\"Grad-cam: Visual explanations from deep networks via gradient-based localization.\\\" Proceedings of the IEEE international conference on computer vision. 2017.\"}", "{\"title\": \"Response to Reviewer fk2Q (Part 2)\", \"comment\": \">**W3: Experiments on more architectures like ViT except for ResNet-50 should be done as well.**\\n\\n**Answer:** We have added experiments on four commonly used datasets with ViT-B/32 as the backbone model. The experimental results are shown in the following table.\\n| Dataset | Waterbird | | CelebA | | ImageNet | | PACS | |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| Test Acc (%) | Worst-case | Average | Worst-case | Average | Worst-case | Average | Worst-case | Average |\\n| CLIP | $41.4$ | $65.3$ | $69.7$ | $85.2$ | $51.4$ | $75.8$ | $81.7$ | $93.8$ |\\n| CoOp | $43.5$ | $77.4$ | $26.2$ | $77.0$ | $87.1$ | $92.8$ | $82.4$ | $94.5$ |\\n| ERM | $49.6$ | $78.3$ | $25.9$ | $76.8$ | $86.7$ | $93.3$ | $82.9$ | $94.1$ |\\n| CoOPood | $52.5$ | $79.2$ | $27.1$ | $76.5$ | $89.9$ | $94.6$ | $82.7$ | $94.4$ |\\n| PromptSRC | $50.8$ | $79.5$ | $69.3$ | $85.9$ | $87.8$ | $94.1$ | $83.4$ | $94.8$ |\\n| DePT+PromptSRC | $51.7$ | $80.0$ | $70.2$ | $86.3$ | $87.4$ | $94.3$ | $83.5$ | $95.1$ |\\n| LogicAl-PT | $\\\\mathbf{61.2}$ | $\\\\mathbf{80.3}$ | $\\\\mathbf{73.1}$ | $\\\\mathbf{86.9}$ | $\\\\mathbf{91.8}$ | $\\\\mathbf{95.4}$ | $\\\\mathbf{84.3}$ | $\\\\mathbf{95.2}$ |\\n\\n**Analysis:** The results show that our method LogicAl-PT consistently outperforms the competitors on both worst-case and average test accuracy in four commonly used datasets. 
In particular, LogicAl-PT achieves around 9%, 3%, 2% and 1% higher worst-case accuracy than the second best algorithm on Waterbird, CelebA, ImageNet-1K and PACS when ViT-B/32 is used as backbone model.\\n\\n&nbsp;\\n\\n>**W4: Some more recent prompt tuning methods[a] (PromptSRC) [b] (DePT) should be discussed and compared with.**\\n\\n**Answer:** We have added the suggested recent prompt tuning methods: PromptSRC and DePT as baselines. \\n\\n**1)** When adopting ResNet-50 as backbone model, the corresponding results are listed as follows:\\n| Dataset | Waterbird | | CelebA | | ImageNet | | PACS | |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| Test Acc (%) | Worst-case | Average | Worst-case | Average | Worst-case | Average | Worst-case | Average |\\n| CoOPood | $60.3$ | $\\\\mathbf{86.3}$ | $31.6$ | $78.6$ | $85.8$ | $92.9$ | $81.5$ | $92.8$ |\\n| PromptSRC | $57.2$ | $85.5$ | $68.2$ | $85.3$ | $81.6$ | $89.4$ | $81.7$ | $93.6$ |\\n| DePT+PromptSRC | $57.9$ | $86.0$ | $68.3$ | $85.7$ | $82.0$ | $90.1$ | $81.6$ | $\\\\mathbf{93.9}$ |\\n| LogicAl-PT | $\\\\mathbf{67.5}$ | $86.2$ | $\\\\mathbf{69.9}$ | $\\\\mathbf{87.3}$ | $\\\\mathbf{90.2}$ | $\\\\mathbf{95.1}$ | $\\\\mathbf{82.4}$ | $93.7$ | \\n\\n**2)** When adopting ViT-B/32 as backbone model, the corresponding results are listed as follows:\\n| Dataset | Waterbird | | CelebA | | ImageNet | | PACS | |\\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| Test Acc (%) | Worst-case | Average | Worst-case | Average | Worst-case | Average | Worst-case | Average |\\n| CoOPood | $52.5$ | $79.2$ | $27.1$ | $76.5$ | $89.9$ | $94.6$ | $82.7$ | $94.4$ |\\n| PromptSRC | $50.8$ | $79.5$ | $69.3$ | $85.9$ | $87.8$ | $94.1$ | $83.4$ | $94.8$ |\\n| DePT+PromptSRC| $51.7$ | $80.0$ | $70.2$ | $86.3$ | $87.4$ | $94.3$ | $83.5$ | $95.1$ |\\n| LogicAl-PT | $\\\\mathbf{61.2}$ | $\\\\mathbf{80.3}$ | $\\\\mathbf{73.1}$ | $\\\\mathbf{86.9}$ | $\\\\mathbf{91.8}$ | $\\\\mathbf{95.4}$ | $\\\\mathbf{84.3}$ | $\\\\mathbf{95.2}$ |\\n\\n**Analysis:** The results demonstrate that the proposed LogicAl-PT consistently achieves the highest worst-case test accuracy while maintaining comparable average test accuracy to recent prompt tuning methods. In contrast to PromptSRC and DePT, which lack specific designs for mitigating spurious correlations, LogicAl-PT effectively leverages the 'sufficient and necessary' features and mitigates the unstable features, including non-causal spurious features, \\u2018sufficient but not necessary\\u2019 features and \\u2018necessary but not sufficient\\u2019 features. This explains why LogicAl-PT achieves superior out-of-distribution generalization performance, providing more consistent results across diverse data distributions compared to competing methods.\"}", "{\"title\": \"Response to Reviewer zmpK (Part 2)\", \"comment\": \">**Q1: What kind of knowledge does the $\\\\mathbf{\\\\bar{Q}}$ and $\\\\mathbf{\\\\bar{\\\\Phi}}$ learnt durning the training process?**\\n\\n**Answer:** That's a very intriguing question. We have attempted to conduct visualization experiments to interpret the knowledge $\\\\bar{Q}$ and $\\\\bar{\\\\Phi}$ learned during the training process. Unfortunately, we were unable to obtain understandable and meaningful visualization results. 
However, we believe that interpreting these two intervention modules is highly intriguing and warrants further investigation.\\n\\n>**Q2: Could this method transfer to other VLMs such as EVA-CLIP and single tower VLMs such as BEiT?**\\n\\n**Answer:** Yes, the proposed method could be adapted for other VLMs. However, determining how to implement this transfer and how to leverage PNS modeling to enhance the training of other VLMs remains a challenging task. We consider exploring the application of the proposed method to a broader range of VLMs as a future research direction.\"}", "{\"title\": \"Looking forward to the reviewer's responses\", \"comment\": \"Dear Reviewer\\n\\nThank you very much for taking the time to provide constructive and valuable comments on this work, which have significantly contributed to improving the quality of the paper.\\n\\nAs the discussion period nears its conclusion, we would like to know if there are any additional clarifications or experiments we can provide. We look forward to your feedback and kindly invite you to update your score if your concerns have been adequately addressed.\\n\\nThank you once again for your time!\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thanks for the response. It solves my concerns and I will raise my score.\"}", "{\"summary\": \"This paper presents Logical-pt, a framework for mitigating spurious correlations in vision-language models. It uses causally motivated logic alignment to align visual and textual features during prompt tuning. The method is backed by a tighter generalization error bound and empirically validated on several datasets, outperforming existing methods in out-of-distribution generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"the results reported in this paper demonstrates good performance on multiple benchmarks\", \"this paper did extensive evaluations and experiments to validate the method's effectiveness\"], \"weaknesses\": [\"The proposed method shows superior performance compared with other benchmarks. What is the computational efficiency compare to simpler methods?\"], \"questions\": \"same as above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Rh7Y (Part 2)\", \"comment\": \">**W6: Compared to CoOPood, it seems that the proposed method exploits PNS terms instead of mutual information to align cross-modal representations. I am wondering how effective the PNS term is in cross-modal mitigation of spurious correlation.**\\n\\n**Answer:**\\n\\n**1) Clarification on CoOPood:** In CoOPood, the conditional mutual information is specifically employed to disentangle visual invariant features from visual spurious features, rather than facilitating cross-modal feature alignment. In contrast, CoOPood employs the assumption that the spurious correlation between visual spurious features and text label follows approximately uniform probability distributions to achieve cross-modal alignment. Therefore, the mitigation of spurious correlations and cross-modal alignment in CoOPood cannot be ensured if the assumption is not met. 
In comparison, our method does not rely on any assumptions about spurious correlations.\\n\\n**2) Effectiveness/Usefulness of the PNS term:** To verify that the tuned models developed by our method LogicAl-PT exploit the necessary and sufficient features rather than spurious features, we sample some data instances to generate **visual explanations** for the selected model using Grad-CAM [3]. The commonly used Grad-CAM can produce a localization map which highlights the important regions in the input image that a deep learning model depends on for predicting the label.\\n\\n**The visualization results are displayed in Figure 2 (on page 9), and the detailed analysis on the visualization results is provided on line 410-431, page 8.** In summary, visualization results demonstrate the proposed LogicAl-PT can effectively exploit the 'sufficient and necessary' features and mitigate the unstable spurious features, including non-causal spurious features, 'sufficient but not necessary' features and 'necessary but not sufficient' features. This explains why LogicAl-PT achieves superior out-of-distribution generalization performance, delivering more consistent results across diverse data distributions compared to its competitors.\\n\\n[3] Selvaraju, Ramprasaath R., et al. \\\"Grad-cam: Visual explanations from deep networks via gradient-based localization.\\\" Proceedings of the IEEE international conference on computer vision. 2017.\\n\\n&nbsp;\\n\\n>**W7: Usefulness of the textual logic alignment: Why was it necessary to do textual logic alignment?**\\n\\n**Answer:** \\n\\n**1) Qualitative Analysis:** Since the textual representations (corresponding to variable $\\\\Phi_t$) are the class-wise mapping from the text labels, the sufficiency of variable $Y$ for variable $\\\\Phi_t$ (i.e., $Y\\\\Rightarrow \\\\Phi_t$) is naturally guaranteed while the reverse $Y\\\\Leftarrow \\\\Phi_t$ is not ensured. In other words, textual representations ($\\\\Phi_t$) must be necessary causes for variable $Y$, but they don't have to be sufficient causes for variable $Y$. Therefore, textual logic alignment is proposed to enhance the sufficiency of text representations ($\\\\Phi_t$) for label $Y$. Accordingly, when cross-modal logic alignment (i.e., $\\\\Phi_t \\\\Leftrightarrow \\\\Phi_v$) is achieved, combining textual logic alignment can mitigate the visual features that are not sufficient for variable $Y$.\\n\\n**2) Experimental Validation:** To investigate the actual role that textual logic alignment serves, we visualize the features which is utilized by the model tuned without textual logic alignment (w/o TLA), i.e., $\\\\beta=0$. In particular, when we set $\\\\beta=0$, $\\\\alpha$ is tuned to its optimal value, i.e., the cross-modal logic alignment ($\\\\Phi_t \\\\Leftrightarrow \\\\Phi_v$) is enhanced. **The visualization results are displayed in Figure 3 on page 10**. From the visualization results, we find that adding textual logic alignment mitigates the visual features which are not sufficient for predicting $Y$. **Therefore, the above qualitative analysis is validated by the visualization results.**\\n\\nDetailed visualization results and analysis are provided on line 483-513, page 10.\"}", "{\"comment\": \"Thank you very much for your responses. Further discussions are always welcome if you have any additional concerns or questions.\"}" ] }
BltaWJZMeR
DataSciBench: An LLM Agent Benchmark for Data Science
[ "Dan Zhang", "Sining Zhoubian", "Min Cai", "Fengzu Li", "Lekang Yang", "Wei Wang", "Tianjiao Dong", "Ziniu Hu", "Jie Tang", "Yisong Yue" ]
This paper presents DataSciBench, a comprehensive benchmark for evaluating Large Language Model (LLM) capabilities in data science. Recent related benchmarks have primarily focused on single tasks, easily obtainable ground truth, and straightforward evaluation metrics, which limits the scope of tasks that can be evaluated. In contrast, DataSciBench is constructed based on a more comprehensive and curated collection of natural and challenging prompts. We develop a semi-automated pipeline for generating ground truth (GT) and validating evaluation metrics. This pipeline utilizes and implements an LLM-based self-consistency strategy to produce accurate GT by leveraging collected prompts, predefined task types, and aggregate metrics. Furthermore, it employs a careful approach to filter a high-quality Task - Function - Code (TFC) list and assess each code execution outcome within TFC based on precisely defined metrics and programmatic rules. Our experimental framework involves testing 6 API-based models, 8 open-source general models, and 9 open-source code generation models using the diverse set of prompts we have gathered. Through this approach, we aim to provide a more comprehensive and rigorous evaluation of LLMs in the domain of data science, shedding light on their strengths and weaknesses. Experimental results demonstrate that API-based models greatly outperform open-sourced models on all metrics except for VLM-as-a-judge, and that Deepseek-Coder-33b-instruct achieves the highest score among open-sourced models.
[ "data science", "data analysis and visualization", "benchmarking language model", "large language models" ]
https://openreview.net/pdf?id=BltaWJZMeR
https://openreview.net/forum?id=BltaWJZMeR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xN0CIiHvjA", "sOsgNDGOMQ", "qDjl5BlPKF", "o0L32ix8PH", "mYujRZVi1y", "lWpyNpgfDu", "iUMrpMAFDL", "hdO314Kvvk", "hPbkvSoYOG", "gb1fxscwV5", "frg79gRjlm", "eQdcqftmmo", "bIecnUTkD0", "Iy9KeTvvHq", "E1caqE3Xju", "DFTp21Rw2m", "ApdUCG7RCB", "960I8fzvty", "1klRQH5idJ" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1732558243580, 1730078277457, 1730604904044, 1732614733875, 1732260351281, 1732338051320, 1730707549800, 1732258578083, 1730630772602, 1732258330260, 1732259984233, 1730083502868, 1732768315723, 1732260551702, 1732260466707, 1732260279222, 1733809596983, 1732726060585, 1732260678061 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_amAm" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_h1zC" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_V76c" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_5yBS" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_h1zC" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_ZPJR" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_5yBS" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_amAm" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_V76c" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ], [ "ICLR.cc/2025/Conference/Submission11369/Reviewer_ZPJR" ], [ "ICLR.cc/2025/Conference/Submission11369/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks to the authors for their rebuttal and for including larger open-source models. However, the rebuttal still does not address the most critical concerns raised about novelty and insights by me and other reviewers. Also, the authors did not sufficiently respond to the first 2 weaknesses and the questions.\\n\\nRegarding W1, the novelty, significance, and impact of this benchmark in light of existing benchmarks are still not convincing. Yes, I agree that the tasks in DataSciBench are more diverse, but what differentiates this benchmark -- focus on general programming abilities, general-purpose reasoning, data science knowledge, or a combination of these? Also, the authors have mentioned multiple times that \\\"domain-specific focus\\\" is one of the strengths of this benchmark. But it is not clear what are the domains here. The tasks are still largely general programming and data science-oriented without requiring any knowledge of scientific or social science domains.\\n\\nRegarding W2, I did check the example prompts in the appendix. Hence, I asked: if the task prompts provide such detailed instructions, does it even reflect a practical setting? The original prompts look more realistic than the qualified prompts.\\n\\nAlso, I fully agree with reviewers 5yBS and h1zC on the overall poor presentation and vague descriptions throughout the paper. 
I hope the authors update their paper following their suggestions.\"}", "{\"summary\": \"The paper introduces DataSciBench, a new benchmark designed to evaluate the capabilities of LLMs in data science tasks. It addresses limitations of existing benchmarks by focusing on complex, multi-task scenarios with challenging prompts. The authors develop a semi-automated pipeline for generating ground truth and validating evaluation metrics, called Task-Function-Code (TFC). The study tests 23 models, including API-based and open-source models, revealing that API-based models generally outperform open-source ones.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"None.\", \"weaknesses\": \"1. The writing of this manuscript is not clear and extremely hard to follow. For example, it is unclear to me what the tasks are, how many samples are there in the benchmark, and how does the TFC work, etc. The authors may consider re-write the manuscript, and add some examples of the samples for better comprehension.\\n2. The benchmark seems not novel. There exist many \\\"data science\\\" or coding-related benchmarks for LLMs. The authors claim that previous studies \\\"focusing on single tasks, simplistic evaluation metrics, and readily available ground truth\\\", which lacks citations and discussion. The complexity and necessity of this new benchmark are not convincingly demonstrated.\\n3. Although the evaluation includes numerous models, it lacks depth in insights. A more detailed analysis, such as examining model performance across different question types, could reveal knowledge and reasoning disparities among models.\", \"questions\": \"1. How does the TFC work? Why is it necessary?\\n2. How do the authors ensure the correctness of the generated ground truth, even if the so-called test cases pass? If the ground truth can be easily obtained by just generation and rule-based verification, the tasks may be very easy and straightforward. Then, what is the value of this benchmark?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents DataSciBench, a comprehensive benchmark for assessing large language models (LLMs) in data science applications. DataSciBench includes 6 task types: data cleaning and preprocessing, data exploration, data visualization, predictive modeling, data mining, and report generation. The authors also propose a semi-automated Task-Function-Code (TFC) framework, which assesses model performance from coarse-grained (e.g., completion and success rates) to fine-grained (e.g., data quality scores, visualization completeness) perspectives. The evaluations of 23 models show that API-based models (especially GPT-4o) consistently outperform open-source models. The benchmark sheds light on challenges for LLMs in handling complex, multi-step data science tasks and provides insights into their strengths and limitations in this domain.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper presents DataSciBench, a comprehensive benchmark for assessing large language models (LLMs) in data science applications. I looked at several questions in the attached zip file. The questions are indeed complex enough. Figure 5 / Table 3 provides evidence for data contamination risks and correlation with LiveCodeBench and BigCodeBench.\\n\\n2. 
The authors propose a semi-automated Task-Function-Code (TFC) framework to generate ground truth and obtain evaluation metrics for each subtask and for both coarse-grained and fine-grained perspectives.\\n\\n3. The authors did extensive experiments including 23 models, ranging from API-based (closed-source), open-sourced general, and open-sourced code generation models. GPT-4o still leads the leaderboard, which is not surprising. But it's good to see performance among various models on DataSciBench and the low performance of hard level examples, which can help recognize the challenges in complex multi-hop reasoning.\\n\\n4. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. It's good to see such a comprehensive benchmark for data science released, but it seems somewhat trivial to me for collecting existing prompts in BigCodeBench or LLM-synthesized instructions. Essentially, what's the biggest difference between DataSciBench and previous code benchmarks for data science?\\n\\n2. The ground truths were generated by LLMs via self-consistency, which might contain false positive ground truths.\\n\\n3. The experimental analysis part concludes the overall performance (closed-sourced > open-sourced), the difficulty ablation, and non-contaminated as well as correlations with other two code benchmarks. However, for the insights part, the paper dies not give many details about how models fail on such coding tasks, typical error cases, and how to potentially improve models to solve these issues?\", \"questions\": \"1. Sorry if I missed it, but seems the paper does not mention the total example numbers in DataSciBench?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' detailed response. However, I remain unconvinced by the novelty of this benchmark - the rebuttals on W2 & W3 are mainly claims made by the authors, and there is no clear quantifiable evidence that this benchmark does indeed have more \\u201cnaturalness\\u201d, \\u201cchallenging\\u201d, \\u201cmulti-hop reasoning\\u201d, and \\u201cdiversity of result types\\u201d than existing benchmarks. Given that there are already many existing benchmarks, one would need a strong justification and differentiate that this is not yet another benchmark. The paper still has much room to improve in terms clarity in its presentation (as pointed out by all the other reviewers). I still do not fully appreciate the Task-Function-Code (TFC) list structure, as well as the expert review process, despite the rebuttals to W5 and Q2. I will therefore keep to my original score.\"}", "{\"title\": \"Response to Reviewer 5yBS\", \"comment\": \"```\", \"w5\": \"Concerns on the elaboration on key components.\\n```\\nThank you for raising this important question about the definition of the Task-Function-Code (TFC) list structure. The TFC framework was developed to address several critical challenges in automated evaluation of data science tasks:\\n\\n1. Systematic Task Selection:\\nTFC provides a structured approach to identify and categorize key tasks across six established types. This systematic organization ensures comprehensive coverage of essential data science operations and helps maintain evaluation consistency and completeness.\\n\\n2. Standardized Evaluation Metrics:\\nData science tasks often lack standardized evaluation criteria. 
TFC addresses this by explicitly defining appropriate evaluation functions (also called Aggregation Functions) for each task. For example, data preprocessing tasks require specific metrics that differ from visualization tasks. This standardization ensures fair and consistent assessment.\\n\\n3. Automated Execution Framework:\\nTFC includes executable code components (also called Programmatic Rules) for both tasks and evaluation metrics. This automation significantly improves evaluation efficiency, result reproducibility, and testing scalability.\\n\\n4. Ground Truth Generation:\\nTFC serves as a crucial foundation for establishing ground truth, particularly valuable for complex tasks where ground truth is not readily available, and enables systematic verification and validation of model outputs.\\n\\nOverall, the TFC structure represents a novel contribution by providing a comprehensive framework that bridges the gap between task definition, evaluation criteria, and automated assessment in data science contexts.\\n\\n\\n```\", \"q1\": \"Concerns on the difference between coarse-grained metrics and existing papers.\\n```\\nThank you for your query. We have adopted the established definition of Success Rate (SR) in line with previous works by Hong et al. (2024) and Chen et al. (2021). Furthermore, to assess the data science proficiency of Large Language Models (LLMs) distinctly, we have introduced fine-grained metrics tailored to each data science task, as detailed in Appendix A.5.\\n\\n\\n```\", \"q2\": \"Elaboration of Expert Review process.\\n```\\nThank you for your inquiry. In Stage 1, tasks deemed \\\"easy to evaluate\\\" are those with clearly identifiable correct solutions, such as handling missing values in a data frame. In Stage 2, \\\"unified instructions\\\" entail a standardized format comprising input data, input file, prompt, and expected output file. \\n\\n```\", \"q3\": \"Elaboration of detailed questions.\\n```\\n(1) Requirements in Line 191-192 pertain to prompts associated with the characteristics (1) in Line 068-069.\\n\\n(2) The few-shot examples mentioned in Line 193 are drawn from human-written prompts and are altered as per the task type variations.\\n\\n(3) We utilize around 167 initial prompts from BigCodeBench, refining them into our specified format with a Task-Function-Constraint (TFC) list for standardized evaluation.\\n\\nConcerning the Self-Consistency (SC) strategy, we initially employ this method and then validate the results manually through cross-verification by multiple authors to ensure accuracy and reliability.\"}", "{\"comment\": \"Thank you to the authors for their response. However, I believe some critical issues remain unaddressed:\\n\\n1. **Lack of clarity around TFC**: It is still unclear what TFC is and how it works. Although the paper claims TFC as its main contribution, it is neither strictly defined nor formally introduced in Section 3. Throughout the paper, I found multiple references to TFC, such as \\\"TFC generation and evaluation,\\\" \\\"TFC list,\\\" \\\"TFC pipeline,\\\" and \\\"each TFC in TFC list.\\\" These terms are not explained but instead appear abruptly in the text. Even the newly added Appendix A.3 and Figure 6 fail to provide a clear explanation. Could you elaborate on what TFC is with a specific example? Additionally, how does TFC contribute to task selection, ground truth generation, evaluation, and other aspects of your methodology?\\n\\n2. 
**Validity of ground truth generation**: The process of ground truth generation remains questionable. Could you provide more details on the self-consistency strategy and the manual verification process?\\n\\n3. **Concerns about novelty**: The novelty of this benchmark is not yet convincing. Upon reviewing the provided data samples, the prompts appear to give highly detailed instructions, which might make the tasks relatively straightforward for SoTA LLMs (please correct me if I am wrong). Could you clarify why the chosen questions represent real-world data science challenges?\\nHow do these tasks differ from or exceed the complexity of existing benchmarks, such as SWE-bench, which tackle realistic programming problems?\\n\\n4. **Lack of actionable insights**: The paper would benefit from a more in-depth and systematic analysis of model failures. Specifically: Why do the models fail on certain tasks? What actionable solutions can you propose to improve performance in these areas?\\n\\nAdditionally, the overall presentation of the paper **lacks clarity**, which makes it difficult to follow. The authors may want to avoid using vague expressions that fail to explain concepts clearly or introducing new terms without proper definition. To illustrate this, I provide some examples based on the paragraph on line 73 (new version), outlining the questions that came to mind as I read it for the first time:\\n- **\\\"The gap between task definition, evaluation criteria, and automated assessment in the data science context\\\"**:\\nWhat is the gap? Do existing benchmarks lack clear task definitions or evaluation criteria? Do they fail to support automated assessment? These points were not previously discussed in the paper.\\n- **\\\"From coarse-grained perspectives\\\"**:\\nWhat does \\\"coarse-grained\\\" refer to here? There is no example or figure illustrating the hierarchical structure of TFC, and the explanation does not clarify the two levels of granularity. Without this context, terms like \\\"coarse-grained\\\" and \\\"fine-grained\\\" are confusing.\\n- **\\\"We first aggregate the range of task types, functions, and corresponding codes\\\"**:\\nWhat is a \\\"task\\\"? Where are the task types defined? What is meant by a \\\"function\\\"? Is this a Python function, or something else? What is the purpose of the function? And what does the \\\"code\\\" correspond to? A brief introduction to the context would make this much clearer.\\n- ...\"}", "{\"summary\": \"This paper introduces DataSciBench, a novel benchmark for evaluating the performance of large language models (LLMs) on data science tasks. The authors propose a semi-automated data collection pipeline, complemented by filtering and expert review for data quality. DataSciBench includes 222 data science tasks of 6 task types. Comprehensive evaluation over 6 API-based models and 17 open-sourced models show that DataSciBench is challenging to even the best LLMs.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This work presents a new benchmark dataset for evaluating LLMs on data science tasks, which is a meaningful contribution to the community.\\n2. The benchmark covers representative task types in data science, from data processing to data mining and report generation.\\n3. 
The evaluation setup includes popular open-sourced and proprietary LLMs.\", \"weaknesses\": [\"While this work has the potential to contribute a valuable benchmark to the community, several key issues need to be addressed:\", \"1. The semi-automated pipeline uses a self-consistency strategy to generate ground truth for a portion of the tasks. However, there lacks detail on further quality control. Also, I think the difficulty and authenticity of model generated tasks is questionable.\", \"2. DataSciBench employs instance-specific evaluation scripts that are both generated and verified by LLMs. The quality measure of evaluation functions needs more elaboration.\", \"3. As the author noted in section 5.3, DataSciBench shows a high correlation with LiveCodeBench and BigCodeBench. I personally see this as a negative of the proposed benchmark. Why do we need a benchmark that correlates well with existing ones?\", \"4. It is unclear to me what is the motivation for introducing the Task-Function-Code (TFC) list data structure and how is it a significant contribution. Is there a baseline method that TFC outperforms?\", \"4. The writing of this paper is often hard to follow, lacking elaboration on a lot of key details:\", \"Section 3.2, Question Filtering: what are all the keywords for principle (1)? What does \\\"questions that align with human preferences and LLMs\\\" (line 200) mean?\", \"Section 3.2, Expert Review: stage 1, what does \\\"easy to evaluate\\\" (line 204) mean? In stage 2, what does \\\"unified instructions\\\" refer to?\", \"Details on metrics lacks elaboration (see below)\", \"Section 5.1, how is \\\"performance variance\\\" measured? What are the values for API-based and open-sourced models?\", \"Section 5.2, how many tasks are there in each difficulty level? How do you define \\\"consistent performance\\\" (line 417-418)?\", \"Section 5.3, how do you define if performance of two datasets \\\"mismatch\\\" (line 431)? Also, the scale of x and y axes of Figure 5 is not matched. Then, what is the dashed blue line? How does it help establish the insight?\", \"Section 6.2, what are the \\\"characteristics of data science tasks\\\" (line 518-519)? What are the \\\"relatively simple data analysis operations\\\" (line 526-527)? Further elaboration is needed to distinguish DataSciBench from existing benchmarks.\", \"5. The fine-grained metrics in Section 4.2 need further justification:\", \"VLM-as-a-judge: Which VLM is used for judgement? What is the \\\"predefined criteria\\\" (line 319-320)? A reasonable evaluation or reference is needed to justify this metric.\", \"Plot Validity: why checking the shape of the matrix can evaluate the quality of plot?\", \"Data Accuracy: how exactly is mean square error measured? Is the output of corresponding tasks normalized to a specific format?\", \"Visualization completeness: What does \\\"checking their existence\\\" mean? If it refers to checking the existence of the output file, I am afraid it is merely a necessary condition for task success and can not measure the quality of the output plot.\", \"Model Accuracy: When is boolean output or decimal used? Why and how can they be unified into a single metric?\", \"Relatedly, in Table 2, what is the \\\"Score\\\" column? Is it an aggregation of all fine-grained metrics (of different type)? 
How is it calculated?\"], \"questions\": \"My main question have been listed in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer amAm\", \"comment\": \"```\\nW1&W2&W3: Concerns on the significance, prompt, and task selection of DataSciBench.\\n```\\nThank you for your question. We appreciate your concern that the tasks may not fully reflect real-world data science complexities. We collected prompts from real-world questions, involving domain knowledge and messy datasets. Please refer to the Appendix for example prompts illustrating this complexity. While DataSciBench may not encompass every aspect of real-world data science, it provides a robust benchmark for core data science abilities. We welcome further discussion on improvements.\\n\\n```\", \"w4\": \"Concerns about the experiments with larger models.\\n```\\nThank you for pointing out the lack of experiments with larger models (e.g., >13B parameters). We appreciate your observation that larger models often demonstrate improved nuanced reasoning and are crucial for evaluating benchmark robustness. Table 2 presents results for CodeLlama-13b-Instruct and StarCoder2-15b. Furthermore, our analysis includes varying sizes of CodeLlama (7B, 13B, and 34B), Deepseek (1.3B, 6.7B, and 33B), and Qwen-2.5 (1.5B and 7B) to investigate the impact of model scale on performance.\\n \\n```\", \"w5\": \"Concerns on the novelty of DataSciBench.\\n```\\nThank you for your question. Thank you for this insightful question about the value proposition of DataSciBench despite its correlation with existing benchmarks. While DataSciBench does show a correlation with previous studies, our benchmark offers several unique and important contributions:\\n1. Domain-Specific Focus:\\nDataSciBench specifically targets data science and analytics tasks. However, existing benchmarks primarily focus on general programming problems. This specialization helps evaluate models' capabilities in handling real-world data analysis scenarios.\\n2. Task Diversity:\\nOur benchmark includes unique task types like data preprocessing, visualization, and statistical analysis. These tasks are underrepresented in current benchmarks. This provides deeper insights into models' data science-specific capabilities.\\n3. Complementary Insights:\\nWhile overall correlations exist, we observe meaningful differences in model rankings. For example, models like Meta-Llama-3-8B-Instruct and CodeLlama-34B-Instruct show distinct performance patterns. These differences highlight capabilities specific to data science tasks that other benchmarks may not capture.\\nThe correlation with existing benchmarks actually validates our evaluation methodology, while our domain-specific focus provides valuable new insights for assessing AI models in data science applications.\\n\\n```\", \"w6\": \"Concerns on the validity of ground truth.\\n```\\nThank you for raising these important concerns. We have implemented a comprehensive quality control process for both the ground truth generation and evaluation scripts.\", \"for_ground_truth_generation\": \"1. We use a self-consistency strategy as the initial mechanism\\n2. These results are then manually verified by multiple authors to ensure accuracy and reliability\"}", "{\"summary\": \"This paper presents a novel benchmark named DataSciBench for evaluating LLMs to assess LLMs data science capabilities on complex tasks. 
It highlights the main drawbacks of previous works: a lack of task diversity, easily obtainable ground truths, and simplistic evaluation metrics. To address these issues, this new benchmark introduces a semi-automated LLM-based pipeline called Task - Function - Code (TFC), which generates ground truths and evaluation metrics for each subtask. They evaluated six APIs, eight open generation models, and nine open-source code generation models.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"\\u2022\\tThis paper is timely, as there have been considerable discussions about current evaluations becoming overly simplistic for modern LLMs.\\n\\n\\u2022\\tThe study is fairly comprehensive, featuring a large evaluation body over various data science tasks, testing across six APIs, eight open generation models, and nine open-source code generation models.\\n\\n\\u2022\\tA new benchmark is appreciated, especially when well-motivated. Some readers may find the new insights from Section 5.1/5.4 valuable. It is indeed rather surprising that StarCoder2/ CodeLlama performed so poorly.\", \"weaknesses\": \"\\u2022\\tThe primary motivation behind this paper is the observation that existing research often relies on easily obtainable ground truths and straightforward evaluation metrics on LLM\\u2019s data science capabilities. The authors surmise that existing benchmarks are lacking as they focus on \\u201cnarrower tasks\\u201d and \\u201cwith easy to obtain ground truth and straightforward evaluation metrics\\u201d (line 045-051). But the examples given, eg MLAgentBench and SWE-Bench does not seems to be particularly \\u201cnarrow\\u201d. Also, easy to obtain ground truth and straightforward evaluation metrics may not always be a bad thing as sometimes they specifically measure a more direct performance of the models.\\n\\n\\u2022\\tThe broad, complex data science concepts that this paper is trying to address are neither easy to define nor quantify. It is unclear if this paper (as presented in its current form) has addressed the issues appropriately. The underlying requirements of the benchmark, as set out by the authors in Lines 067-070, about \\u201cnaturalness\\u201d, \\u201cchallenging\\u201d, \\u201cmulti-hop reasoning\\u201d, and \\u201cdiversity of result types\\u201d, were not specifically addressed in the subsequent design of the benchmark and its metrics. The various fine-grained metrics seemed to still be rather narrow and \\u201cstraightforward\\u201d, and it was not explained how these metrics were calculated for complex data science tasks that this study aims to benchmark. \\n\\n\\u2022\\tIt was unclear if their proposed benchmark is indeed be more sophisticated and trustable/higher-quality than previous works, as there were no comparisons with the related works on data science benchmarking. I was hoping for an in-depth discussion/establishment of the motivation of this benchmark. What sets this benchmark aside from the existing code benchmarks precisely? What exactly are the limitations of the existing code benchmarks that is covered by the DataSciBench?\\n\\n\\u2022\\tExtensive experiments results on both open and closed source models on the proposed benchmark were provided. While the insights may also be useful, they are not particularly surprising, as they mainly reinforce the idea that larger, closed-source models generally perform better compared to the evaluated open-source models. 
The insight from StarCoder2/CodeLlama mentioned in Section 5.1/5.4 is useful but the reasoning behind why it performs badly lacks empirical evidence to support it.\\n\\n\\u2022\\tIn terms of presentation, the missing/vague definitions of key components have made the paper hard to follow, which also raises doubts on the rigor and soundness of the study. For example, \\u201cdata science\\u201d is a broad term, and the main paper did not provide and define the list of data science capabilities that it is aiming to benchmark, and how they can be quantified. The main algorithm, Task-Function-Code (TFC) list, was presented abruptly. What is \\u201cFunction\\u201d with respect to \\u201cTask\\u201d? Since \\u201cCode\\u201d a key component, then shouldn\\u2019t we also consider the coding ability of LLMs? What do \\u201cData Interpreter\\u201d, \\u201cAggregate Function\\u201d, and \\u201cProgrammatic Rules\\u201d in Figure 1 represent? The six typical data science tasks were key to the study but they were \\u201cdefined\\u201d in a very broad and subjective manner. Similar issues for task integration, question filtering, and expert review. Who are the experts? How did they actually review the questions? These key concepts should be defined and explained clearly in the main body of the paper instead of relying on the readers to try to figure out by the examples in the Appendix later. Moreover, the ablation study on tasks with different difficulty levels is not well-motivated or clearly defined. Although the authors categorize tasks as easy, medium, or hard, they do not adequately explain the criteria for these classifications or who is responsible for making these decisions.\", \"questions\": \"\\u2022\\tWhat is the main difference between the coarse-grained metrics presented in this paper and the techniques in Hong et al. (2024) and Chen et al. (2021)? Are the authors applying the concepts from Hong et al. (2024) and Chen et al. (2021) in a different domain? The Success Rate (SR) introduced by Chen et al. (2021) is used to evaluate models for code generation. In line 514, the authors mention that data science evaluation is closely related to code generation. How does one evaluate an LLM\\u2019s data science capability instead of its coding ability?\\n\\no\\t(Chen et al. (2021)) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.\\n\\no\\t(Hong et al. (2024)) Sirui Hong, Yizhang Lin, Bangbang Liu, Binhao Wu, Danyang Li, Jiaqi Chen, Jiayi Zhang, Jinlin Wang, Lingyao Zhang, Mingchen Zhuge, et al. Data interpreter: An llm agent for data science. arXiv preprint arXiv:2402.18679, 2024.\\n\\n\\u2022\\tCould you explain the expert review process in detail?\\n\\n\\u2022\\tA few more detailed questions:\", \"oline191_192\": \"what were the requirements used?\", \"oline_193\": \"what were the few-shot example used? Where did you get the examples? Do you change the few-shot every time you prompt the LLM?\", \"oline_240_241\": \"What is the percentage of prompts you used from BigCodeBench? 
For self-consistency strategy, as it does not guarantee correctness but only improves it, do you use any post generation strategy to ensure that the code you obtain under this is accurate?\", \"otable_2\": \"Any insight on why does CodeLama-13b-Instruct outperforms the rest on VLM by a large margin but is poor in the other metrics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer h1zC\", \"comment\": \"Thanks a lot for acknowledging the strengths of this work as a new benchmark semi-automated pipeline, and comprehensive experiments.\\n\\n```\\nW1&Q1: Concerns on the motivation of Task-Function-Code (TFC).\\n```\\nThank you for raising this important question about the motivation and contribution of the Task-Function-Code (TFC) list structure. The TFC framework was developed to address several critical challenges in automated evaluation of data science tasks:\\n1. Systematic Task Selection:\\nTFC provides a structured approach to identify and categorize key tasks across six established types. This systematic organization ensures comprehensive coverage of essential data science operations and helps maintain evaluation consistency and completeness.\\n2. Standardized Evaluation Metrics:\\nData science tasks often lack standardized evaluation criteria. TFC addresses this by explicitly defining appropriate evaluation functions for each task. For example, data preprocessing tasks require specific metrics that differ from visualization tasks. This standardization ensures fair and consistent assessment.\\n3. Automated Execution Framework:\\nTFC includes executable code components for both tasks and evaluation metrics. This automation significantly improves evaluation efficiency, result reproducibility, and testing scalability.\\n4. Ground Truth Generation:\\nTFC serves as a crucial foundation for establishing ground truth, particularly valuable for complex tasks where ground truth is not readily available, and enables systematic verification and validation of model outputs.\\nOverall, the TFC structure represents a novel contribution by providing a comprehensive framework that bridges the gap between task definition, evaluation criteria, and automated assessment in data science contexts.\\n\\n```\", \"w2\": \"Concerns on the comparison with related benchmarks.\\n```\\nThank you for your question. Thank you for this insightful question about the value proposition of DataSciBench despite its correlation with existing benchmarks. While DataSciBench does show a correlation with previous studies, our benchmark offers several unique and important contributions:\\n1. Domain-Specific Focus:\\nDataSciBench specifically targets data science and analytics tasks. However, existing benchmarks primarily focus on general programming problems. This specialization helps evaluate models' capabilities in handling real-world data analysis scenarios.\\n2. Task Diversity:\\nOur benchmark includes unique task types like data preprocessing, visualization, and statistical analysis. These tasks are underrepresented in current benchmarks. This provides deeper insights into models' data science-specific capabilities.\\n3. Complementary Insights:\\nWhile overall correlations exist, we observe meaningful differences in model rankings. For example, models like Meta-Llama-3-8B-Instruct and CodeLlama-34B-Instruct show distinct performance patterns. 
These differences highlight capabilities specific to data science tasks that other benchmarks may not capture.\\nThe correlation with existing benchmarks actually validates our evaluation methodology, while our domain-specific focus provides valuable new insights for assessing AI models in data science applications.\\n```\", \"w3\": \"Concerns on the experimental result analysis.\\n```\\nThank you for this valuable feedback regarding the depth of our analysis. We have conducted a comprehensive evaluation across different dimensions of model performance:\", \"task_difficulty_analysis\": \"We systematically categorized tasks into three difficulty levels: Easy, Medium, and Hard. The detailed results are presented in Figure 4. This analysis reveals how different models perform across varying complexity levels.\\n\\n```\", \"q2\": \"Concerns on the validity of ground truth.\\n```\\nThank you for raising these important concerns. We have implemented a comprehensive quality control process for ground truth generation.\", \"for_ground_truth_generation\": \"1. We use a self-consistency strategy as the initial mechanism\\n2. These results are then manually verified by multiple authors to ensure accuracy and reliability \\n\\nWe appreciate your feedback and have incorporated these detailed quality control procedures into our revised manuscript to provide better transparency of our methodology.\"}", "{\"title\": \"Response to Reviewer V76c\", \"comment\": \"```\", \"w1\": \"Concerns on the correlation with previous code benchmarks.\\n```\\nThank you for this insightful question about the value proposition of DataSciBench despite its correlation with existing benchmarks. While DataSciBench does show a correlation with LCB/BCB, our benchmark offers several unique and important contributions:\\n1. Domain-Specific Focus:\\nDataSciBench specifically targets data science and analytics tasks. However, existing benchmarks primarily focus on general programming problems. This specialization helps evaluate models' capabilities in handling real-world data analysis scenarios.\\n2. Task Diversity:\\nOur benchmark includes unique task types like data preprocessing, visualization, and statistical analysis. These tasks are underrepresented in current benchmarks. This provides deeper insights into models' data science-specific capabilities.\\n3. Complementary Insights:\\nWhile overall correlations exist, we observe meaningful differences in model rankings. For example, models like Meta-Llama-3-8B-Instruct and CodeLlama-34B-Instruct show distinct performance patterns. These differences highlight capabilities specific to data science tasks that other benchmarks may not capture.\\nThe correlation with existing benchmarks actually validates our evaluation methodology, while our domain-specific focus provides valuable new insights for assessing AI models in data science applications.\\n```\", \"w2\": \"Concerns on the validity of ground truth.\\n```\\nThank you for raising these important concerns. We have implemented a comprehensive quality control process for both the ground truth generation and evaluation scripts.\", \"for_ground_truth_generation\": \"1. We use a self-consistency strategy as the initial mechanism\\n\\n2. These results are then manually verified by multiple authors to ensure accuracy and reliability\\n\\n\\n```\", \"w3\": \"Concerns on the experimental analysis.\\n```\\nThank you for your question. Models fail on coding tasks mainly include the following questions:\\n\\n1. 
Coding errors when solving data science problems using codes. And based on our observation, the main kind of this is execution error. It may be due to different reasons. For example, hallucination on the column name of a CSV file.\\n\\n2. Json format errors. These errors come from the agent framework side, where they use JSON format to wrap up actions, e.g. WriteAnalysis.\\n\\nError cases are shown in the Appendix B. In the future, we can improve models from these aspects. \\n\\n\\n```\", \"q1\": \"Concerns about the total example numbers.\\n```\\nThank you for your question. We conclude the total example number in the last part of Section 3.3 and the number is 222.\"}", "{\"summary\": \"The paper introduces DataSciBench, a benchmark aimed at evaluating the capabilities of Large Language Models (LLMs) in data science tasks. It targets more comprehensive assessment by utilizing complex, more detailed, multi-faceted prompts that involve data cleaning, data analysis, visualization, pattern matching, etc. For evaluation, the authors introduce a semi-automated Task-Function-Code (TFC) pipeline for generating ground truth codes/outputs and evaluating agent performance using LLMs. The benchmark tests six API-based models, eight general open-source models, and nine open-source code generation models, with the key conclusion or insight being API-based models tend to outperform open-source ones.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Comprehensive Experiments**: The design of DataSciBench is comprehensive, encompassing multiple facets of data science tasks with varied complexity levels and multiple open- and closed-source models.\\n2. **Empirical Evaluation**: The semi-automated evaluation approach provides a unified and granular evaluation.\", \"weaknesses\": \"1. **Limited Significance**: While DataSciBench claims to assess data science abilities, the paper does not provide enough evidence that the chosen tasks reflect realistic data science challenges. Real-world data science often requires domain knowledge, iterative hypothesis testing, and adaptability to complex, often messy datasets. In contrast, the tasks presented here appear to lack such depth, instead focusing on simpler, predefined tasks that may not mirror the complexity of real data science workflows.\\n2. **Overly Detailed Task Prompts**: It seems that task prompts provide step-by-step instructions. This makes the setting simpler guiding the model through the steps rather than requiring it to reason through the steps on its own. This detailed prompt shifts the evaluation focus toward correct code generation rather than genuine reasoning and problem-solving, which undermines the goal of assessing data science capability in LLMs. An effective data science benchmark should evaluate a model\\u2019s ability to break down complex tasks independently.\\n3. **Insufficient Transparency in Task Selection**: The selection criteria for the included tasks and prompts are not well-defined. It\\u2019s difficult to assess how representative these tasks are of the real-world data science landscape. Some tasks seem too rudimentary, raising questions about the intended difficulty level and relevance for LLM agents. The paper would benefit from explicitly discussing how these tasks align with the challenges data scientists face in practice.\\n4. 
**Lack of Experiments with Larger Models**: The paper does not include experiments with larger models (e.g., 13B or 70B parameters), which limits the benchmark\\u2019s insights into how model size impacts performance on complex data science tasks. Larger models are typically more capable of handling nuanced reasoning, making them essential for assessing benchmark robustness. \\n5. **Inadequate Novelty**: This work relies heavily on straightforward prompt generation and LLM validation techniques, much like previous code generation benchmarks. The benchmark introduces no fundamentally new types of task paradigms or significant results/insights that would justify its focus as a new data science-specific benchmark. \\n6. **Poor Quality Control**: The semi-automated ground truth generation process raises concerns about quality and reliability. Self-consistency verification without extensive human oversight risks introducing erroneous ground truths, especially for complex tasks.\", \"questions\": \"See weaknesses.\\n\\n*Minor*:\\n1. Could you update Table 1 to add the domain and number of tasks for a more holistic comparison of this work w.r.t previous works?\\n2. In tasks where the prompt outlines the solution steps, how do you account for the model\\u2019s independent reasoning capability as part of its evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Reviewer V76c\", \"comment\": \"Thanks to the authors for their detailed answers. After reviewing the clarifications and considering the perspectives of other reviewers, I still somewhat remain unconvinced about this work's novelty. I stand by my scores.\"}", "{\"title\": \"Response to Reviewer ZPJR\", \"comment\": \"```\", \"w5\": \"Elaboration on key details.\\n```\\nThank you for your feedback on the clarity of our writing. We apologize for the lack of elaboration on several key details. To address your concerns:\\n\\n1. Question Filtering: The keywords used for principle (1) include, but are not limited to, \\\"machine learning\\\", \\\"deep learning\\\", \\\"data preprocessing\\\", and \\\"data visualization\\\". \\\"Questions aligning with human preferences and LLMs\\\" refers to questions solvable by both humans and large language models, avoiding overly specialized or ambiguous queries.\\n\\n\\n2. Expert Review: In Stage 1, \\\"easy to evaluate\\\" signifies tasks with readily discernible correct answers. For example, handing missing values for a data frame. In Stage 2, \\\"unified instructions\\\" refers to a standardized format encompassing input data, input file, prompt, and expected output file.\\n\\n\\n3. Performance Variance (Section 5.1): This metric quantifies the performance difference between API-based and open-source models.\\n\\n\\n4. Task Difficulty (Section 5.2): The number of tasks per difficulty level is: Easy - 167, Medium - 30, and Hard - 25.\\n\\n\\n5. Dataset Mismatch (Section 5.3): A \\\"mismatch\\\" indicates significant performance discrepancies between two datasets with the same model. 
The dashed blue line is used to differentiate the model performance gap between HumanEval and DataSciBench.\\nWe will revise the manuscript to incorporate these clarifications and improve overall clarity.\\n\\n```\", \"w6\": \"Concerns on further justification in Section 4.2.\\n```\", \"vlm_as_a_judge\": \"we present some examples that use claude-3-5-sonnet-20240620, CodeLlama-13B-Instruct, and o1-mini as a judgment in Appendix A.3. The predefined criteria can be found in the Appendix A.3. We have added the hyperlink to that section in Section 4.2.\"}", "{\"title\": \"Response to Reviewer ZPJR\", \"comment\": \"Thanks a lot for acknowledging this work's strengths as a novel, comprehensive benchmark, and comprehensive evaluation.\\n```\\nW1&W2: Concerns on the validity of evaluation scripts and ground truth.\\n```\\nThank you for raising these important concerns. We have implemented a comprehensive quality control process for both the ground truth generation and evaluation scripts.\", \"for_ground_truth_generation\": \"1. We use a self-consistency strategy as the initial mechanism\\n\\n2. These results are then manually verified by multiple authors to ensure accuracy and reliability\", \"regarding_the_evaluation_scripts\": \"1. All LLM-generated evaluation scripts undergo thorough validation through a systematic review process\\n2. Our validation protocol includes:\\n\\n$\\\\bullet$ Manual verification of each evaluation function;\\n\\n$\\\\bullet$ Careful reviews of corresponding prompts;\\n\\n$\\\\bullet$ Assessment of task type categorization and generated code;\\n\\n$\\\\bullet$ Cross-checking by multiple authors.\\n\\nWe appreciate your feedback and have incorporated these detailed quality control procedures into our revised manuscript to provide better transparency of our methodology.\\n\\n```\", \"w3\": \"Concerns about the correlation between LiveCodeBench (LCB) and BigCodeBench (BCB).\\n```\\nThank you for this insightful question about the value proposition of DataSciBench despite its correlation with existing benchmarks. While DataSciBench does show a correlation with LCB/BCB, our benchmark offers several unique and important contributions:\\n\\n1. Domain-Specific Focus:\\nDataSciBench specifically targets data science and analytics tasks. However, existing benchmarks primarily focus on general programming problems. This specialization helps evaluate models' capabilities in handling real-world data analysis scenarios.\\n\\n2. Task Diversity:\\nOur benchmark includes unique task types like data preprocessing, visualization, and statistical analysis. These tasks are underrepresented in current benchmarks. This provides deeper insights into models' data science-specific capabilities.\\n\\n3. Complementary Insights:\\nWhile overall correlations exist, we observe meaningful differences in model rankings. For example, models like Meta-Llama-3-8B-Instruct and CodeLlama-34B-Instruct show distinct performance patterns. These differences highlight capabilities specific to data science tasks that other benchmarks may not capture.\\nThe correlation with existing benchmarks actually validates our evaluation methodology, while our domain-specific focus provides valuable new insights for assessing AI models in data science applications.\\n```\", \"w4\": \"Concerns on the motivation of Task-Function-Code (TFC).\\n```\\nThank you for raising this important question about the motivation and contribution of the Task-Function-Code (TFC) list structure. 
The TFC framework was developed to address several critical challenges in automated evaluation of data science tasks:\\n\\n1. Systematic Task Selection:\\nTFC provides a structured approach to identify and categorize key tasks across six established types. This systematic organization ensures comprehensive coverage of essential data science operations and helps maintain evaluation consistency and completeness.\\n\\n2. Standardized Evaluation Metrics:\\nData science tasks often lack standardized evaluation criteria. TFC addresses this by explicitly defining appropriate evaluation functions for each task. For example, data preprocessing tasks require specific metrics that differ from visualization tasks. This standardization ensures fair and consistent assessment.\\n\\n3. Automated Execution Framework:\\nTFC includes executable code components for both tasks and evaluation metrics. This automation significantly improves evaluation efficiency, result reproducibility, and testing scalability.\\n\\n4. Ground Truth Generation:\\nTFC serves as a crucial foundation for establishing ground truth, particularly valuable for complex tasks where ground truth is not readily available, and enables systematic verification and validation of model outputs.\\n\\nOverall, the TFC structure represents a novel contribution by providing a comprehensive framework that bridges the gap between task definition, evaluation criteria, and automated assessment in data science contexts.\"}", "{\"title\": \"Response to Reviewer 5yBS\", \"comment\": \"Thanks a lot for acknowledging the strengths of this work as a timely paper, comprehensive study, and new insights provided by this benchmark.\\n\\n```\\nW1&W2: Concerns on the primary motivation of DataSciBench on selecting unlabeled data.\\n```\\nThank you for this thoughtful observation about ground truth evaluation approaches. We agree that easily obtainable ground truth and straightforward metrics have their merits and serve important purposes in model evaluation. However, our motivation for DataSciBench stems from addressing common real-world scenarios where evaluation is more challenging:\\n1. Complex Evaluation Scenarios include Data visualization quality assessment, Data modeling result evaluation, Feature engineering effectiveness, and Statistical analysis appropriateness\\n2. Real-world Challenges:\\n\\n$\\\\bullet$ Many data science tasks lack clear-cut evaluation criteria\\n\\n$\\\\bullet$ Subjective elements require more sophisticated evaluation approaches\\n\\n$\\\\bullet$ Multiple valid solutions may exist for a single problem\\n\\n3. Complementary Approach:\\nWe view DataSciBench as complementary to existing benchmarks rather than replacing simple metrics, we aim to address scenarios where:\\n\\n$\\\\bullet$ Ground truth is not readily available\\n\\n$\\\\bullet$ Evaluation requires multi-dimensional assessment\\n\\n$\\\\bullet$ Quality assessment is inherently complex\\n\\nOur benchmark specifically targets these challenging evaluation scenarios while acknowledging the continued value of straightforward metrics where appropriate.\\n\\n```\", \"w3\": \"Concerns about the motivation and limitation of existing code benchmarks (LiveCodeBench (LCB) and BigCodeBench (BCB)).\\n```\\nThank you for this insightful question about the value proposition of DataSciBench despite its correlation with existing benchmarks. While DataSciBench does show a correlation with LCB or BCB, our benchmark offers several unique and important contributions:\\n\\n1. 
Domain-Specific Focus:\\nDataSciBench specifically targets data science and analytics tasks. However, existing benchmarks primarily focus on general programming problems. This specialization helps evaluate models' capabilities in handling real-world data analysis scenarios.\\n\\n2. Task Diversity:\\nOur benchmark includes unique task types like data preprocessing, visualization, and statistical analysis. These tasks are underrepresented in current benchmarks. This provides deeper insights into models' data science-specific capabilities.\\n\\n3. Complementary Insights:\\nWhile overall correlations exist, we observe meaningful differences in model rankings. For example, models like Meta-Llama-3-8B-Instruct and CodeLlama-34B-Instruct show distinct performance patterns. These differences highlight capabilities specific to data science tasks that other benchmarks may not capture.\\nThe correlation with existing benchmarks actually validates our evaluation methodology, while our domain-specific focus provides valuable new insights for assessing AI models in data science applications.\\n\\n```\", \"w4\": \"Concerns about the experimental results.\\n```\\nThank you for your question. We have done some more analysis regarding your question. Models fail on coding tasks mainly include the following questions:\\n1. Coding errors when solving data science problems using codes. And based on our observation, the main kind of this is execution error. It may be due to different reasons. For example, hallucination on the column name of a CSV file.\\n\\n2. Json format errors. These errors come from the agent framework side, where they use JSON format to wrap up actions, e.g. WriteAnalysis.\\n\\nError cases are shown in the Appendix B. In the future, we can improve models from these aspects.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"I appreciate the authors\\u2019 response. However, in addition to the overall presentation quality, I still have the following concerns:\\n\\n1. Task quality\\n\\n I understand the authors have performed manual review to ensure task quality. However, I still doubt the contribution of the collected tasks. **The ground truth programs are generated by self-consistency decoding, which means existing models are already capable of generating them**. I think this weakens the potential new challenges that DataSciBench can contribute, thus limiting the insight that people can conclude from evaluating on it.\\n \\n2. Correlation with existing benchmarks and justification of metrics\\n \\n The authors claim that \\u201cthe correlation with existing benchmarks actually validates our evaluation methodology\\u201d. I think this claim lacks support. **In my opinion, the validity of a benchmark\\u2019s evaluation measures should be justified on its own, rather than using correlation of evaluation results with other benchmarks.** On top of that, the justification for the fine-grained metrics (W6) is still missing.\\n \\n3. Significance of TFC\\n\\n As also pointed out by reviewer h1zC and 5yBS, the motivation, definition, and significance of the TFC data structure remains unclear to me. So far, the authors fail to provide a baseline method that TFC can be compared to. I think one can easily represent any coding task as (metadata, task, evaluation code, solution code) tuples and call it a TFC. 
I expect more explanations on this issue.\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely thank all the reviewers for their thoughtful comments and constructive suggestions, which significantly helped us strengthen our paper. We address reviewers\\u2019 concerns on the motivation of TFC, the difference between DataSciBench and LCB or BCB, experimental result analysis, and the definitions of key concepts. We are happy to share our revised pdf and diff written in blue color to respond to the reviewers\\u2019 feedback.\\n\\nThank you for your time!\\n\\nBest,\\n\\nThe authors of DataSciBench\"}" ] }
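The rebuttals above repeatedly cite a self-consistency strategy, followed by manual verification, as the mechanism for generating ground truth in DataSciBench. A minimal sketch of what such majority-vote selection could look like is given below; the function name `self_consistent_answer` and the generic `generate` callable are illustrative assumptions rather than the authors' actual pipeline.

```python
from collections import Counter
from typing import Callable, List

def self_consistent_answer(generate: Callable[[str], str], prompt: str, n_samples: int = 5) -> str:
    """Sample several candidate outputs for the same prompt and keep the most frequent one."""
    candidates: List[str] = [generate(prompt) for _ in range(n_samples)]
    answer, _count = Counter(candidates).most_common(1)[0]
    # Self-consistency improves reliability but does not guarantee correctness,
    # so the selected answer would still be reviewed manually before being used as ground truth.
    return answer
```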
BltNzMweBY
ESpeW: Robust Copyright Protection for LLM-based EaaS via Embedding-Specific Watermark
[ "Zongqi Wang", "Baoyuan Wu", "Jingyuan Deng", "Yujiu Yang" ]
Embeddings as a Service (EaaS) is emerging as a crucial role in AI applications. Unfortunately, EaaS is vulnerable to model extraction attacks, highlighting the urgent need for copyright protection. Although some preliminary works propose applying embedding watermarks to protect EaaS, recent research reveals that these watermarks can be easily removed. Hence, it is crucial to inject robust watermarks resistant to watermark removal attacks. Existing watermarking methods typically inject a target embedding into embeddings through linear interpolation when the text contains triggers. However, this mechanism results in each watermarked embedding having the same component, which makes the watermark easy to identify and eliminate. Motivated by this, in this paper, we propose a novel embedding-specific watermarking (ESpeW) mechanism to offer robust copyright protection for EaaS. Our approach involves injecting unique, yet readily identifiable watermarks into each embedding. Watermarks inserted by ESpeW are designed to maintain a significant distance from one another and to avoid sharing common components, thus making it significantly more challenging to remove the watermarks. Extensive experiments on four popular datasets demonstrate that ESpeW can even watermark successfully against a highly aggressive removal strategy without sacrificing the quality of embeddings.
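The abstract above contrasts interpolation-based watermark injection with the proposed embedding-specific injection. The sketch below illustrates that difference on a single embedding; the choice of the smallest-magnitude positions and the 15% ratio follow the discussion in the reviews further down, and the function names and exact selection rule are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def interpolation_watermark(e: np.ndarray, e_t: np.ndarray, alpha: float) -> np.ndarray:
    """Prior-style injection: every watermarked embedding shares the same target component."""
    e_p = (1.0 - alpha) * e + alpha * e_t
    return e_p / np.linalg.norm(e_p)

def embedding_specific_watermark(e: np.ndarray, e_t: np.ndarray, alpha: float = 0.15) -> np.ndarray:
    """ESpeW-style injection (sketch): overwrite only a small, embedding-dependent set of
    positions (here, those with the smallest magnitude) with the target embedding's values."""
    k = int(alpha * e.shape[0])
    positions = np.argsort(np.abs(e))[:k]   # embedding-specific watermark positions
    e_p = e.copy()
    e_p[positions] = e_t[positions]         # copy target values only at the selected positions
    return e_p / np.linalg.norm(e_p)        # embeddings are L2-normalized before being returned
```

Because the selected positions differ from embedding to embedding, the watermarked outputs do not share a common additive component, which is the property the abstract credits for robustness against removal.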
[ "NLP", "Copyright Protection", "Watermark", "Backdoor", "Embedding Model" ]
Reject
https://openreview.net/pdf?id=BltNzMweBY
https://openreview.net/forum?id=BltNzMweBY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w8GRponDTk", "s6rzEa9JRv", "qhbhQcv0dw", "oR2Pjm1IpE", "dZtvP0ENFe", "cSSxvH37Zs", "c6zZe3qgVz", "YwsqCm5zI0", "YaS7W63F7Y", "XXjMvTxHlM", "UyVVpIAgan", "SYX6Q7MdRi", "NU2NY8alww", "LhEvhvHpUP", "L7iCnw1Luk", "HtzUeHNOft", "HEvI6XD0VF", "GuH6CXoq91", "D5MYAemsiP", "Cv4zcgE0LS", "65Ies2wFlS", "3z9yeZmkqY" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732692873311, 1732523883447, 1732291535705, 1732692724728, 1732291497215, 1734501246243, 1732692909343, 1730206759477, 1732291371950, 1730512081180, 1730086462345, 1732291397207, 1732291059203, 1732291243331, 1732291201380, 1732398162290, 1737523531969, 1732290787749, 1732290973091, 1732692840175, 1730688274984, 1732692762576 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Reviewer_3mUw" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Area_Chair_MoBF" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Reviewer_zbHr" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Reviewer_qoiC" ], [ "ICLR.cc/2025/Conference/Submission2777/Reviewer_3mUw" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Area_Chair_MoBF" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ], [ "ICLR.cc/2025/Conference/Submission2777/Reviewer_nW4C" ], [ "ICLR.cc/2025/Conference/Submission2777/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer qoiC,\\n\\nThank you again for your time. As the deadline for discussion is approaching, we do wish to hear from you to see if our response resolves your concerns. We are happy to provide any additional clarifications if needed.\"}", "{\"comment\": \"Thank you for the detailed response. It has addressed part of my concerns. I have decided to raise my rating to 5 based on the following concerns.\\n\\n1. **The threat model of this paper is limited**. The method proposed in this paper can only work when the adversary steals the EaaS model and also trains and deploys an EaaS model. In this case, the adversary can easily evade verification by using the embedding model for some downstream tasks.\\n2. **The robustness of the proposed method is not thoroughly verified**. This paper conducts experiments on some simple attacks such as dropout and fine-tuning. 
It may be better for the author(s) to consider a wider range of attacks, particularly adaptive attacks (i.e., the adversary knows the watermarking method and adaptively design a removal method).\"}", "{\"comment\": \"**Q5: (1) It would be better to test whether the watermark can be extracted from the unwatermarked models with different architectures. (2) Additionally, it\\u2019s important to confirm if randomly selected trigger samples and keys can pass the watermark verification. These experiments would better demonstrate the reliability of ESpeW.**\\n\\n**R5:** Thank you for your valuable feedback. We address your concerns from the following two points:\\n\\n**1.** For \\\"more unwatermarked models,\\\" we test the probability of the watermark can be extracted from the original model. To ensure representative results, we select models from the popular embedding model leaderboard, MTEB [1]. These models vary in architecture, model size, and embedding dimension. When setting the trigger set size to 20, which is used in our paper, the the probability of such an occurrence is as follows. It can be observed that **the probability of extracting watermark from unwatermarked models with different architectures is extremely low (less than $10^{-4}$)**.\\n\\n| Model Name | Embedding Dimension | Architecture | FPR |\\n|----------------------------|------------------------|------------------|------------|\\n| jinaai/jina-embeddings-v3 | 572 | XLM-RoBERTa-400M | $10^{-5}$ |\\n| dunzhang/stella_en_1.5B_v5 | 1024 | QWEN2-1.5B | $10^{-5}$ |\\n| OpenAI\\u2019s text-embedding-3 | 1536 | - | $10^{-4}$ |\\n| nvidia/NV-Embed-v2 | 4096 | Mistral-7B | $10^{-4}$ |\\n\\n**2.** For \\\"randomly selected trigger samples and keys can pass the watermark verification\\\", we ensure all the experimental results presented in the paper are obtained under the condition that **the probability of such an occurrence is less than $10^{-4}$.** This is, in fact, the false positive rate we discussed earlier. For further details, **please refer to Q2**.\\n\\n[1] MTEB: Massive Text Embedding Benchmark. https://huggingface.co/spaces/mteb/leaderboard. EACL 2023.\\n\\n**Q6: As shown in Table 1, the utility of the watermarked models is higher than the original models. Since the proposed method replaces part of the output embedding with arbitrary values, the results seem abnormal. Could you provide further analysis on this?**\\n\\n**R6:** Thanks for this concern. Actually, we have already explained this phenomenon in the manuscript (see Line 401)**, and proposed more reasonable evaluation **using the cosine similarity between watermarked embedding and clean embedding.** Original content: *Evaluating embedding quality solely by the performance of downstream tasks is insufficient due to the randomness of DNN training. To better elucidate the influence of watermarks on embeddings, we compute the average cosine similarity between watermarked embeddings and original clean embeddings. Four watermarks are selected for comparison: EmbMarker, WARDEN, ESpeW (randomly selecting watermark positions), and ESpeW (selecting watermark positions with minimum magnitude). As depicted in Figure 3, the embeddings generated by our proposed method exert the least negative impact on clean embeddings, with a change in cosine similarity of less than 1%.* \\n\\nTo be rigorous, if the model undergoes fine-tuning, the cosine similarity may also decrease, but this does not indicate a reduction in embedding quality. 
In our specific context, *i.e.*, evaluating the impact of the watermark on embeddings, using the cosine similarity between watermarked and clean embeddings as a metric is appropriate.\"}", "{\"title\": \"Re: Q1: The adversary may use the embedding model for some downstream tasks.\", \"comment\": \"Thank you for recognizing our response and providing valuable suggestions. We address your concerns below.\\n\\n**Q1: The adversary may use the embedding model for some downstream tasks.**\\n\\n**R1:** The scenario the reviewer mentioned is indeed a valid concern, where adversaries use the embedding model for downstream tasks and deploy downstream models to a server. However, we believe the threat model in our work, i.e., the adversaries steal the provider's EaaS and deploy their own EaaS, is also highly practical and significant for the following reasons:\\n\\n**1. First, the adversary in our threat model directly undermines the original provider\\u2019s profitability and market position.** By offering similar services at lower prices or greater accessibility, adversaries can erode the provider's revenue. Tasks like classification do not present the same level of direct competition. **Moreover, compared to offering specific services (such as specific text classification), providing embeddings has a broader and more practical market.** For instance, popular providers like OpenAI, Cohere, Google, and Mistral all offer embedding services but have not launched specific services, such as text classification. Embedding-as-a-Service (EaaS) represents a more practical scenario in the real world.\\n\\n**2. Second, not all downstream tasks are out of reach for the proposed method.** The proposed method can be applied to tasks that return similarity scores, such as: (1) Similarity calculation. (2) Information retrieval tasks where users can specify a knowledge base, such as paper reading, where target samples can be inserted into the knowledge base. (3) QA matching tasks that return the scores of candidate answers. \\n\\n**3.** Currently, there are some works [1][2] that modify pre-trained language models (PLMs) so that the watermark can be activated after the adversaries fine-tune the PLM on downstream tasks. **However, our scenario is significantly more challenging. We can only manipulate the embeddings used for training and have no control over the initial weights used in the adversaries' fine-tuning.** At present, we do not have an effective solution for such a highly challenging scenario. **However, we also argue that classification tasks do not have a fatal impact on the provider's service, as classification models are typically limited to a very narrow range of applications.**\\n\\nWe sincerely hope that our statement helps clarify **the critical importance of the threat model we adopt** and **expands the potential scope of applications** of our method.\\n\\n[1] PLMmark: A Secure and Robust Black-Box Watermarking Framework for Pre-trained Language Models. AAAI 2023.\\n\\n[2] Backdoor Pre-trained Models Can Transfer to All. CCS 2021.\"}", "{\"comment\": \"**Q4: The resistance of the watermark to fine-tuning.**\\n\\n**R4:** Thank you for your question. We address your concern here:\\n\\nWe focus on unsupervised fine-tuning, as supervised fine-tuning is unsuitable for embedding models; it introduces excessive label information, undermining semantic integrity. 
**To evaluate our method's robustness against fine-tuning attacks, we adopt the unsupervised fine-tuning approach SimCSE [1].** SimCSE uses contrastive learning by applying random dropout masks in the Transformer encoder. Positive samples are created by feeding the same input with different dropout masks, while negative samples come from other sentences in the batch. \\n\\nIn our experiment, we use **the same hyperparameters as [1]: a learning rate of $3 \\\\times 10^{-5}$ and a batch size of 64**. We test on Enron Spam dataset. **Fine-tuning introduces instability in embeddings, causing p-values to inflate abnormally and lose reliability**, particularly with a significantly large epoch number. **So, we use $\\\\Delta \\\\text{cos}$(\\\\%) and $\\\\Delta l_{2}$ (\\\\%) for detection here.** The metrics $\\\\Delta \\\\text{cos}$ (\\\\%) and $\\\\Delta l_{2}$ (\\\\%), as defined in our paper, address this issue effectively. **By adjusting thresholds of $\\\\Delta \\\\text{cos}$ (\\\\%) and $\\\\Delta l_{2}$ (\\\\%), we maintain a false positive rate (FPR) below $10^{-5}$.**\\n\\n**The table below demonstrates that, with the FPR $<10^{-5}$, this approach effectively defends against fine-tuning attacks, even after 100 epochs of fine-tuning.** Considering that the stealing only undergoes 10 epochs, the cost of 100 epochs is significant.\\n\\n| epoch | pvalue | $\\\\Delta \\\\text{cos}$ (\\\\%) $\\\\uparrow$ | $\\\\Delta l_{2}$ (\\\\%) $\\\\downarrow$ | FPR@ 0.05 | FPR@ 0.01 | FPR@1e-3 | FPR@1e-4 | FPR@1e-5 |\\n|-------|--------|------|-------|----------|----------|----------|----------|----------|\\n| 0 | 5.8e-10 | 8.10 | -16.21 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 1 | 1.1e-8 | 18.45| -36.91 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 2 | 1.4e-7 | 11.92| -23.84 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 3 | 1.3e-6 | 9.11 | -18.23 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 4 | 1.4e-7 | 12.42| -24.83 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 5 | 1.1e-3 | 7.91 | -15.81 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 6 | 1.1e-8 | 14.12| -28.24 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 7 | 1.3e-6 | 12.33| -24.66 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 8 | 4.0e-3 | 6.56 | -13.12 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 9 | 4.0e-3 | 4.39 | -8.77 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 10 | 2.7e-4 | 6.21 | -12.42 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 20 | 2.7e-4 | 6.80 | -13.60 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 35 | 0.03 | 5.82 | -11.64 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 50 | 0.08 | 2.21 | -4.42 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n| 100 | 0.34 | 3.60 | -7.19 | \\u2714 | \\u2714 | \\u2714 | \\u2714 | \\u2714 |\\n\\nHere are the thresholds we use. This is validated through 100,000 experiments on unwatermarked models. 
**The values on the right indicate the thresholds of $\\\\Delta \\\\text{cos}$ (\\\\%) and $\\\\Delta l_{2}$ (\\\\%) required to achieve the corresponding FPR on the left.** For instance, at an FPR of $10^{-5}$, the thresholds are $1.09$ for $\\\\Delta \\\\text{cos}$ (\\\\%) and $-4.10$ for $\\\\Delta l_{2}$ (\\\\%).\\n\\n| FPR | Threshold of $\\\\Delta \\\\text{cos}$ (\\\\%) | Threshold of $\\\\Delta l_{2}$ (\\\\%) |\\n|-------------|------------------|-----------------|\\n| $0.05$ | 0.41 | -1.57 |\\n| $0.01$ | 0.59 | -2.32 |\\n| $10^{-3}$ | 0.82 | -3.16 |\\n| $10^{-4}$ | 1.08 | -3.93 |\\n| $10^{-5}$ | 1.09 | -4.10 |\\n\\n[1] SimCSE: Simple Contrastive Learning of Sentence Embeddings. Tianyu Gao, Xingcheng Yao, Danqi Chen, EMNLP 2021.\"}", "{\"metareview\": \"In this work, the authors proposed copyright protection of LLM-based EaaS via Embedding-Specific Watermark. This study is motivated from the authors' claim that ``... it is crucial to inject robust watermarks resistant to watermark removal attacks.'' and observation that ``...Existing watermarking methods typically inject a target embedding into embeddings through linear interpolation when the text contains triggers. However, this mechanism results in each watermarked embedding having the same component, which makes the watermark easy to identify and eliminate.''\\n\\nDuring the rebuttal period, only one reviewer among a total of four has responded to authors' responses.\\nIn particular, the concern of high computation cost, raised by at least two reviewers, has been properly addressed by the authors in terms of Smallest-magnitude vs. Random Selection together with time-complexity analysis. \\n\\nThe main weakness of this work is insufficient robustness evaluation.\\nInitially, the authors were not aware of this critical issue, and did not actively provide comprehensive robustness evaluations.\\nAlthough the authors added more experiments during the rebuttal period, it is a pity that the reviewers may not have the chance to review them. More importantly, it is needed to conduct robustness evaluation as a whole instead of just adding more experiments. For those that were not considered for experiments, it is hard to conclude the robustness of proposed framework.\", \"additional_comments_on_reviewer_discussion\": \"Computation cost and robustness evaluation are two issues that were mainly discussed during the rebuttal period. As commented by Reviewer 3mUw, (s)he was still unsatisfactory about the robustness evaluation.\"}", "{\"comment\": \"Dear Reviewer zbHr,\\n\\nThank you again for your time. As the deadline for discussion is approaching, we do wish to hear from you to see if our response resolves your concerns. We are happy to provide any additional clarifications if needed.\"}", "{\"summary\": \"This paper addresses the vulnerability of Embeddings as a Service (EaaS) to model extraction attacks, emphasizing the urgent need for robust copyright protection. Traditional watermarking methods are easily removed, leading to the development of our novel embedding-specific watermarking (ESpeW) mechanism. ESpeW injects unique, identifiable watermarks into each embedding, making them harder to detect and eliminate. Experiments on popular datasets demonstrate that ESpeW effectively resists aggressive removal strategies without compromising embedding quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and well-organized.\\n2. 
The experimental results demonstrate the robustness of the watermarking method against CSE.\", \"weaknesses\": \"1. The contribution seems limited, as the primary difference from EmbMarker [1] lies only in the embedding equation (Eq. (2)).\\n2. The contributions may be overstated. For contribution 2), this paper may not be the \\\"first to propose\\\" a robust watermark approach against CSE, as WARDEN [2] has already addressed this. For contribution 3), the claim that it is the \\\"**only**\\u00a0method that remains effective\\\" should be restricted to the baselines listed.\\n3. The paper claims that the \\u201cproposed method can inject watermark successfully with a minimum \\u03b1 value of 15%\\u201d. This implies that at least 15% of each embedding's values are directly replaced with the corresponding values from the target embedding e_t, which seems more detectable. Despite the different replacement positions in each embedding, the 15% replacement rate means that some positions will have the same values replaced. By statistically analyzing the frequency of values at each position, e_t might be estimated.\\n4. In section 4.4, the paper states \\\"when the $\\\\alpha$ is set to 100%, our method is almost the same as EmbMarker.\\\" However, when $\\\\alpha$ is set to 100% in Eq. (1), all entries in M are 1, so e_p = e_t. This is significantly different from EmbMarker.\\n\\n[1] Peng, Wenjun, et al. \\\"Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark.\\\" Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023.\\n[2] Shetty, Anudeex, et al. \\\"WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection.\\\" arXiv preprint arXiv:2403.01472 (2024).\", \"questions\": [\"1. WARDEN also claims to be effective against CSE. Why does this paper reach a different conclusion?\", \"2. Minor issues:\", \"Line 175: A period is missing before \\\"Right\\\".\", \"Line 268: The comma position is incorrect \\\"(,i.e., ...\\\".\", \"The citation format in the text is incorrect, affecting readability. Please use \\\\citep and \\\\citet correctly.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q1: I think a (maybe more) practical scenario is that the adversary steals the model and uses the model for a downstream task. In such a scenario, the defender can get the final output predictions instead of the embeddings. Is ESpeW still effective?**\\n\\n**R1:** Thank you for your insightful comment. Here are our responses:\\n\\n**1. EaaS is a practical threat model. Many organizations have already started offering EaaS, such as GPT-3 text-embedding-002 API from OpenAI [1], mistral-embed from Mistral [2], Gemini text-embedding-004 from Google [3], and Embed of Cohere [4], etc.** Existing work also accepts this attack setting [5][6][7]. So, developing watermarks for EaaS is practically meaningful. \\n\\n**2. 
As for extending our watermark to output predictions [8][9], we have some initial ideas.** For example, when access to confidence scores of top-K labels is available, we could potentially insert watermark samples that do not change the top-1 prediction but influence the confidence distribution of other labels, which would make the watermark more imperceptible.\\n\\n**However, adapting our method to the output predictions introduces several key challenges**: (1) The **discrete nature** of predictions limits the flexibility of watermark insertion, (2) The **compression of information** from embeddings to predictions reduces the capacity for embedding a watermark, and (3) The statistical properties of prediction distributions present **additional complexities**, necessitating further adjustments to our approach.\\n\\nAlthough some principles from our embedding-based watermarking method may still be relevant, applying ESpeW to output predictions requires **substantial modifications** to the methodology. These modifications involve **rethinking how watermarks interact with discrete outputs**, managing the **information compression** of output labels, and addressing **more attacks targeting prediction-level watermarks**. Tackling these challenges is far beyond the scope of this paper, but we regard them as promising opportunities for future work.\\n\\n[1] OpenAI, https://openai.com/index/new-and-improved-embedding-model.\\n\\n[2] Mistral, https://docs.mistral.ai/capabilities/embeddings.\\n\\n[3] Google, https://ai.google.dev/gemini-api/docs/embeddings.\\n\\n[4] Cohere, https://cohere.com/embed.\\n\\n[5] Stolenencoder: stealing pre-trained encoders in self-supervised learning. CCS 2022.\\n\\n[6] Are you copying my model? protecting the copyright of large language models for eaas via backdoor watermark. ACL 2023.\\n\\n[7] WARDEN: Multi-directional backdoor watermarks for embedding-as-a-service copyright protection. ACL 2024.\\n\\n[8] PLMmark: A Secure and Robust Black-Box Watermarking Framework for Pre-trained Language Models. AAAI 2023.\\n\\n[9] Watermarking Pre-trained Language Models with Backdooring. Arxiv 2022.\"}", "{\"summary\": \"The paper introduces a novel watermarking technique (ESpeW) designed to protect IP for EaaS provided by LLMs. ESpeW aims to counteract model extraction attacks by embedding unique, hard-to-remove watermarks into each embedding instance. Unlike the existing methods that inject uniform watermark components across all embeddings, ESpeW uses selective, distinct watermark placements, making it more resilient against removal attacks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. ESpeW\\u2019s approach of using embedding-specific, non-uniform watermark placements addresses significant vulnerabilities in traditional methods. By ensuring that each embedding\\u2019s watermark location is unique, ESpeW can reduce the risk of watermark identification and removal, achieving robustness against targeted removal attacks.\\n2. The selective embedding technique of ESpeW allows the watermarks to remain mostly imperceptible, preserving the original embedding quality and minimizing any adverse effect on downstream task performance. \\n3. The paper provides a clear framework for assessing key watermark properties, such as harmlessness, persistence, and resistance to permutation and unauthorized detection. 
The inclusion of metrics like cosine similarity, L2 distance, and the Kolmogorov-Smirnov (KS) test strengthens the credibility of ESpeW\\u2019s evaluation process.\", \"weaknesses\": \"1. ESpeW's robustness depends heavily on the confidentiality of the target embedding (used as a private key). If this target embedding were compromised, attackers could potentially reverse-engineer the watermark positions.\\n2. While ESpeW achieves robustness through selective watermark embedding, identifying the smallest-magnitude positions in each embedding may be computationally intensive for large-scale implementations. \\n3. The method may be model-specific since different models can produce embeddings with varying distributions and magnitudes.\", \"questions\": \"1. To address the computational costs associated with selective position identification, the authors could consider evaluating approximate methods, such as random position selection or grouping embeddings with similar magnitude distributions, to balance efficiency and robustness.\\n2. Given ESpeW\\u2019s reliance on the confidentiality of the target embedding, discussing fallback mechanisms (e.g., embedding renewal or multiple target embeddings) could improve the resilience of the approach under various threat models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a model watermarking method to protect the Embedding-as-a-Service (EaaS) model from model stealing. Specifically, this paper first selects various tokens. The sentences containing these tokens are regarded as trigger samples. If one of the trigger samples is input into the model, this paper proposes to replace part of the trigger sample's embedding with predefined values. Once an adversary utilizes the watermarked embeddings to train a model, the owner of the EaaS model can verify the ownership by validating whether the output embeddings of the trigger samples are more similar to the predefined values than the benign samples.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method is robust to the removal attack, CSE.\\n2. This paper is generally easy to read.\", \"weaknesses\": \"1. **About the threat model:** This paper assumes that the adversary steals the EaaS model and trains its own EaaS model. In this case, the defender can get the output embeddings. However, I think a (maybe more) practical scenario is that the adversary steals the model and uses the model for a downstream task. In such a scenario, the defender can get the final output predictions instead of the embeddings. Is ESpeW proposed in this paper still effective in this scenario?\\n2. **About the hypothesis test:** This paper proposes to use the hypothesis test to validate whether the distributions of the cosine similarity values in set $C_b$ and $C_n$ are consistent. However, if two benign datasets that are not identically distributed are selected, it is also likely to reject the null hypothesis. Therefore, the reliability of the hypothesis test proposed in this paper is doubtful.\\n3. **About the robustness against permutation**: The authors claim that their method can resist permutation and they conduct experiments to prove so. However, I did not find the experimental results of this attack in the section of experiments (if I miss such an experiment, please kindly notify me). \\n4. 
**About the robustness against model-based attacks**: This paper only considers attacks that modify the output embeddings of the victim model. However, the adversary may also conduct model-based attacks. For instance, the adversary can adaptively design a loss function and fine-tune its model to remove the watermark inside the model.\\n5. **About the reliability**: This paper only tests whether the watermark can be extracted from the original model. It may be better for the authors to test more unwatermarked models with different architectures. Also, it is also necessary to further confirm whether it is possible for the randomly selected trigger samples and keys to pass the watermark verification. These experiments may comprehensively demonstrate the reliability of ESpeW.\\n6. **About the experimental results:** As shown in Table 1, the utility of the watermarked models are even higher than the original model. Considering that the proposed method is to replace part of the output embedding with arbitrary values and reduce the available features in the embedding, I think the results are abnormal. It may be better to provide a further study or analysis on the results.\", \"questions\": \"Please address the concerns in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q2: If two benign datasets that are not identically distributed are selected, it is also likely to reject the null hypothesis. Therefore, the reliability of the hypothesis test proposed in this paper is doubtful.**\\n\\n**R2:** Thank you for your thoughtful consideration and valuable feedback. Here are our responses:\\n\\n**1. We analyze and verify that the false positive rate (FPR) is correlated with the trigger set size. FPR means the ratio of non-watermarked models are mistakenly identified as watermarked.** To illustrate this, consider an extreme case where the trigger set size is set to just 1. In this case, the embeddings of watermarked texts show high semantic similarity due to the shared token, causing them to cluster too closely. As a result, even if a watermark has not been successfully injected, the watermarked and non-watermarked embeddings still exhibit differences, leading to the incorrect conclusion that the embeddings are watermarked. \\n\\n**2. For a more rigorous evaluation**, we adopt another metric FPR@$f$, which is **widely used in text watermark. FPR@$f$ indicates that the watermark are evaluated under the constraint that the FPR is lower than a threshold $f$.** FPR@$f$ is highly suitable for our task because it allows us to evaluate the performance of the watermark under a fixed FPR. In the following table, we present the relationship between trigger set size and FPR. To ensure reliability, we conduct 100,000 repeated experiment for each size. All other parameters are same as in our work. Specifically, the verify dataset size is 40, the verify sentences' length is 20, the max trigger number in one sentence is 4. We can see that when the trigger set size is set to 4, we can only ensure an FPR of less than 0.3891. 
**However, when the trigger set size is increased to 20 (i.e., the setting used in our paper), we can ensure an FPR of less than $10^{-4}$.** \\n\\n| Trigger Set Size | FPR |\\n|------------------|----------|\\n| 4 | <0.3891 |\\n| 6 | <0.0239 |\\n| 8 | <0.0044 |\\n| 10 | <0.0013 |\\n| 20 | $<10^{-4}, \\\\ge10^{-5}$ |\\n| 30 | $<10^{-4}, \\\\ge10^{-5}$ |\\n| 40 | $<10^{-4}, \\\\ge10^{-5}$ |\\n| 50 | $<10^{-4}, \\\\ge10^{-5}$ |\\n\\n**The above analysis demonstrates that our experimental results are reliable**, since all experiments in our paper ensure FPR@$10^{-4}$, which is a **sufficiently small value**.\\n\\n**Q3: Experiments about robustness against permutation.**\\n\\n**R3:** Thanks. All our experiments are conducted under permutation attack by default. **We have relevant statement in our paper (Line 834).** Original content: *To illustrate that all methods exhibit the Persistence-to-Permutation property described in Section 3.2, we assume that the stealer will apply the same permutation rule to all provider\\u2019s embeddings before training the stealer\\u2019s model.*\"}", "{\"comment\": \"Thank you for your time and valuable comments, which will help improve our paper. We address your questions as follows:\\n\\n**Q1: The contribution seems limited, as the primary difference from EmbMarker lies only in the embedding equation.**\\n\\n**R1:** Thank you for your valuable feedback. In fact, **in the watermark field, the core difference often lies in the watermark injection algorithm, and it's common for different approaches to follow similar steps.** For instance, both EmbMarker and WARDEN follow this common framework. Our method stands out due to the following key points:\\n\\n1. **The primary motivation for developing embedding-specific watermarks is fundamentally different from EmbMarker (Figure 2 in paper for illustration).** Unlike traditional fragile watermarking methods like EmbMarker, which injects **an embedding-shared watermark** into all embeddings and is therefore **easier to identify and eliminate**, our approach focuses on **embedding-specific watermarks** that are **much harder to identify and eliminate**. This shift in focus highlights the uniqueness of our contribution and its importance in advancing the applicability of watermarking in EaaS.\\n2. **Our method demonstrates robustness against the watermark removal method CSE, whereas EmbMarker does not (Table 1 in paper).** This highlights the practicality of our approach for real-world deployment, ensuring that our method can effectively preserve watermark integrity even in adversarial scenarios. **This represents the fundamental distinction from EmbMarker.**\\n3. **Our method induces minimal distortion (less than 1%) to the clean embedding, whereas EmbMarker modifies approximately 3% (Figure 3 in paper).** Given that embedding quality is crucial for EaaS providers, this improvement is highly significant.\\n4. 
**The core of a watermark lies in the injection mechanism, and our ESpeW method introduces a novel embedding-specific injection mechanism that we believe provides a significant advancement in the area.** While other methods may follow similar high-level procedures, our unique injection mechanism enhances robustness while minimizing alterations to the embedding, which we believe a meaningful step forward in watermarking technology.\\n\\nIn summary, while the research problem and tasks in watermarking are aligned with previous works, our approach offers a new perspective on robust watermarking with practical, real-world benefits. **We believe our work fills a crucial gap in copyright protection for EaaS applications.** Thank you once again for your feedback, and we hope this clarifies the novelty and significance of our contributions.\\n\\n**Q2: The contributions may be overstated. This paper may not be the \\\"first to propose\\\" a robust watermark approach against CSE, as WARDEN [2] has already addressed this.**\\n\\n**R2:** Thank you for pointing out these important issues. We appreciate your suggestion and thoroughly analyze the matter. First, we conclude that **WARDEN can not actually defense against CSE.** The misleading results in WARDEN arise from their experimental setup, where they increase the number of watermarks to 5 but keep the total trigger set size at 20, resulting in each watermark having only 4 triggers. **Their use of an excessively small trigger set size leads to false positives(verified in point 1, 2). False Positive (FP) means non-watermarked models are mistakenly identified as watermarked.** Consequently, when they evaluate WARDEN under the CSE attack, even though CSE has already removed the watermark, they still mistakenly conclude that the watermark exists due to false positives (verified in point 3). Also, we verify all experiments in our paper ensure a very low false positive rate (FPR) of $10^{-4}$ (verified in points 2, 4). **For rigorous, we have stated in revision paper that our method is the only one robust under the listed baselines** by adding *'To the best of our knowledge'* (Line 075).\\n\\n`Due to limited space, we kindly invite you to refer to the next block for more details.`\"}", "{\"comment\": \"**Q3: At least 15% of each embedding's values are directly replaced with the corresponding values from the target embedding e_t. By statistically analyzing the frequency of values at each position, e_t might be estimated.**\\n\\n**R3:** Thank you. You mention an adaptive attack based on statistical analysis. We address your question from two points:\\n\\n**1. The statement of direct replacement is not accurate. Note that we perform a normalization operation on the embedding before returning it, which changes the values of the embedding. After normalization, the same watermarked positions in the embedding no longer have the same values.** Note that the Provider's EaaS normalizes the embedding before returning it. This means the embedding is divided by its L2 norm (a common technique used in embedding processing). This normalization process ensures that, even though we add the same value to the same positions in the embedding, after normalization, the values at those positions are no longer the same. Therefore, in fact, it is chanlleging to conduct the statistical analysis attack.\\n\\n**2. 
Experimental results demonstrate that statistical analysis attacks will not succeed unless watermark quality is degraded to as low as 64.78% or even 28.35% of the original.** We first provide a detailed description of the statistical analysis attack here.\\n\\n1. Assume that the training set of the stealer is $D_c \\\\in \\\\mathbb{R}^{N \\\\times M}$, and for a specific index $i$ of embedding, the corresponding array is $DE_i \\\\in \\\\mathbb{R}^N$. \\n2. Set a small tolerance level $T$, and using this tolerance as the step size to partition $DE_i$ and count the number of elements in each partition.\\n3. Initialize $SE = \\\\{\\\\}$. Then, add the partition with the highest number of elements to $SE$. This is because, when the tolerance is set to a particularly small value, if the watermark values cluster, these watermark values are likely to cluster within a specific partition and its neighboring partitions. Next, we add these $N_T$ neighboring partitions around the clustered partition to $SE$.\\n4. Calculate the upper and lower bounds of $SE$, and set the numbers within this interval to $0$. \\n5. Repeat steps 1-4 for all indices $i$.\\n6. Normalize the obtained embedding.\\n\\nThrough this algorithm, we can identify the abnormally clustered values, thereby carrying out the statistical analysis attack. In our experiments, we fix $T$ to a small value $10^{-4}$ and test the attack performance with varying $N_T$. Since the SAA operation only have negative affect on embedding quality, we can use cos-clean only (the cosine similarity between the embedding and clean embedding) to measure watermark quality. All other parameters the same as in our paper. The results are as follows:\\n\\n| $N_T$ | p-value\\u2193 | \\u2206cos(%) \\u2191 | \\u2206l2(%) \\u2193 | cos-clean (embedding quality) \\u2191 |\\n|------------------|-------------|---------------|----------------|---------------|\\n| 1 | 5.80E-10 | 7.85 | -15.69 | 0.9887 |\\n| 5 | 5.80E-10 | 7.84 | -15.69 | 0.9815 |\\n| 10 | 5.80E-10 | 7.36 | -14.71 | 0.9738 |\\n| 20 | 5.80E-10 | 5.99 | -11.99 | 0.9576 |\\n| 30 | 1.13E-08 | 5.67 | -11.34 | 0.9419 |\\n| 100 | 5.80E-10 | 7.95 | -15.91 | 0.8276 |\\n| 200 | 0.001116 | 7.36 | -14.73 | 0.6478 |\\n| 250 | 0.033541 | 5.24 | -10.48 | 0.5481 |\\n| 300 | 0.012299 | 2.22 | -4.44 | 0.4511 |\\n| 350 | 0.012299 | -7.27 | 14.54 | 0.3620 |\\n| 400 | 0.003967 | -9.99 | 19.98 | 0.2835 |\\n\\nThe results show that with $N_T$ set to 200, p-value based detection becomes ineffective in identifying watermarks, while the watermark quality degrades to 64.78% of its original level. But in this situation, the \\u2206cos and \\u2206l2 is still high, which can be used to detect watermark. When $N_T$ is set to 200, our watermark are ineffective with an embedding quality of 45.11%.\\n\\n**Q4: When \\u03b1 is set to 100% in Eq. (1), the proposed method is significantly different from EmbMarker.**\\n\\n**R4:** Thank you for your valuable feedback. When $\\\\alpha=1$, our method replaces entirely the original embedding with the target embedding instead of same as EmbMarker. We have made the revisions for more rigorous expression (Line 426).\\n\\n**Q5: Typo issue.**\\n\\n**R5:** Thank you; we will make sure to correct it thoroughly.\"}", "{\"comment\": \"`Continuing from the previous block (Q2).`\\n\\nBelow, we present a **step-by-step analysis** to support our claims:\\n\\n**1. 
We first preliminarily verify that small trigger set sizes will lead to false for WARDEN.** We re-test WARDEN with different total trigger set size **on unwatermarked model**. Our experiments are conducted using the **official open-source code of WARDEN**, ensuring that our results can be easily verified. All other parameters are the same as theirs. We can see that, **following their setting, where the total trigger set size is 20, false positives occur.** In fact, the authors of WARDEN already mention the issue of high false positives in their paper, but they do not find the underlying reason for the false positives is because the small trigger set size. \\n\\n| Trigger Set Size (Total for 5 watermarks) | p-value | \\u2206cos(%) | \\u2206l2(%) | COPY? | False Positive? |\\n|------------------|----------|---------------|---------------|-------|-------|\\n| 20 | 10^-10 | 0.17 \\u00b1 0.43 | -0.34 \\u00b1 0.42 | yes | yes |\\n| 50 | 10^-8 | 1.27 \\u00b1 0.16 | -2.53 \\u00b1 0.42 | yes | yes |\\n| 100 | 0.0003 | 0.04 \\u00b1 0.11 | -0.09 \\u00b1 0.43 | no | no |\\n| 150 | 0.0011 | -0.35 \\u00b1 0.13 | 0.69 \\u00b1 0.50 | no | no |\\n| 200 | 0.0011 | 0.08 \\u00b1 0.28 | 0.50 \\u00b1 1.89 | no | no |\\n\\n**2. We analyze and verify that the false positive rate (FPR) is correlated with the trigger set size. FPR means the ratio of non-watermarked models are mistakenly identified as watermarked.** To illustrate this, consider an extreme case where the trigger set size is set to just 1. In this case, the embeddings of watermarked texts show high semantic similarity due to the shared token, causing them to cluster too closely. As a result, even if a watermark has not been successfully injected, the watermarked and non-watermarked embeddings still exhibit differences, leading to the incorrect conclusion that the embeddings are watermarked. \\n\\nFor a more **rigorous evaluation**, we adopt another metric FPR@$f$, which is **widely used in text watermark. FPR@$f$ indicates that the watermark are evaluated under the constraint that the FPR is lower than a threshold $f$.** This metric is highly suitable for our task because it allows us to evaluate the performance of the watermark under a fixed FPR. In the following table, we present the relationship between trigger set size and FPR. To ensure reliability, we conduct 100,000 repeated experiment for each size. Other parameters that might influence FPR are set as follows: the verify dataset size is 40, the verify sentences' length is 20, the max trigger number in one sentence is 4, and the model is considered to have a watermark if the p-value is less than $10^{-3}$. **We can see that when the trigger set size is set to 4, we can only ensure an FPR of less than 0.3891. However, when the trigger set size is increased to 20, we can ensure an FPR of less than $10^{-4}$.**\\n\\n| Trigger Set Size | FPR |\\n|------------------|----------|\\n| 4 | <0.3891 |\\n| 6 | <0.0239 |\\n| 8 | <0.0044 |\\n| 10 | <0.0013 |\\n| 20 | $<10^{-4}, \\\\ge10^{-5}$ |\\n| 30 | $<10^{-4}, \\\\ge10^{-5}$ |\\n| 40 | $<10^{-4}, \\\\ge10^{-5}$ |\\n| 50 | $<10^{-4}, \\\\ge10^{-5}$ |\\n\\n**3.** Based on the above analysis, we **formally demonstrate here that WARDEN cannot actually defend against CSE** by setting trigger set size to 20 for each watermark (total 100 for 5 watermarks). **This ensures that the FPR is lower than $10^{-4}$.** This test is conducted on **watermarked models**, with all other parameters are the same as theirs. 
The results show that when K (hyper-parameter of CSE) is greater than or equal to 50, **WARDEN fails to extract the watermark, i.e, fails to defense against CSE**.\\n\\n| K(CSE) | ACC(%) | p-value | \\u2206cos(%) | \\u2206l2(%) | COPY? |\\n|--------|---------|-----------|------------|-------------|-------|\\n| 0 | 94.15 | 10^-11 | 14.16 | -28.31 | yes |\\n| 1 | 93.46 | 10^-11 | 88.86 | -177.72 | yes |\\n| 5 | 93.01 | 10^-6 | 18.98 | -37.97 | yes |\\n| 50 | 89.68 | 0.0122 | 5.78 | -11.56 | no |\\n| 100 | 87.39 | 0.0040 | -7.69 | 15.39 | no |\\n| 1000 | 82.00 | 0.0040 | 8.49 | -16.98 | no |\\n\\n\\n**4. The results of our paper are reliable, since all experiments in our paper ensure FPR@$10^{-4}$, which indicates the FPR is lower that $10^{-4}$.** In our paper, the experiment sets the trigger set size for each watermark to 20. According to the table above, we can see that when the trigger set size is 20, we actually meet FPR@$10^{-4}$. $10^{-4}$ is a sufficiently small value.\"}", "{\"comment\": \"Dear Reviewers,\\nThe authors have responded to your valuable comments.\\nPlease take a look at them!\\n\\nBest,\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for recognizing our work and providing valuable suggestions. We address questions and concerns below.\\n\\n**Q1: High computational load to select positions with the lowest magnitudes.**\\n\\n**R1:** As we claimed in our paper (Line 530), we have provided an alternative approach, i.e., random selection, to reduce computational load. In the following, we present a **systematic analysis** about comparison of random selection and smallest-magnitude selection:\\n\\n**(1) We first give a detailed description of Random Selection algorithm.** Direct random selection is not ideal, as the watermarked positions for the same sentence can vary across queries. An attacker could exploit this by using multiple queries to detect or remove watermark. To address this, we use hash value of embedding as a seed, ensuring consistent position selection. Below is the algorithm: \\n\\n * Convert the original embedding $e_o$ to byte format;\\n * Generate a random seed using the SHA-256 hash of the byte format of $e_o$;\\n * Select random indices based on the generated random seed. These indices are watermark positions.\\n\\n**(2) We then evaluate the time consumption of Smallest-magnitude and Random Selection through both analysis and experiments.**\\n\\n**For analysis:** we conduct the time complexity analysis:\\n\\n* **Smallest-magnitude Selection:** Using heap sort for the top-k problem is the most common approach, achieving a time complexity of $O(N \\\\log k)$. Thus, the total time complexity of Smallest-magnitude Selection is $O(N \\\\log k)$. \\n* **Random Selection:** Converting $e_o$ to byte format requires $O(N)$, SHA-256 hashing also takes $O(N)$, and selecting random indices needs $O(k)$. Therefore, the total time complexity of Random Selection is $O(2N + k)$. \\n\\nConsidering the high-dimensional nature of embeddings, random selection typically has a much lower time complexity than smallest-magnitude selection.\\n\\n**For experiments:** To test time consumption, we use two popular embedding models: NV-Embed-v2 (based on Mistral-7B) and Stella (based on Qwen2-1.5B). We measure time for 2,000 generations, repeating the experiment 5 times to reduce randomness. 
Experiments run on Ubuntu 18.04 with an AMD EPYC 7Y83 64-Core CPU and a 4090 GPU.\\n\\n| **Model** | **Model Size** | **Embedding Size** | **Inference Time (ms)** | **Smallest-magnitude Selection Time (ms)** | **Random Selection Time (ms)** |\\n|-------------------|----------------|--------------------|-------------------------|--------------------------------------|--------------------------------|\\n| **Stella** | 1.5B | 1024 | 4371.80 \\u00b1 204.80 | 716.30 \\u00b1 1.50 | 31.49 \\u00b1 0.40 |\\n| **NV-Embed-v2** | 7B | 4096 | 13799.46 \\u00b1 459.30 | 3761.18 \\u00b1 276.59 | 86.33 \\u00b1 0.49 |\\n\\n**(3) Below is the comparison of watermark performance using smallest-magnitude and random selection.** We report p-value, \\u2206cos, and \\u2206l2 for detection capability, and cosine similarity to assess embedding quality. We set $K = 50$. The other parameters are the same as in the paper.\\n\\n| Dataset | Method | p-value \\u2193 | \\u2206cos(%) \\u2191 | \\u2206l2(%) \\u2193 | cos(%) w/o \\u2191 |\\n|-------------|----------|---------------|------------------|------------------|-----------|\\n| SST2 | Minimum | $10^{-11}$ | 65.11 | -130.23 | 99.19 |\\n| | Random | $10^{-11}$ | 72.81 | -145.62 | 98.87 |\\n| MIND | Minimum | $10^{-11}$ | 72.14 | -144.28 | 99.23 |\\n| | Random | $10^{-11}$ | 77.27 | -154.55 | 98.69 |\\n| AGNews | Minimum | $10^{-10}$ | 21.83 | -43.65 | 99.27 |\\n| | Random | $10^{-11}$ | 53.13 | -106.27 | 98.97 |\\n| Enron Spam | Minimum | $10^{-10}$ | 47.75 | -95.5 | 99.21 |\\n| | Random | $10^{-11}$ | 68.38 | -136.75 | 98.92 |\\n\\n**In summary, above analyses and experiments persent that both smallest-magnitude selection and random selection have their irreplaceable advantages and suited to their respective application scenarios:**\\n* **Smallest-magnitude selection significantly benefits preserving embedding quality, with modifications to clean embeddings under 1%.** This is crucial for real-world scenarios where organizations aim to achieve higher rankings on leaderboards to promote their products while protecting their copyright.\\n* **Random selection, in contrast, though sacrificing more embedding quality, saves considerable time**, making it more suitable for product deployment.\\n\\nWe believe that **both approaches are meaningful**, and users can choose between them based on their specific application scenarios. We have included these analyses and experiment results in revised paper (Line 859). We're open to further feedback.\"}", "{\"comment\": \"Thank you for carefully reading our paper. We thank the reviewer for the constructive comments and suggestions. We address your concerns below:\\n\\n**Q1: If this target embedding (private key) was compromised, attackers could potentially reverse-engineer the watermark positions.**\\n\\n**R1:** Thank you for your comment. We would like to explain from the following two points.\\n\\n1. **The private position is hard to infer.** The EaaS provider would normalize embeddings before returning the embedding, ensuring added values at same positions are no longer same. This makes it challenging to pinpoint watermark positions even if the key is compromised.\\n2. **Key leakage risks and strategies.** The leakage risk mainly comes from security vulnerabilities such as poor storage, insecure transmission, or insider leaks. Mitigation strategies: (1) Regularly renew the key. (2) Use multiple keys to limit impact. (3) Audit and monitor access. (4) Encrypt storage and transmission. 
(5) Limit employee access.\\n\\nWe have added the discussion about dealing with privacy key leakage to revised version. (Line 1301)\\n\\n**Q2: High computational load to select positions with the lowest magnitudes.**\\n\\n**R2:** Thank you for your insightful question. We would like to refer you to **our response to reviewer nW4C in R1** for a **systematic analysis** about comparison of random selection and smallest-magnitude selection. Or see the revised paper (Line 859).\\n\\n**Q3: The method may be model-specific since different models can produce embeddings with varying distributions and magnitudes.**\\n\\n**R3:** Thank you for your question. We clarify it from three aspects:\\n\\n1. **Our method is model-agnostic.** The mechanism of our method is independent from model and can be applied to any EaaS system.\\n2. **Our watermark is specific to the embedding instead of model.** The watermark added to each embedding is unique. This means that the watermark is not a fixed pattern but is instead dynamically generated based on the properties of each individual embedding. By binding the watermark to the specific embedding, we ensure that the watermarked embeddings are more robust against removal attack, as there is no universal watermark template that can be easily extracted or removed.\\n3. **To strengthen our point, we apply our watermark to more models to verify its effectiveness.** We select two additional embedding models: NV-Embed-v2 (the top model in the MTEB Leaderboard, developed by Nvidia with Mistral-7B, embedding dimension 4096) and Stella-1.5B-V5 (the top 1.5B model in MTEB, based on Qwen2-1.5B, embedding dimension 1024). We also put test on GPT-3 text-embedding-002 API (embedding dimension 1536) here for comparison. Using the Enron spam dataset and $K=50$, we evaluate watermark performance with different $\\\\alpha$, keeping other parameters the same as in our main experiment.\\n\\n| $\\\\alpha$ | ACC(\\\\%) | p-value \\u2193 | \\u2206cos(%) \\u2191 | \\u2206l2(%) \\u2193 |\\n|-------|-------------|-----------|---------------|---------------|\\n| **Stella** | | | | |\\n| 0.05 | 95.69 | 9.55E-06 | 13.12| -26.23 |\\n| 0.1 | 95.81 | 1.13E-08 | 27.02| -54.04 |\\n| 0.15 | 95.99 | 1.13E-08 | 36.62| -73.24 |\\n| 0.2 | 95.39 | 5.80E-10 | 47.30| -94.60 |\\n| 0.25 | 95.99 | 5.80E-10 | 56.77| -113.54 |\\n| 0.3 | 95.99 | 5.80E-10 | 62.31| -124.62 |\\n| 0.6 | 95.32 | 9.55E-06 | 10.45| -20.89 |\\n| **GPT** | | | | |\\n| 0.05 | 95.85 | 5.57E-05 | 10.89| -21.78 |\\n| 0.1 | 95.50 | 1.43E-07 | 20.59| -41.17 |\\n| 0.15 | 95.50 | 5.80E-10 | 31.25| -62.49 |\\n| 0.2 | 95.45 | 5.80E-10 | 44.70| -89.40 |\\n| 0.25 | 95.15 | 5.80E-10 | 51.01| -102.03 |\\n| 0.3 | 95.50 | 1.45E-11 | 61.91| -123.82 |\\n| 0.6 | 95.75 | 9.55E-06 | 17.63| -35.26 |\\n| **NV-Embed** | | | | |\\n| 0.05 | 96.20 | 2.70E-04 | 9.04 | -18.08 |\\n| 0.1 | 96.10 | 1.13E-08 | 23.90| -47.79 |\\n| 0.15 | 95.70 | 5.80E-10 | 40.56| -81.13 |\\n| 0.2 | 95.90 | 1.45E-11 | 52.08| -104.17 |\\n| 0.25 | 96.25 | 1.45E-11 | 65.99| -131.98 |\\n| 0.3 | 95.95 | 1.45E-11 | 72.47| -144.93 |\\n| 0.6 | 96.10 | 1.45E-11 | 53.36| -106.72 |\\n\\nHere are our experimental results, which show that **our watermark is effective across all three models**. The only difference is that the optimal detection performance is achieved at slightly different alpha values for each model. We have added related content in revised paper (Line 1059).\"}", "{\"comment\": \"Dear Reviewer nW4C,\\n\\nThank you again for your time. 
As the deadline for discussion is approaching, we do wish to hear from you to see if our response resolves your concerns. We are happy to provide any additional clarifications if needed.\"}", "{\"summary\": \"The paper introduces ESPEW, a novel approach aimed at providing robust copyright protection for Embeddings as a Service (EaaS). Existing watermarking techniques have been found inadequate, as they can be easily removed by attackers. The authors propose a new watermarking method that injects unique, identifiable watermarks into embeddings, ensuring that these watermarks are difficult to detect and eliminate. The paper presents extensive experimental results demonstrating the effectiveness and robustness of the ESPEW method against various watermark removal attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed method effectively addresses the limitations of existing watermarking techniques by making it difficult for attackers to identify and remove watermarks. The use of distinct watermark positions in embeddings contributes to this robustness.\\nThe authors conduct extensive experiments on four popular datasets under various removal intensities, showcasing the effectiveness of ESPEW compared to traditional methods.\", \"weaknesses\": \"Given the need to select specific positions in embeddings with the lowest magnitudes, this approach could impose a high computational load on servers, particularly under scenarios with heavy API usage. This might limit the applicability of ESpeW in high-demand environments or for EaaS providers with extensive traffic.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Re: Q2: Testing on possible adaptive attacks.\", \"comment\": \"**Q2: Testing on possible adaptive attacks.**\\n\\n**R2:** The key to ensuring the safety of our method is keeping the private key (target embedding) secure, rather than the security of the watermarking mechanism itself. **If only the watermarking mechanism is known but the private key is not leaked, conducting adaptive attacks is highly challenging**. \\n\\n**Below, we demonstrate a possible adaptive attack**. By statistically analyzing **the frequency of values at each position** and **identifying the most frequent value**, one could infer potential watermark positions. By setting these positions' values to zero, the attacker may attempt to remove the watermark. However, our experiments verify that **this type of adaptive attack cannot successfully remove the watermark without significantly degrading the embedding quality (reduced to 36.20% of the original).** Below are the detailed explanations and experiments:\\n\\n**1. Note that we perform a normalization operation on the embedding before returning it, which changes the values of the embedding. After normalization, the same watermarked positions in the embedding no longer have the same values.** Note that the Provider's EaaS normalizes the embedding before returning it. This means the embedding is divided by its L2 norm (a common technique used in embedding processing). This normalization process ensures that, even though we add the same value to the same positions in the embedding, after normalization, the values at those positions are no longer the same. Therefore, in fact, it is challenging to conduct the statistical analysis attack.\\n\\n**2. 
Experimental results demonstrate that statistical analysis attacks will not succeed unless watermark quality is degraded to as low as 36.20%.** We first provide a detailed description of the statistical analysis attack here.\\n\\n1. Assume that the training set of the stealer is $D_c \\\\in \\\\mathbb{R}^{N \\\\times M}$, and for a specific index $i$ of embedding, the corresponding array is $DE_i \\\\in \\\\mathbb{R}^N$. \\n2. Set a small tolerance level $T$, and using this tolerance as the step size to partition $DE_i$ and count the number of elements in each partition.\\n3. Initialize $SE = \\\\{\\\\}$. Then, add the partition with the highest number of elements to $SE$. This is because, when the tolerance is set to a particularly small value, if the watermark values cluster, these watermark values are likely to cluster within a specific partition and its neighboring partitions. Next, we add these $N_T$ neighboring partitions around the clustered partition to $SE$.\\n4. Calculate the upper and lower bounds of $SE$, and set the numbers within this interval to $0$. \\n5. Repeat steps 1-4 for all indices $i$.\\n6. Normalize the obtained embedding.\\n\\nThrough this algorithm, we can identify the abnormally clustered values, thereby carrying out the statistical analysis attack. In our experiments, we fix $T$ to a small value $10^{-4}$ and test the attack performance with varying $N_T$. Since the SAA operation only have negative affect on embedding quality, we can use cos-clean only (the cosine similarity between the embedding and clean embedding) to measure watermark quality. All other parameters the same as in our paper. The results are as follows:\\n\\n| $N_T$ | p-value\\u2193 | \\u2206cos(%) \\u2191 | \\u2206l2(%) \\u2193 | cos-clean (embedding quality) \\u2191 | FPR@1e-4| FPR@1e-5|\\n|------------------|-------------|---------------|----------------|---------------|---------------|---------------|\\n| 1 | 5.80E-10 | 7.85 | -15.69 | 0.9887 | \\u2714 | \\u2714 |\\n| 5 | 5.80E-10 | 7.84 | -15.69 | 0.9815 | \\u2714 | \\u2714 |\\n| 10 | 5.80E-10 | 7.36 | -14.71 | 0.9738 | \\u2714 | \\u2714 |\\n| 20 | 5.80E-10 | 5.99 | -11.99 | 0.9576 | \\u2714 | \\u2714 |\\n| 30 | 1.13E-08 | 5.67 | -11.34 | 0.9419 | \\u2714 | \\u2714 |\\n| 100 | 5.80E-10 | 7.95 | -15.91 | 0.8276 | \\u2714 | \\u2714 |\\n| 200 | 0.001115802 | 7.36 | -14.73 | 0.6478 | \\u2714 | \\u2714 |\\n| 250 | 0.033541659 | 5.24 | -10.48 | 0.5481 | \\u2714 | \\u2714 |\\n| 300 | 0.012298613 | 2.22 | -4.44 | 0.4511 | \\u2714 | \\u2714 |\\n| 350 | 0.012298613 | -7.27 | 14.54 | 0.3620 | \\u2714 | \\u2714 |\\n| 400 | 0.003967294 | -9.99 | 19.98 | 0.2835 | \\u2714 | \\u2714 |\\n\\nThe results show that with $N_T$ set to 350, all three metrics becomes ineffective in identifying watermarks, while the watermark quality degrades to 36.20% of its original level. That is, this adaptive attack cannot successfully remove the watermark without significantly degrading the embedding quality. **We have already incorporated these contents into revised version (Line 1128).**\"}" ] }
BlSIKSPhfz
Non-Equilibrium Dynamics of Hybrid Continuous-Discrete Ground-State Sampling
[ "Timothee Leleu", "Sam Reifenstein" ]
We propose a general framework for a hybrid continuous-discrete algorithm that integrates continuous-time deterministic dynamics with Metropolis-Hastings (MH) steps to combine search dynamics that either preserve or break detailed balance. Our purpose is to study the non-equilibrium dynamics that leads to the ground state of rugged energy landscapes in this general setting. Our results show that MH-driven dynamics reach ``easy'' ground states more quickly, indicating a stronger bias toward these solutions in algorithms using reversible transition probabilities. To validate this, we construct a set of Ising problem instances with a controllable bias in the energy landscape that makes certain degenerate solutions more accessible than others. The constructed hybrid algorithm demonstrates significant improvements in convergence and ground-state sampling accuracy, achieving a 100x speedup on GPU compared to simulated annealing, making it well-suited for large-scale applications.
[ "Combinatorial optimization", "Degenerate ground-state sampling", "Metropolis-Hastings algorithm", "Chaotic dynamics", "Wishart planted ensemble" ]
Accept (Poster)
https://openreview.net/pdf?id=BlSIKSPhfz
https://openreview.net/forum?id=BlSIKSPhfz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zdG3hnTdxa", "uTfYEgNi1e", "rbHhUkUk12", "rSXnDNIalc", "pyIl0mrOG0", "oZp4FU0vRP", "nO0wEfEYPV", "hlUkhMfucs", "ddwyrKo0YU", "blyz5NAcRr", "bW2xYeiy2Z", "UBBmsddXQB", "STsSO3zPei", "R6ke8vikTo", "PsO0jE5Mek", "OquXWbu6d0", "IC5mOStZrh", "HXMtwRgqLr", "Fg9wQehEn2", "F39oZQMSDi", "E7MDJ6kb0k", "8udiuizOT8", "8Wml87G7nb", "1Ncs2FzihC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732633518690, 1732578911135, 1732577995125, 1732647711402, 1730700964146, 1737524114035, 1732649260468, 1732662169184, 1731188003905, 1732578331295, 1733089928255, 1732579319625, 1732841514041, 1732579481462, 1732579827662, 1734620645760, 1732580909410, 1732680473378, 1730607807277, 1732845448441, 1732579084216, 1729908924205, 1730814294144, 1732678747975 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_SwgT" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_g5qM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_ymkA" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_SwgT" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_ymkA" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_g5qM" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_wvMs" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Area_Chair_32H9" ], [ "ICLR.cc/2025/Conference/Submission11258/Area_Chair_32H9" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_zbqu" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_zbqu" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_SwgT" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_SwgT" ], [ "ICLR.cc/2025/Conference/Submission11258/Reviewer_wvMs" ], [ "ICLR.cc/2025/Conference/Submission11258/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to the authors\", \"comment\": \"I appreciate the authors\\u2019 response and have adjusted my score. While I recognize the potential of the proposed method, I remain concerned about the MH correction implemented in the algorithm.\\n\\nAs mentioned earlier, I think it is necessary to correct both the momentum and position variables, as a detailed balance should hold for the joint distribution of these variables rather than just the marginal distribution of the position variable. Furthermore, proving that such a correction satisfies detailed balance might be a non-trivial task. I would recommend the authors carefully verify whether the proposed algorithm satisfies the detailed balance property.\\n\\n**Reference**\\n\\nA conceptual introduction to Hamiltonian Monte Carlo. 
arXiv:1701.02434 (2017).\"}", "{\"comment\": \"(C1) \\\" the theoretical understanding of the sampling properties of the MHCACm algorithm remains unaddressed [...]\\\"\\n\\n(R1) Obtaining theoretical guarantees for combinatorial optimization problems with a rugged landscape resembling spin glasses is challenging. Analytical estimations of certain thermodynamic quantities, such as the number of stable fixed points and mixing times, can be derived using replica calculations. However, even in simpler cases, establishing convergence and non-equilibrium dynamics analysis remains difficult (refer to Bernaschi 2020 in the references).\\n\\nThe algorithm analyzed in this paper introduces asymmetric connections due to the influence of auxiliary variables e multiplying the Ising couplings. This asymmetry significantly complicates statistical analysis, as it introduces the potential for limit cycles and chaotic dynamics. Although possible avenues of analysis are the dynamical cavity approach and related methods, our focus in this study has been on conducting numerical experiments.\\n\\nOur current hypothesis for behavior of the algorithm is discussed in section 4.4 (see \\u201cThe energy landscape is structured [...]\\u201c)\\n\\n(C2) \\\"I would like to suggest the authors include more comparison with other prominent sampling algorithms using collective variables, for example, 'Sampling metastable systems using collective variables and Jarzynski\\u2013Crooks paths' by G. Stoltz et al [...]\\\"\\n\\n(R2) Thank you for suggesting this interesting work. We have added the reference to our manuscript to make the references more exhaustive. Since the algorithm by G. Stoltz et al. is designed for continuous sampling rather than discrete, it could not be easily applied to our benchmark.\\n\\nWe think that combining the two approaches is not trivial, given that it is not straightforward to apply dynamic collective variables space to our scenario of discrete optimization in the binary space. Our approach uses a relaxation to continuous dynamics, but the underlying problem is discrete. Finding out how to combine these two approaches can be the subject of interesting future works.\\n\\n(C3) \\\"I wonder if the authors could provide specific examples of problem types or landscapes where their method may face challenges.\\\"\\n\\n(R3) In the manuscript, we compare the impact of adding the MH correction in two scenarios: unbiased degenerate Wishart planted instances (where all ground states have symmetric properties in the energy landscape) and biased degenerate Wishart planted instances (where some degenerate ground states are more easily reachable due to the structure of the energy landscape).\\nAs shown in Fig. 2, the unbiased instances are examples for which the introduction of the hybrid approach with the MH step hurts performance of the algorithm, whereas it helps in the case of the biased instances.\\nThe unbiased Wishart planted instances are one example which showcases the limits of the hybrid approach.\\n\\n(C4) \\\"I wonder if the authors could provide additional insights into how MHCACm scales with increased problem complexity [...]\\\"\\n\\nThank you very much for the interesting comment. We conducted additional numerical simulations and considered the impact of the parameter \\u03b1_WPE, which determines the complexity of Wishart planted instances. 
For certain values of \u03b1_WPE, the recovery of the planted solution becomes easier or harder.\\n\\nWe added the new Figure 4, which compares the performance of CACm and MHCACm with respect to this complexity parameter \u03b1_WPE. We observe that the relative reduction in TTS due to the introduction of the MH step (for the biased WPE case) is more pronounced for instances of higher complexity (i.e., smaller parameter \u03b1WPE).\\n\\n(C5) \\\"I wonder if the authors could provide quantitative results on the algorithm's performance in finding both \\\"easy\\\" and \\\"hard\\\" ground states across different problem instances\\\"\\n\\nThe answer to your question is contained in Figure 3, where the times to find \u201ceasy\u201d and \u201chard\u201d ground-states are compared in Fig. 3 (a) and (b), respectively. It is shown that the hybrid algorithm MHCACm shows a reduced time to find \u201ceasy\u201d ground-states for biased instances (b>>0).\\n\\n(C6) \\\"Additionally, I wonder if the authors could discuss potential modifications to the algorithm that could help balance the performance of exploration of both easy and hard ground states.\\\"\\n\\n(R6) In the current algorithm, the acceptance criterion of the MH step does not really deal with the asymmetric flow of the dynamics due to the auxiliary variables e, since the variables e are reset to 1 after each MH step. Given that CACm shows similar performance for both \u201ceasy\u201d and \u201chard\u201d, it is possible that modifying the MH step to take advantage of the information contained in the e variables could provide the benefits of the MH step for both \u201chard\u201d and \u201ceasy\u201d ground-states. However, this is speculation at this stage, and further work is needed to explore this possibility.\"}", "{\"title\": \"Revisions\", \"comment\": \"We want to thank the referees for their insightful comments. We have run additional experiments and made some changes to the manuscript to respond to their questions. We believe the manuscript is substantially improved by these changes. The list of updates is as follows:\\n\\n1) Figure 3 has been modified to show the success probability of finding ground-states (see Panel (c)).\\n2) A new Figure 4 has been added, which shows the dependence of the time to solution with respect to the complexity parameter \u03b1_WPE of Wishart planted instances.\\n3) In Table 4, the run time in seconds of additional algorithms (AIM, CACm) has been added and typos in the other numbers corrected. Conclusions derived from these results are the same as in the previous version.\\n4) Additional numerical results in Appendix S6 for comparison with a recently proposed sampling algorithm.\\n5) The notation has been improved for clarity.\\n6) Some explanations have been rephrased to improve readability.\\n\\nMajor revisions to the manuscript are shown in red color.\"}", "{\"comment\": \"Thank you very much for your interest and updated score.\\n\\nWhile we totally agree with your comment in the context of HMC, we think that the situation is different in our current scenario. This is because the momentum term is effectively reset after each Metropolis-Hastings jump. We let the deterministic path run free for n steps, during which the momentum and auxiliary variables are free to be updated without an MH step. Then, we set the auxiliary variables e to e(nk+0)=1 and the internal state u to u(nk+0) = u(nk-1) = 0 at the start of a new set of probabilistic jump and deterministic path. 
In our case, we do not need frequent sampling at every step, because the goal is to find degenerate ground-states in the binary configuration space rather than approximate a continuous distribution.\\n\\nNote that we have verified convergence to the Boltzmann distribution in Appendix section S3 and MHCACm converge in distribution very close to the Boltzmann distribution at the corresponding temperature.\\n\\nThat said, we agree with you that it would be interesting in future work to include momentum information within the MH step without reset, making our chaotic sampling method closer to HMC. Additionally, including information about the auxiliary variables e without resetting them would be another interesting development of this approach.\"}", "{\"summary\": \"The paper proposes a class of hybrid continuous-discrete algorithms by integrating continuous dynamics with Metropolis-Hastings steps. The paper also constructs a set of Ising problems with a tunable parameter to trade off between easy ground states and hard degenerate ground states, in order to experiment with the bias of different algorithms. The proposed class of algorithms are also fast solvers that achieve a great amount of acceleration on GPU due to a parallelizable structure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper writing is nice and structured, with a comprehensive literature review and detailed problem set-up.\", \"The paper looks technically sound with solid mathematical proof.\", \"The proposed algorithm is evaluated on multiple tasks and compared with various other benchmark methods, showing competitive performance.\"], \"weaknesses\": [\"I am not very familiar with the literature, but seems that the tasks of ground-state sampling are not formally defined in the paper, as well as the idea of non-equilibrium dynamics.\", \"The connection between ground-state sampling and deep learning optimization/generalization mentioned in the paper is interesting, but the discussion is very limited.\", \"For numerical experiments, the definition of TTS is hard to comprehend. Does smaller TTS indicate better algorithmic performance?\"], \"questions\": [\"Based on the algorithmic design in this paper, is there any insight we can draw on what an ideal optimizer for deep neural nets should look like?\", \"Can you elaborate more on the numerical performance of CACm and MHCACm? From the charts, the two performances seem to be close to each other.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I thank the authors for their detailed responses, especially adding additional experiments. Most of my concerns are resolved. I have updated my score.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for your response. I revisited your responses, Section 3, and Algorithm 1 to gain a better understanding of the implementation.\\n\\nMy primary concern remains with the correctness of the MH steps. Based on your response, I understand that $u$ and $e$ are reset after $n$ sampling steps and that the MH correction is applied every $n$ step. In other words, the implementation appears to accept $\\\\tau$ with probability $A$ (every after $n$ step), while rejecting other variables such as $u$ and $e$ at the same time. 
Since $u$ and $e$ play a role in updating $\\\\tau$, _they should also be considered in the MH correction step (accept or reject all $\\\\tau$, $u$, and $e$ simultaneously)._ \\n\\nWhile this approach might still perform well empirically (possibly due to the small bias introduced by ignoring $u$ and $e$ in the MH steps), _the joint distribution of all variables no longer satisfies detailed balance, and the stationary distribution may not converge to the target anymore._ In other words, since the algorithm still does not satisfy detailed balance, why do we need the extra MH steps? From my perspective, this remains a significant theoretical issue. \\n\\nAdditionally, I noticed several issues with the clarity and consistency of the notations. For instance, the variable $\\\\tau$ in Eq. (7) is not clearly defined elsewhere (maybe set $\\\\tau=x((k+1)n)$?). In Algorithm 1 (Appendix S2), the input to \\\"DETERMINISTIC PATH\\\" ($\\\\sigma$) and its output ($x$) are not explicitly linked. The input $y$ to \\\"PROBABILISTIC JUMP\\\" is not used, and the inputs $y$ and $\\\\tilde{x}$ in \\\"METROPOLIS-HASTINGS STEP\\\" are not utilized in line 22. Furthermore, there seems to be a missing step to update $\\\\tau$ after computing $A$ in line 22. \\n\\nI recommend revisiting the notation and definitions to improve its clarity and accessibility. Additionally, moving Algorithm 1 to the main text might help readers better follow the proposed method and understand its implementation.\"}", "{\"summary\": \"This paper proposes a new algorithm that combines chaotic search and Metropolis-Hastings. The goal seems to solve optimization problems in discrete non-convex energy landscapes. The proposed algorithm is tested on several combinatorial optimization tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The empirical results include multiple baselines and comparisons in terms of different metrics.\", \"The visualization in Fig.1 clearly shows the main algorithmic idea.\"], \"weaknesses\": [\"The problem that this paper aims to solve is vague. Is the goal to develop an algorithm that samples better in non-convex energy landscapes, or for optimization in discrete landscapes? Or is the goal to understand non-equilibrium dynamics in non-convex energy landscapes? Similarly, the motivation of the proposed algorithm which combines chaotic search with Metropolis-Hastings is not well-explained.\", \"The novelty of the proposed algorithm is unclear. Is the algorithm a straightforward combination of chaotic search and MH? If not, what is the challenge, and how does the paper solve the challenge?\", \"The empirical improvement is not consistent. For example, Fig.2 shows that CACm is better than proposed method also the variance of the proposed is significantly larger than the baselines.\", \"The runtime comparison only considers simulated annealing. It will be better to include other baselines as well.\", \"The paper compared the standard Gibbs with gradient which is developed for combinatorial optimization. 
It will be more convincing to compare with gradient-based discrete MCMC that is developed for CO, such as [1].\", \"[1] Revisiting sampling for combinatorial optimization, ICML 2023\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"(C1) \\\"The problem that this paper aims to solve is vague [...]\\\"\\n\\n(R1) We have added a definition of the task of ground-state sampling, which is the focus of this work, in the introduction to make things clearer:\\n\\n\\u201cGround-state sampling involves finding not just any ground state, as it is often defined in combinatorial optimization, but multiple degenerate ground states. In this context, non-equilibrium dynamics refers to the processes through which systems evolve over time toward steady states, with varying rates of reaching degenerate ground states that differ from equilibrium expectations\\u201c\\n\\nThe goal is to study the effect of adding an MH step on the time required to find degenerate ground states and how this depends on the shape of the energy landscape. This is achieved by comparing biased and unbiased Wishart planted instances and other benchmark sets. A byproduct of this work is the definition of a general algorithm that, when properly tuned, demonstrates state-of-the-art performance on several benchmarks.\\n\\n(C2) \\\"The novelty of the proposed algorithm is unclear [...]\\\"\\n\\n(R2) The main contributions of this work are as follows:\\n\\n1) A unified framework, MHCACm, that generalizes many existing methods, including simulated annealing, Hopfield neural networks, analog iterative machines, and CACm. This framework achieves optimal performance when parameters are properly tuned.\\n\\n2) A new set of planted instances enables the study of the effects of non-equilibrium dynamics. We observe that the optimal algorithm (or parameter settings for a general algorithm) depends on whether the objective is to sample any ground state or all ground states.\\nThe combination of these two contributions enables the study of how the time to find degenerate ground states is influenced by the effects of hybridization with the MH step and various algorithmic settings.\\n\\n(C3) \\\"The empirical improvement is not consistent. For example, Fig.2 shows that CACm is better than proposed method also the variance of the proposed is significantly larger than the baselines.\\\"\\n\\n(R3) Let us rephrase your comment to ensure we understand it correctly:\\n\\n1) CACm has a lower TTS than MHCACm in Fig. 2a.\\n2) MHCACm exhibits larger variance in Fig. 2b.\", \"our_response\": \"The main argument of the paper is that there is no single \\u201cwinner\\u201d algorithm for all scenarios. Specifically, for the task of sampling from unbiased Wishart instances, CACm indeed performs better. However, in the case of biased instances, MHCACm outperforms CACm. This highlights the \\\"no free lunch\\\" principle.\", \"figure_3_explains_why_mhcacm_is_better_when_there_is_a_bias\": \"MHCACm is more effective at finding \\u201ceasy-to-reach\\u201d ground states (Fig. 3a) but is less effective at finding the \\u201chard-to-reach\\u201d ones (Fig. 3b). In contrast, CACm does not exhibit significant differences between these cases. Therefore, when the task is to find any ground state, MHCACm achieves this much faster (resulting in a smaller TTS), which accounts for the difference observed in Fig. 
2a.\\n\\nThe larger variance in TTS for MHCACm, as shown in Fig. 2b, is indeed expected. This is because the optimal TTS for MHCACm occurs at smaller values of T and p0\\u200b , which naturally increases the variance of TTS. This is supported by Fig. 3c, where the optimal T for MHCACm, corresponding to a smaller TTS, is much lower than that of CACm.\\n\\n(C4) \\\"The runtime comparison only considers simulated annealing. It will be better to include other baselines as well.\\\"\\n\\n(R4) Thank you for the suggestion. We have included AIM and CACm on CPU as additional baselines, as these two algorithms are considered state-of-the-art. As anticipated, MHCACm outperforms the others on CPU for biased instances.\\n\\n(C5) \\\"The paper compared the standard Gibbs with gradient which is developed for combinatorial optimization. It will be more convincing to compare with gradient-based discrete MCMC that is developed for CO, such as [1].\\n[1] Revisiting sampling for combinatorial optimization, ICML 2023\\\"\\n\\n(R5) We have experimented with the algorithm mentioned in the paper you referenced and included the numerical results in Appendix S6. MHCACm appears to exhibit a higher probability of finding ground states in both unbiased and biased instances. We would be happy to discuss this point in more detail if you are interested.\"}", "{\"comment\": \"I appreciated the author's further explanations and clarifications. I am happy to maintain my score to reflect my positive support for this manuscript.\"}", "{\"comment\": \"Thank you for the response and all updates! I modified my rating score for this work.\"}", "{\"comment\": \"We appreciate your concern regarding the theoretical correctness of the detailed balance property. From our perspective, the explanation that the acceptance rule does not depend on past values of u and e due to the reset mechanism appears to address this point comprehensively. We are open to further clarification if there are specific aspects that you believe remain unresolved, but it is not immediately clear to us what additional proof could be provided in this regard.\\n\\nWe sincerely value your feedback and thank you for taking the time to review our work.\"}", "{\"comment\": \"(C1) \\\"Specifically, it is not clear why a fair sampling strategy in the CAC framework would lead to a better performance in an optimization (ground state finding) problem [...]\\\"\\n\\n(R1) Our paper primarily focuses on numerical results and the observation that there is a difference in the behavior of CAC with and without the Metropolis-Hastings step. To explore this, we construct a new type of planted instances that exhibit a tunable bias within their degenerate ground states.\\nWhile we agree that a theoretical justification would be highly interesting, the effects discussed in this paper are non-equilibrium dynamical effects, for which developing a theoretical framework can be challenging (refer to Bernaschi 2020 in the references).\\nMoreover, the algorithm analyzed in this paper introduces asymmetric connections due to the influence of auxiliary variables e multiplying the Ising couplings. This asymmetry significantly complicates statistical analysis, as it creates the potential for limit cycles and chaotic dynamics. Providing a theoretical justification would require substantial additional work, which is beyond the scope of this paper. 
The current work already presents new ideas and concepts.\\n\\nOur current hypothesis for behavior of the algorithm is discussed in section 4.4 (see \\u201cThe energy landscape is structured [...]\\u201c)\\n\\n(C2) \\\"No empirical results for real-world optimization problems [...]\\\"\\n\\n(R2) We agree with you. However, there is a limitation when working with real-world optimization problems: the ground state is not known a priori. From an experimental perspective, using planted instances is much more rigorous. It would be interesting to explore the application of our methods to real-world problems in future work.\\n\\n(C3) \\\"It would be more transparent if the success probability and runtime data could be provided as well [...]\\\"\\n\\n(R3) Thank you for this suggestion. We have revised Figure 3 to include subplot (c), which shows the probability of finding the \\\"easy-to-reach\\\" ground states, along with the corresponding TTS_easy\\u200b in subplot (d). The additional results demonstrate that the improved performance of MHCACm is indeed due to a higher success probability, rather than GPU parallelization.\\n\\n(C4) \\\"Can you elaborate more on the \\\"dual-primal Lagrangian approach\\\" and its difference from CAC?\\\"\\n\\n(R4) The relationship between CAC and the dual-primal Lagrangian approach is discussed in [1]. CAC is based on the concept of relaxing binary variables to continuous variables (or soft spins), using gradient descent, and employing auxiliary variables to modulate the dynamics and constrain the system to return to a binary state after a transient phase.\\n\\nThe dual-primal Lagrangian approach achieves a similar objective through the concept of descent-ascent, where gradient descent is performed in the relaxed continuous space of soft spins, and gradient ascent is carried out in the space of Lagrangian multipliers used to enforce a binary state constraint.\\n\\nHowever, [1] demonstrates that the auxiliary variables used in the dual-primal Lagrangian approach and CAC are not equivalent: in CAC, the auxiliary variables act as a pre-factor to the gradient, which is not the case in the dual-primal Lagrangian approach. As a result, CAC introduces effective asymmetric connections.\\n\\n[1] Sri Krishna Vadlamani, Tianyao Patrick Xiao, and Eli Yablonovitch. Physics successfully implements lagrange multiplier optimization. Proceedings of the National Academy of Sciences, 117(43): 26639\\u201326650, 2020.\\n\\n(C5) \\\"Some minor typos: e.g., line 122 'in order (to) benchmark this algorithm's ability'\\\"\\n\\n(R5) Thanks.\"}", "{\"comment\": \"(C1) \\\"While the authors present an interesting optimization algorithm, the clarity of the writing is a major concern [...]\\\"\\n\\n(R1) We have made some changes to the manuscript to improve the notation and readability. In particular, we have replaced some symbols with superscripts previously used to other symbols which are easier to read.\\n\\nWe also have rephrased several explanations. We believe the clarity of the manuscript has been substantially improved. If you can point more specifically to notation issues, please let us know and we will change them.\\n\\n(C2) \\\"The momentum and pre-conditioning typically serve different roles in optimization [...]\\\"\\n\\n(R2) Momentum and pre-conditioning due to the correction of amplitude heterogeneity have indeed different roles in the dynamics. Recently proposed Ising problem solvers have used these two mechanisms independently (see AIM [Kalinin et al. 
2023] and dSBM [Goto et al. 2021] for momentum and CAC [Leleu et al. 2019] for pre-conditioning related to correction of amplitude heterogeneity). Numerical experiments indeed show that the combination of the two has better performance than either one used separately. In table 2 for example, the time to solution of CACm is smaller than that of CAC and AIM.\\n\\nIn appendix section S1, the preconditioning is analyzed without momentum. A similar analysis can be extended to take into account momentum but is out of scope of this work. In this paper, we focus on showing using numerical experiments that the combination of the two is useful for combinatorial optimization.\\n\\n(C3) \\\"The paper lacks theoretical guarantees regarding the convergence or performance of the proposed algorithm\\\"\\n\\n(R3) Obtaining theoretical guarantees for combinatorial optimization problems exhibiting a rugged landscape akin to spin glasses is not straightforward. It is possible to obtain using replica calculation and analytical estimation of some thermodynamic quantities, such as the number of stable fixed points etc. Even in simpler scenarios, analysis of convergence and non-equilibrium dynamics is difficult to establish (see Bernaschi 2020 in the references).\\n\\nMoreover, the algorithm considered in this paper exhibits asymmetric connections due to the effect of the auxiliary variables e multiplying the Ising couplings. Consequently, statistical analysis is rendered much more complicated due to the possibility of limit cycles and chaotic dynamics. This is why we have focused on numerical experiments in this work.\\n\\nOur current hypothesis for behavior of the algorithm is discussed in section 4.4 (see \\u201cThe energy landscape is structured [...]\\u201c)\\n\\n(C4) \\\"Since the algorithm incorporates a momentum variable, it would be more consistent to account for this momentum within the MH step\\\"\\n\\n(R4) We completely agree, and this is indeed an interesting direction to explore in future research. In this work, we focused on a simpler scenario as an initial step\\u2014quantifying the effect of hybrid dynamics with an MH step on biased Wishart planted instances. Incorporating momentum is a natural next step in this line of investigation.\\n\\n(C5) \\\"The paper lacks clear definitions for essential variables (e.g., u and e in Equation (1)).\\\"\\n\\n(R5) Thank you for the comment. We have added a definition for the general reader as follows:\\n\\nThe variables x are often referred to as \\\"soft spins.\\\" The variables u represent the internal states of these soft spins, while the auxiliary variable e accounts for variations in their amplitude.\\n\\n(C6) \\\"The time variable t is used ambiguously. It denotes continuous evolution in Equation (1) but has discrete updates in Equations (4)-(5).\\\"\\n\\n(R6) Thank you for pointing this out. We have replaced t by index m.\\n\\n(C7) \\\"Although the authors appear to focus on ground-state sampling, the formulation provided in Section 3.2 is more oriented toward sampling from a Gibbs measure [...]\\\"\\n\\n(R7) We have clarified that we focus on ground-state sampling by explaining in section 3.2:\\n\\nThe goal is to design a dynamical system capable of sampling from the ground states of $V$, specifically from the zero-temperature distribution P(\\u03c3) defined on the discrete space [...].\\n\\n(C8) \\\"Ensure that all variables and abbreviations are clearly defined. 
\\\"\\n\\n(R8) We have detailed the abbreviation in Table 1.\"}", "{\"metareview\": \"This paper considers designing non-equilibrium dynamics for computing the ground state of rugged energy landscapes. Two variants of the CAC (chaotic amplitude control) algorithm, namely CAC with momentum (CACm) and Metropolis-Hastings CAC with momentum (HMCACm) were proposed. Empirical results show fast convergence of the methods. Reviewers expressed concerns about the precise definition of the problem to solve, comparison with existing approaches, and theoretical justification, but after discussions they suggested most of the concerns were resolved. My impression is overall the strengths overweight the weaknesses, hence the recommendation of acceptance. However, the authors should account for the discussions in a revision.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers expressed concerns about the precise definition of the problem to solve, comparison with existing approaches, and theoretical justification, but after discussions they suggested most of the concerns were resolved. My impression is overall the strengths overweight the weaknesses, hence the recommendation of acceptance.\"}", "{\"comment\": \"Dear Reviewers ymkA, g5qM, zbqu, SwgT,\\nIf not already, could you please take a look at the authors' rebuttal? Thank you for this important service.\\n-AC\"}", "{\"comment\": \"We thank the authors for their response. I have decided to maintain my current score.\"}", "{\"summary\": \"This paper presents two variants of the CAC (chaotic amplitude control) algorithm, namely CAC with momentum (CACm) and Metropolis-Hastings CAC with momentum (HMCACm). CACm is a deterministic continuous-time dynamical model for combinatorial optimization, and HMCACm is a Metropolis-Hastings adjusted version of CACm with the Boltzmann distribution as the theoretical equilibrium. MHCACm can be regarded as a unified framework that generalizes many existing methods, including simulated annealing, Hopfield neural networks, analog iterative machines, and CAC(m). Numerical results show that this method illustrates faster relaxation time on NP-hard problems. In particular, MHCACm exhibits excellent performance in sampling from easier ground states, which may be relevant to training over-parametrized neural networks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"State-of-the-art algorithms exploiting relaxation to a continuous state form discrete combinatorial optimization do not sample fairly from the Boltzmann distribution due to the lack of detailed balance. The MHCACm algorithm fills in this conceptual gap by adding a Metropolis-Hasting step. This new design ensures that MHCACm samples fairly from a discrete distribution while iterating over the relaxed (continuous) search space.\", \"The numerical results look strong. Table 3 shows that MHCACm has a success probability higher than dSBM, another well-known Ising solver based on GPU.\", \"MHCACm is well-suited for large-scale deployment on GPU because its computational bottleneck is matrix-vector multiplication.\"], \"weaknesses\": [\"Little theoretical justification for the effectiveness of MHCACm is provided. Specifically, it is not clear why a fair sampling strategy in the CAC framework would lead to a better performance in an optimization (ground state finding) problem. Relaxation to the ground state does not necessarily need to go through a detailed-balance algorithmic path. 
It would be nice to discuss how the Metropolis-Hastings step interacts with the CAC dynamics to potentially improve optimization performance.\", \"No empirical results for real-world optimization problems. While the performance of MHCACm has been benchmarked over the dWPE instances and GSET, these test instances are highly artificial and may not reflect the performance of the algorithm in a practical setting (e.g., quadratic assignment problems, portfolio optimization problems, etc.).\"], \"questions\": [\"The paper only reports the TTS in the experiments on dWPE instances (section 4.4). It would be more transparent if the success probability and runtime data could be provided as well, as it is not clear whether the advantage comes from a higher success probability or a shorter wall-clock runtime due to GPU parallelization.\", \"Can you elaborate more on the \\\"dual-primal Lagrangian approach\\\" and its difference from CAC?\", \"Some minor typos: e.g., line 122 \\\"in order (to) benchmark this algorithm's ability\\\".\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thanks for the authors\\u2019 detailed explanation. Upon revisiting the algorithm's MH steps, I may misunderstand the implementation. I rechecked the algorithm and the algorithm should satisfy the detailed balance. I have adjusted my score accordingly.\"}", "{\"comment\": \"(C1) \\\"the tasks of ground-state sampling are not formally defined in the paper, as well as the idea of non-equilibrium dynamics\\\"\\n\\n(R1) Thank you for the suggestion. We have added the following explanation in introduction:\\n\\n\\u201cGround-state sampling involves finding not just any ground state, as it is often defined in combinatorial optimization, but multiple degenerate ground states. In this context, non-equilibrium dynamics refers to the processes through which systems evolve over time toward steady states, with varying rates of reaching degenerate ground states that differ from equilibrium expectations\\u201d\\n\\n(C2) \\\"The connection between ground-state sampling and deep learning optimization/generalization mentioned in the paper is interesting, but the discussion is very limited.\\\"\\n\\n(R2) The connection between ground-state sampling and deep neural networks involves the concept of implicit bias (the inherent tendencies of optimization algorithms, such as stochastic gradient descent, to prefer certain solutions or behaviors over others) and the fact that there are many solutions of zero training error in overparameterized deep neural networks (see more details in [Soudry et al., 2018; Baity-Jesi et al., 2018; Feng & Tu, 2021; Baldassi et al., 2022; 2023]. From the viewpoint of combinatorial optimization, these concepts are reminiscent of non-equilibrium dynamics and sampling of degenerate ground-states. Although this connection is important, it is the subject of future work to develop it further.\\n\\n(C3) \\\"For numerical experiments, the definition of TTS is hard to comprehend. 
Does smaller TTS indicate better algorithmic performance?\\\"\\n\\n(R3) Yes, we have added the following to be clear:\\n\\nA common metric for evaluating the performance of Ising solvers is the \\\"time to solution\\\" (TTS) which measures the number of steps needed to have 99% probability of finding any ground state (the smaller, the better the algorithm's performance).\\n\\n(C4) \\\"Based on the algorithmic design in this paper, is there any insight we can draw on what an ideal optimizer for deep neural nets should look like?\\\"\\n\\n(R4) When applied to learning in deep neural networks, our results suggest that intermittent jumps, coupled with Metropolis-Hastings (MH) corrections, could enhance optimization by facilitating transitions to states corresponding to larger basins of attraction. These states are often linked to better generalization, as indicated in some literature.\\n\\n(C5) \\\"Can you elaborate more on the numerical performance of CACm and MHCACm? From the charts, the two performances seem to be close to each other.\\\"\\n\\n(R5) In the case of biased Wishart planted instances, the performance of MHCACm is about 10x better than CACm and AIM (see Fig. 3 (a) at b=12 and table 4). Indeed, the time to solution is 10x smaller for MHCACm. This is a significant difference given that AIM and CACm are state of the art algorithms for combinatorial optimization.\"}", "{\"summary\": \"The authors propose a novel hybrid continuous-discrete algorithm that combines deterministic continuous dynamics with Metropolis-Hastings (MH) steps for ground-state sampling in non-equilibrium dynamics.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The combination of chaotic dynamics with an MH step to ensure convergence to a target distribution represents an innovative approach to hybrid sampling.\\n2. The hybrid algorithm shows potential for improving ground-state sampling performance in combinatorial optimization\\n3. The authors address parallelization for efficient computation on GPUs\\n4. The discussion on combining chaotic dynamics with probabilistic methods provides a useful context for researchers working at the intersection of machine learning and statistical physics.\", \"weaknesses\": \"While the authors present an interesting optimization algorithm, the clarity of the writing is a major concern. The main ideas are difficult to follow in the current presentation. I would encourage the authors to reconsider their notation and improve their writing to convey their ideas more effectively to readers.\\n\\n\\n**Major concerns**\\n1. The momentum and pre-conditioning typically serve different roles in optimization: momentum accumulates past gradients, and pre-conditioning captures curvature information. I do not think the connection demonstrated in Section 3.2 is trivial. A detailed explanation is needed in the main text to connect them.\\n2. The paper lacks theoretical guarantees regarding the convergence or performance of the proposed algorithm.\\n3. Since the algorithm incorporates a momentum variable, it would be more consistent to account for this momentum within the MH step (Equation (7)), rather than applying it solely to $\\\\boldsymbol{\\\\sigma}$.\\n\\n**Other suggestions**\\n1. The paper lacks clear definitions for essential variables (e.g., $\\\\boldsymbol{u}$ and $\\\\boldsymbol{e}$ in Equation (1)).\\n2. The time variable $t$ is used ambiguously. 
It denotes continuous evolution in Equation (1) but has discrete updates in Equations (4)-(5).\\n3. Although the authors appear to focus on ground-state sampling, the formulation provided in Section 3.2 is more oriented toward sampling from a Gibbs measure rather than explicitly defining the ground-state sampling.\\n4. Ensure that all variables and abbreviations are clearly defined. For example, abbreviations such as SA, HNN, and CACm used in Table 1 should be explicitly explained in the main text.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Overall, this paper proposes a promising hybrid continuous-discrete sampling framework that demonstrates clear benefits in convergence speed and sampling accuracy for rugged energy landscapes. This paper could benefit from additional theoretical insights and comparisons with established methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1)The proposed method effectively leverages the Metropolis-Hastings method within a continuous-discrete framework, enhancing sampling efficiency for ground-state discovery in challenging discrete landscapes. This approach demonstrates practical advantages, such as notable improvements in convergence speed and sampling accuracy on GPU architectures.\\n\\n(2) The method\\u2019s focus on non-equilibrium dynamics and its capacity to identify accessible ground states faster than traditional approaches offer a valuable contribution to optimization in rugged energy landscapes.\", \"weaknesses\": \"(1) Although the paper demonstrates the practical benefits of the hybrid continuous-discrete approach, the theoretical understanding of the sampling properties of the MHCACm algorithm remains unaddressed. I wonder if the authors could provide a discussion on potential directions for analytical proof of the sampling capabilities of MHCACm, such as convergence rates or mixing times.\\n\\n(2) I would like to suggest the authors include more comparison with other prominent sampling algorithms using collective variables, for example, 'Sampling metastable systems using collective variables and Jarzynski\\u2013Crooks paths' by G. Stoltz et al. \\n\\nIn particular, I am curious to see how the use of collective variables in that work relates to or differs from the proposed method of this paper, and if the authors can combine both approaches.\", \"questions\": \"(1) While the method achieves a 100x speedup over simulated annealing on GPUs, a discussion of any limitations or computational trade-offs encountered in specific scenarios (such as highly multimodal landscapes) would be beneficial, for instance, I wonder if the authors could provide specific examples of problem types or landscapes where their method may face challenges.\\n\\n(2) I wonder if the authors could provide additional insights into how MHCACm scales with increased problem complexity, for instance, if the authors could demonstrate how the empirical scaling results and the performance of proposed algorithm changes with increasing complexity for a range of benchmark examples.\\n\\n(3) The paper implies that the method\\u2019s bias towards \\u201ceasy\\u201d ground states is advantageous, but this effect could also limit the algorithm\\u2019s ability to reach more challenging or rare ground states. 
I wonder if the authors could provide quantitative results on the algorithm's performance in finding both \\\"easy\\\" and \\\"hard\\\" ground states across different problem instances. \\n\\n(4) Additionally, I wonder if the authors could discuss potential modifications to the algorithm that could help balance the performance of exploration of both easy and hard ground states.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We really appreciate you taking the time to check again and, indeed, there were some clarifications that were needed; especially in the pseudo-code.\\n\\nThe definition of $\\\\tau$ is given in the paragraph after equation (7), at the top of page 5. We have added a more direct definition for clarify.\", \"we_have_made_the_pseudo_code_of_appendix_s2_more_explicit_and_made_a_few_updates_to_the_notation\": [\"we have removed $p$ and used only $y$ in \\u201cDETERMINISTIC PATH\\u201d for simplicity,\", \"the input to PROBABILISTIC JUMP is fixed,\", \"we have made explicit that the \\u201cMETROPOLIS-HASTINGS STEP\\u201d depends also on $y$ and $\\\\tilde{x}$ ,\", \"we have written explicitly the update of variables for $\\\\tau$.\", \"We agree that our previous version of the pseudo-code was too reliant on implicit information written in the main text and we hope this version is clearer. Thank you very much for pointing this out.\", \"Concerning the dependance of the update rule on the momentum term, the terms $u$ and $e$ are solely a function of the initial state $\\\\sigma$, given that $u$ and $e$ are reset at every deterministic path. Thus, $y$ and, in turn, $\\\\tau$ only depends on the initial state of the deterministic trajectory $\\\\sigma$. Numerical results support this approach in practice.\", \"We think it is better to focus on the main numerical results in the main manuscript rather than detailing the pseudo-code, which is taking too much space.\"]}" ] }
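The exchange above leans on two technical ingredients that can be made concrete with a short sketch. The snippet below is an illustration only, not the authors' MHCACm implementation: it shows (a) the generic Metropolis-Hastings acceptance rule that enforces detailed balance with respect to a Boltzmann distribution over Ising configurations for a *symmetric* proposal (the paper's trajectory-based proposal requires the appropriate Hastings ratio, which is not reproduced here), and (b) the standard time-to-solution (TTS) formula referenced in the responses, i.e., the expected number of steps needed to reach a 99% probability of hitting a ground state. The coupling matrix `J`, the inverse temperature `beta`, and all function names are illustrative assumptions.

```python
import numpy as np

def ising_energy(J, sigma):
    # Ising energy V(sigma) = -1/2 * sigma^T J sigma (sign/scale conventions vary by paper).
    return -0.5 * float(sigma @ (J @ sigma))

def metropolis_step(J, sigma, sigma_proposed, beta, rng):
    # Accept with probability min(1, exp(-beta * dE)); as written this is valid only
    # for a symmetric proposal distribution.
    dE = ising_energy(J, sigma_proposed) - ising_energy(J, sigma)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        return sigma_proposed, True
    return sigma, False

def time_to_solution(p_success, steps_per_run, target=0.99):
    # Standard TTS metric: steps needed for `target` probability of success; smaller is better.
    if p_success <= 0.0:
        return np.inf
    if p_success >= target:
        return float(steps_per_run)
    return steps_per_run * np.log(1.0 - target) / np.log(1.0 - p_success)
```

For instance, a solver that reaches a ground state in 30% of 1,000-step runs has TTS ≈ 1000 · ln(0.01)/ln(0.7) ≈ 1.3 × 10^4 steps.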
Bl3e8HV9xW
Leveraging Variable Sparsity to Refine Pareto Stationarity in Multi-Objective Optimization
[ "Zeou Hu", "Yaoliang Yu" ]
Gradient-based multi-objective optimization (MOO) is essential in modern machine learning, with applications in e.g., multi-task learning, federated learning, algorithmic fairness and reinforcement learning. In this work, we first reveal some limitations of Pareto stationarity, a widely accepted first-order condition for Pareto optimality, in the presence of sparse function-variable structures. Next, to account for such sparsity, we propose a novel solution concept termed Refined Pareto Stationarity (RPS), which we prove is always sandwiched between Pareto optimality and Pareto stationarity. We give an efficient partitioning algorithm to automatically mine the function-variable dependency and substantially trim non-optimal Pareto stationary solutions. Then, we show that gradient-based descent algorithms in MOO can be enhanced with our refined partitioning. In particular, we propose Multiple Gradient Descent Algorithm with Refined Partition (RP-MGDA) as an example method that converges to RPS, while still enjoying a similar per-step complexity and convergence rate. Lastly, we validate our approach through experiments on both synthetic examples and realistic application scenarios where distinct function-variable dependency structures appear. Our results highlight the importance of exploiting function-variable structure in gradient-based MOO, and provide a seamless enhancement to existing approaches.
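As background for the abstract above: RP-MGDA builds on MGDA's common-descent-direction subproblem, which selects the minimum-norm point in the convex hull of the per-objective gradients. The sketch below is a generic illustration of the two-objective special case and is not taken from the paper's code; `g1` and `g2` stand for the gradients of the two objectives with respect to the shared variables.

```python
import numpy as np

def mgda_direction_two_objectives(g1, g2):
    # Min-norm element of conv{g1, g2}: minimize ||a*g1 + (1-a)*g2||^2 over a in [0, 1].
    diff = g1 - g2
    denom = float(diff @ diff)
    a = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    # Common descent direction; it is zero exactly when some convex combination of the
    # gradients vanishes, i.e., the Pareto-stationarity condition for these two objectives.
    d = -(a * g1 + (1.0 - a) * g2)
    return d, a
```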
[ "Multi-Objective Optimization", "Machine Learning", "Deep Learning", "Multi-task Learning", "Gradient-Based Optimization" ]
Accept (Poster)
https://openreview.net/pdf?id=Bl3e8HV9xW
https://openreview.net/forum?id=Bl3e8HV9xW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xoVo1KegEU", "pMFDPsnqhI", "ozRdEsruUA", "kwABdKHiEe", "kkJDKFXhWT", "jupmVGsb4u", "fF1j27HgBC", "ZkHs39hVqn", "Z779elfzK2", "YM40xEBxYR", "Y0NSCvkZoI", "WmfXmzE625", "WDQSpZ8aS4", "TP3707xGLV", "Qy7q9C8WY0", "K7YOmXpdT2", "JPbCDPVILh", "HkLGX2rPHm", "F0F3Z0GZHo", "AIBfCEy28c", "99Dfcp9rM1", "3u52ixcIIU" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731987332621, 1732991635963, 1730650810437, 1731986872026, 1737523923558, 1732390643782, 1731986527924, 1731985277827, 1730607712318, 1731985611465, 1730597008297, 1733088543594, 1734148156496, 1729702180061, 1731987477597, 1732378184746, 1732991807797, 1733076807200, 1731987003516, 1732378707010, 1731985122378, 1731987736751 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Reviewer_MUMM" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Reviewer_EgBd" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Reviewer_Vj3K" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Area_Chair_gxvh" ], [ "ICLR.cc/2025/Conference/Submission8647/Reviewer_h4fS" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Reviewer_MUMM" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Reviewer_EgBd" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ], [ "ICLR.cc/2025/Conference/Submission8647/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for the feedback\", \"comment\": [\"**W1. Stochastic variant of RP-MGDA. Can the RP-MGDA algorithm be adapted into a corresponding stochastic variant, and if so, would it still handle sparse parameter issues effectively after randomization?**\"], \"a\": \"Thank you for the questions.\\n - (1) As pointed out in Section 5 (see Lemma 2, Theorem 1 and Figure 3), RPS provides a sharper characterization than PS (PO $\\\\subseteq$ RPS $\\\\subseteq$ PS), meaning it is 'closer' to the desired Pareto Optimal set and narrows down sub-optimal PS solutions. Theorem 2 (also empirically verified in Section 7.2.3 and Appendix A, Example 3) further demonstrates that RPS is guaranteed to achieve PO under more relaxed assumptions, whereas PS is not. Together, these results indicate that RPS is a more effective proxy to pursue than PS.\\n - (2) The challenge lies in finding the *correct* partitioning of variables, which is crucial because an overly fine-grained partition may result in Generalized Pareto Stationary (GPS) with respect to that partition failing to imply Pareto Stationary (PS), and block-wise MGDA may fail (see Example 2). 
Conversely, an overly coarse partition yields no benifit and suffers from the drawback of PS (see Example 1). \\n \\n We successfully address this challenge by proposing REFINED_PARTITION() procedure which leverages the function-variable dependency structure (represented as bipartite graph) through cycle detection and variable merging, to identify a *valid* refined partition. This systematic procedure is theoretically guaranteed by Theorem 1, whose proof relies on the final bipartite graph being acyclic.\"}", "{\"comment\": \"Dear reviewer,\\n\\nThe discussion period is due on Dec 2nd, could you please provide some feedback or continue the discussion if you have further questions? \\nWe really appreciate your time and effort in reviewing this paper and want to make sure that our work is correctly understood. We spent the most time trying to clarify and address your questions, and your response will be of great importance to us.\\n\\nBest regards,\\nThe authors.\"}", "{\"summary\": \"By leveraging refined variable partitioning, this work introduces a novel solution concept, Refined Pareto Stationarity (RPS), and a variant of the Multiple Gradient Descent Algorithm (MGDA), termed RP-MGDA, to address limitations of Pareto stationarity and MGDA in multi-objective optimization. RPS provides a sharper characterization than Pareto stationarity, and RP-MGDA is proven to converge to RPS. Comprehensive experiments demonstrate the superior performance of RP-MGDA over vanilla MGDA.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"$\\\\textbf{S1:}$ This work reveals limitations of Pareto stationarity and Multiple Gradient Descent Algorithm (MGDA), commonly used in multi-objective optimization, particularly when the function-variable dependency structure is sparse. The authors provide illustrative examples showing that a variable partitioning scheme is crucial for addressing these limitations.\\n\\n$\\\\textbf{S2:}$ Building on this insight, the authors introduce a novel solution concept, Refined Pareto Stationarity (RPS), which corresponds to the finest (or refined) variable partition aligned with the function-variable dependency structure. RPS is shown to be a sharper characterization than Pareto stationarity, from both the necessary and sufficient conditions perspective for Pareto optimality. \\n\\n$\\\\textbf{S3:}$ Utilizing this refined variable partitioning, the authors propose a variant of MGDA, termed RP-MGDA, which converges to RPS and is theoretically more efficient. Empirical results further demonstrate the superior performance of RP-MGDA compared to vanilla MGDA.\", \"weaknesses\": \"$\\\\textbf{W1:}$ If the full gradient is used in (11) of Definition 4 (Generalized Pareto Stationarity), then any Pareto stationary point with respect to any variable partition would also be Pareto stationary with respect to the trivial partition. This suggests that the current version of Definition 4 is not consistent with the motivation behind RPS or Algorithm 1 (RP-MGDA). It seems that replacing the full gradient $\\\\nabla \\\\bf{f}^{P_j}$ with the partial gradient $\\\\nabla_{\\\\bf{w}_{P_j}} \\\\bf{f}^{P_j}$ could address this issue. 
Please let me know if my understanding is incorrect, as I may adjust my ratings based on your response.\\n\\n$\\\\textbf{W2:}$ Since RP-MGDA and the vanilla MGDA have a similar convergence rate, the authors discuss the computational complexity of RP-MGDA at the end of Section 6 and state that RP-MGDA is theoretically cheaper than the vanilla MGDA, aside from the one-time overhead in line 1. However, the supporting argument lacks detail. A more thorough complexity analysis, including explicit computational cost comparisons for solving the dual subproblem in both algorithms, would be beneficial in clarifying the computational savings.\", \"questions\": \"Apart from the questions raised in the Weaknesses section, I have a few additional questions:\\n\\n$\\\\textbf{Q1:}$ In Theorem 3, it appears that $\\\\eta \\\\leq \\\\min_i \\\\frac{1}{L_i}$ may be insufficient, as it seems that (32) cannot be derived directly from (31) using (33). Could you please verify (32) in the proof of Theorem 3 on Page 16? If my observation is correct, one alternative might be to set $\\\\eta \\\\leq \\\\min_i \\\\frac{1}{2L_i}$ in Theorem 3 and adjust the proof accordingly. \\n\\n$\\\\textbf{Q2:}$ In the experiments of Section 7.2, is a stochastic variant of RP-MGDA used? Additionally, when ReLU is applied, what type of derivative is used in the implementation of RP-MGDA? Could this method be extended to a stochastic setting, and what significant challenges might arise in making such an extension?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"**W3. Why in lines 467-468, function-variable dependency structure is between objectives and specific layers? Shouldn't the dependency be between objectives and part of weights in each layer for a sparse model? Discuss the simple example I proposed and how to do the partition.**\"], \"a\": [\"Thank you for the question.\", \"As noted in our previous response to W1, the choice of 'variables' is flexible and context-dependent, ranging from fine-grained to coarse-grained. In Hierarchical Classification (Section 7.2.2), we consider variables to be feature extraction layers in BranchNet, as this best captures the relevant dependency structure for this problem setup.\", \"It is certainly possible to define variables differently\\u2014for example, by treating each individual weight as a variable. However, following the REFINED_PARTITION() procedure, one will find that these weights will need to be merged due to their inclusion in certain cycles (in the exact same way as illustrated in Example 2, Figure 2 Right). This outcome, unsurprisingly, aligns with common sense and the underlying principles of our framework, which, in turn, supports the soundness of the REFINED_PARTITION() procedure.\", \"__Further clarification on 'Sparsity'.__ Below, we want to clearly distinguish between the following two concepts, as this distinction is crucial for accurately understanding our paper:\", \"(1) __Sparse model weights:__ This refers to models where many of the final optimal weights are zero after training is complete (e.g., in Lasso regression). Importantly, during training, the objective function still depends on all weights.\", \"(2) __Sparse function-variable dependency:__ This concept pertains to the structure of the problem itself, observed externally, where 'variables' typically correspond to neural network modules in empirical deep learning problems. 
The dependency structure is naturally determined even before training begins.\", \"Our paper specifically focuses on the __second__ concept, and we do not claim or imply sparsity in model weights in our work. Next, we walk through the Hierarchical Classification (HC) example to further explain the concept:\", \"In our HC setup, the dependency structure is inherently determined by the architecture of BranchNet and where we put the 'off-ramp' classifiers. Each 'off-ramp' classifier corresponds to a different classification objective (from easy to difficult). For example, the first objective $f_1$ uses only the representation produced after FEX-Layer 1 and does not interact with the 2nd or 3rd layers. Consequently, it depends solely on $\\\\mathbf{w}_1$. Similarly, $f_2$ does not interact with the 3rd layer, thus depending on $\\\\mathbf{w}_1$ and $\\\\mathbf{w}_2$, but not $\\\\mathbf{w}_3$. Finally, $f_3$ uses the output after FEX-Layer 3, thus depending on all the modules mentioned before, i.e. $\\\\mathbf{w}_1$, $\\\\mathbf{w}_2$ and $\\\\mathbf{w}_3$.\", \"It is now clear why the dependency structure is represented as shown in Figure 5 (Middle). We can then use REFINED_PARTITION() to explore whether a finer partition can be identified for this problem structure. While this Ladder structure is not particularly 'sparse', we can still find a finer partition.\", \"We appreciate the interesting setting you have proposed and would be happy to discuss it further.\", \"First, we follow your proposed dependency literally, where each objective $f_i$ depends on $[w_{1,i},\\\\ldots,w_{n,i}]$ only. Then in our refined partition framework, every variable (treating each weight $w_{k,i}$ as a variable) can be optimized separately using gradient descent, with the corresponding refined partition being the finest partition. Note that for this dependency structure, even MGDA is not needed since no two objectives share the same variable. Indeed, this is essentially equivalent to $m$ separate single-objective optimization problems since no variable is shared at all.\", \"This dependency structure seems somewhat unconventional, especially regarding how the proposed dependency might be enforced. For example, if a function, say $f_i$, depends on the output of the $n$-th layer, it would naturally involve all weights in the preceding layers. It is unclear how it could depend solely on $w_{k,i}$ for the $k$-th layer.\", \"We would like to reiterate our earlier clarification that 'sparsity' refers to sparse function-variable dependencies, not sparse model weights, to ensure we are aligned and avoid any potential misunderstandings.\", \"We kindly ask for clarification on the context of the objective functions being discussed or an example to help us better understand your perspective. If our current interpretation is incorrect, we would appreciate more details\\u2014for instance, whether the objective functions are loss functions, what representations they depend on, or an example of how the proposed dependency could hold true. We would be happy to discuss further based on this additional information.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We sincerely thank all the reviewers for their time, effort, and valuable feedback on our paper. 
We appreciate the constructive comments and insightful questions, which have helped us further clarify and strengthen our work.\\n\\nWe are particularly encouraged by the positive feedback from Reviewers MUMM and h4fS, who both acknowledged the novelty and strengths of our contributions. Notably, Reviewer MUMM updated their score to 8 (accept, good paper), highlighting their appreciation for the refined partitioning approach and the introduction of Refined Pareto Stationarity (RPS) as a sharper solution concept.\\n\\nIn our individual responses, we have addressed questions and concerns raised by the reviewers. We hope these clarifications resolve any ambiguities and further demonstrate the value of our contributions.\\n\\nWe summarize the changes made in the latest revision (highlighted in purple) as follows:\\n1. Added experiments comparing RP-MGDA and MGDA with *stochastic* gradients to address questions from Reviewers MUMM and Vj3K (Appendix B.3.4).\\n2. Added experiments comparing PCGrad with and without refined partitioning, further supporting the RP approach and addressing Reviewer EgBd's question (Appendix B.3.5).\\n3. Included experiments using ELU as the activation function (ensuring differentiability everywhere) to address Reviewer MUMM's question (Appendix B.3.3).\\n4. Provided pseudocode for the REFINED_PARTITION() procedure in Appendix A.1, offering detailed reference for the process described in the main paper (Lines 285\\u2013286).\\n5. Corrected some notation issues and added further justification to the proof of Theorem 3 (Page 16), addressing Reviewer MUMM's question.\"}", "{\"comment\": [\"**W2: what does \\\"finer\\\" or \\\"coarser\\\" partition mean?**\"], \"a\": \"Thank you for this question. A partition $\\\\mathcal{P}$ of a set $\\\\\\\\{1, 2, \\\\ldots, d \\\\\\\\} $ is a collection of subsets $P \\\\subseteq \\\\\\\\{1, 2, \\\\ldots, d \\\\\\\\}$ such that every element is included in *exactly* one subset $P \\\\in \\\\mathcal{P}$. We call a partition $\\\\mathcal{P}$ finer than another partition $\\\\mathcal{Q}$ (equivalently, $\\\\mathcal{Q}$ is coarser than $\\\\mathcal{P}$), if for any $Q \\\\in \\\\mathcal{Q}$ there exists a $P \\\\in \\\\mathcal{P}$ such that $P \\\\subseteq Q$; see Remark 2 in Appendix A (line 742-744). In other words, a finer partition consists of more subsets, with each subset being smaller. For example, let $$ \\\\mathcal{P} = \\\\\\\\{ \\\\\\\\{1\\\\\\\\}, \\\\ldots, \\\\\\\\{d\\\\\\\\} \\\\\\\\}, ~~\\\\mathcal{Q} = \\\\\\\\{\\\\\\\\{1, \\\\ldots, d\\\\\\\\} \\\\\\\\}.$$\\n Then, $\\\\mathcal{P}$ is finer than $\\\\mathcal{Q}$. In fact, this $\\\\mathcal{P}$ is finest and this $\\\\mathcal{Q}$ is coarsest (referred to as the trivial partition).\\n\\n In this terminology, MGDA and other works in MOO [1] are typically formulated using the coarsest variable partition $\\\\mathcal{Q}$, while we demonstrate in this paper that a *proper* finer partition of variables (when it exists) is strictly superior to using the coarsest partition.\\n\\n - There are two perspectives to understand the refined-partition idea: top-down and bottom-up.\\n - The __top-down perspective__ starts with the coarsest partition where Theorem 1 trivially holds (e.g., MGDA in Example 1), and then identifies the finest-grained partition possible while ensuring Theorem 1 still holds. This approach refines the traditional PS concept, where PS corresponds to the special case of the trivial partition $\\\\mathcal{Q}=[d]$. 
This perspective is more suitable for theoretical purposes.\\n - The __bottom-up perspective__ starts with the default of treating every variable separately (e.g, coordinate-wise MGDA in Example 2), where Theorem 1 is not guaranteed. Variables must then be iteratively merged based on function dependencies, until a coarser partition is reached where Theorem 1 holds. This perspective is more suited for constructive purposes.\\n \\n[1] Fernando, H. D., H. Shen, M. Liu, S. Chaudhury, K. Murugesan, and T. Chen (2023). \\u201cMitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Approach\\u201d. In: International Conference on Learning Representations.\"}", "{\"comment\": [\"**Q2. implementation details on RP-MGDA: did we use stochastic variant in Section 7.2 benchmark experiments? What is the gradient for ReLU? Discuss the potential extension and challenges to stochastic setting.**\"], \"a\": \"Thank you for this comment. We use the deterministic versions of both MGDA and RP-MGDA in our benchmark experiments to align with our methodology and theoretical framework. This ensures the results are not confounded by other less relevant factors.\\n - Extension to stochastic setting and challenges: \\n It is natural to propose the stochastic counterpart of RP-MGDA (i.e. simply replacing deterministic gradients with stochastic ones). \\n However, analyzing this extension (e.g., convergence properties) requires substantial effort. \\n - Although implementing a stochastic version of MGDA is straightforward, there is a developing body of literature addressing various potential issues, such as biased descent directions and the need for additional assumptions to establish convergence ([2] [3]).\\n - We see this as a promising direction to explore, especially given the generality of the proposed RPS solution concept, which is not limited to any specific algorithm. However, there are challenges to consider:\\n\\n (i) In the stochastic setting, the descent property may not hold in every iteration, potentially undermining RP-MGDA's effectiveness as a post-processing refinement algorithm (Figure 4 Left). In particular, the commonly used 'compact sublevel set' argument (due to descending of the algorithm) no longer applies.\\n\\n (ii) Addressing the bias in stochastic descent directions without introducing additional dependencies between functions and variables could require extra caution. Note that for Refined Partitioning, it is crucial to preserve sparsity as much as possible to ensure the most effective variable partitioning.\\n - We have added experiments using stochastic gradients for both MGDA and RP-MGDA; see Appendix B (Pages 24 and 25, Figures 21\\u201323). The empirical results continue to validate the superiority of the refined partitioning approach, with the benefits being arguably even greater in the stochastic setting. However, we note that RP-MGDA and MGDA no longer guarantee descent in every iteration (see Figures 22 and 23). The performance of both methods drops slightly, potentially due to bias in the descent direction, which supports our conjecture regarding the associated challenges.\\n - We use the standard PyTorch package for automatic differentiation, where the derivative of ReLU is defined to be $0$ at $0$ (see [here](https://discuss.pytorch.org/t/gradient-of-relu-at-0/64345)). This is standard in deep learning, including MGDA-related ones (e.g., MTL-MOO [4], PCGrad [5]), although technically ReLU is not differentiable at 0 (a subgradient can be defined). 
A rigorous analysis may require multi-objective subgradient methods, e.g., [6], which would be an interesting future extension. We have also added additional experiments with ELU replacing ReLU as activation (ELU is differentiable everywhere), and verified that the experiment results and conclusions are similar (see Appendix B, Figure 18-20).\\n\\n\\n[2] Fernando, H. et al. (2023). \\\"Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Approach\\\". In International Conference on Learning Representations.\\n\\n[3] Liu, S., & Vicente, L. N. (2021). \\\"The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning\\\". Annals of Operations Research, vol. 339, pp. 1119\\u20131148.\\n\\n[4] Sener, O. and V. Koltun. (2018). \\\"Multi-Task Learning as Multi-Objective Optimization\\\". In: Advances in Neural Information Processing Systems.\\n\\n[5] Yu, T., Kumar, S., Gupta, A., Levine, S., Hausman, K., & Finn, C. (2020). \\\"Gradient Surgery for Multi-Task Learning\\\". Advances in Neural Information Processing Systems, 33, 5824-5836.\\n\\n[6] Da Cruz Neto, J.X., Da Silva, G.J.P., Ferreira, O.P. et al. (2013). \\\"A subgradient method for multiobjective optimization\\\". Comput Optim Appl 54, 461\\u2013472.\"}", "{\"summary\": \"This paper studies the multi-objective optimization problem and reveals the limitation of the widely used metric, Pareto stationarity. Accordingly, the authors propose the refined Pareto stationarity sandwiched between Pareto stationarity and Pareto optimality. Then, the authors verify the benefit of their RP-MGDA empirically and theoretically.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper has a solid standing point that Pareto stationarity could be short in complex and sparse settings.\\n2. Experiments explore multiple variable dependency structures.\", \"weaknesses\": \"1. Some important parts are confusing in the paper. 1) How can we get the function-variable dependency structure? This is important because the dependency structures vary in different settings. 2) Algorithm REFINED_PARTITION is not clear. A more detailed explanation of it should be added. In the current version, I cannot say I fully understand it.\\n2. Since I did not fully understand the partition, what does \\\"finer\\\" or \\\"coarser\\\" partition mean? \\n3. According to the illustration of the function-variable dependency structure and the experimental setup in lines 467-468, I understand this function-variable dependency structure as a dependency between objectives and \\\"specific layers\\\" in the model. However, I do not feel this setup fits my sense. From my point of view, a dependency should be between objectives and some part of weights in each layer if the model is sparse (Correct me if I am wrong). A simple example would be this: suppose we have m objectives and the model is a n-layer neural network. Then the objective $i$ is dependent on weights $[w_{1, i},..., w_{n,i}]$ where the first index of weights represents the layer index and the second index of weight represents the objective index. Why do the authors consider the dependency in your way? Also, if consider my settings, how to do the partition?\\n4. Though this paper has considered multiple dependency structures in the experiments, tasks, and datasets are a bit easy. Previous related papers compare the performance of Cityscapes, NYU-v2, CelebA, etc. 
Do authors consider checking these experiments? In addition, this paper only compares MGDA and RP-MGDA, which is short in the number of methods. Lastly, can the partition be added to other MGDA-based methods?\", \"questions\": \"Please check the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your detailed feedback. We have addressed your questions below and provided an important clarification on 'sparsity' in our response to W3.\", \"comment\": [\"**W1. Some important parts are confusing in the paper. 1) How can we get the function-variable dependency structure? 2) Procedure REFINED_PARTITION() is not clear.**\"], \"a\": \"Thank you for the questions. We aim to provide as much clarification as possible below, and please let us know if our answer addresses your concern.\\n - (1) The function-variable dependency structure is part of the problem definition: when implementing any objective function, it is already clear which variable(s) it depends on\\u2014or does not. In other words, the adjacency matrix $A$ representing the function-variable dependencies of the problem is given. The challenge is whether and how to effectively exploit this given structure for better optimization. In synthetic examples, the dependencies are explicitly defined by the function expressions. In empirical problems, the dependency structure is inherently dictated by the problem setup----informally, by what is shared and what is not. For instance, in our Personalized Federated Learning setup, a global neural network model is shared across users, whereas each user's local personalization model is unique to their respective objective, see Section 7.2.1 and Appendix B.3.1 for details. \\n\\n For an intuitive understanding, __we recommend referring to Example 1 and 2 in Section 4, particularly Figure 2__. Furthermore, please refer to Section 7.2 where we provide three detailed examples demonstrating various dependency structures in real-world applications, __see Figure 5 for a graphical illustration__. \\n\\n - It is worth noting that what constitutes 'variables' is flexible and context-dependent; it can be fine-grained or coarse-grained. For instance:\\n - In Example 1 and 2, variables are scalars.\\n - In Personalized Federated Learning (Section 7.2.1), variables are personalized neural network models and a global neural network model.\\n - In Hierarchical Classification (Section 7.2.2), variables are feature extraction layers of the BranchNet.\\n - In MOL-PI (Section 7.2.3), variables are (grouped) linear regression weights. \\n - (2) Briefly, REFINED_PARTITION() is a repeated process that finds any cycle (if exists) in the underlying bipartite graph (see Eq (12) for the adjacency matrix A representation of the bipartite graph), then groups (aka. 'merges') all variables appeared in that cycle together and thus contracts the graph. The process is repeated until the final graph is acyclic. __Please refer to Figure 1 for an illustration__. We have also added the pseudo-code for REFINED_PARTITION() in Appendix A, Page 16, Line 848-863. Below, we provide a simplified plain-language description of the procedure.\\n\\n \\n **Algorithm: REFINED_PARTITION()** \\n **Input:** A - Adjacency matrix representing the function-variable dependencies (bipartite graph). \\n **Output:** $\\\\mathcal{P}$ - Refined partition of variables.\\n 1. 
Initialize $\\\\mathcal{P}$ \\u2190 { {$w_1$}, {$w_2$}, ..., {$w_d$} } (each variable starts in its own group).\\n 2. While a cycle exists in the bipartite graph represented by A:\\n\\n 2.1. Detect a cycle $C$ in the bipartite graph.\\n\\n 2.2. Merge all variable nodes in $C$ into a single group (e.g., merge {$w_1$} and {$w_2$} into {$w_1, w_2$}).\\n\\n 2.3. Update A to reflect the contraction of the graph.\\n\\n 2.4. Replace the original groups in $\\\\mathcal{P}$, corresponding to the variables in $C$, with the merged group.\\n\\n 3. Return $\\\\mathcal{P}$.\\n\\n On a high level, REFINED_PARTITION() is a key systematic approach we propose to identify a *valid* partition of variables into 'blocks' (referred to interchangeably as 'groups' or 'subsets') for a given problem. This partition enables a sharper characterization (i.e. RPS) than Pareto stationarity, and allows for block-wise MGDA (i.e. RP-MGDA) to function without the risk of converging to degenerate solutions (see Example 2 for a counterexample). \\n\\n Otherwise, if a partition is too fine, Generalized Pareto Stationary with respect to that partition is not guaranteed to imply Pareto Stationary and block-wise MGDA may fail (see Example 2); conversly, if a partition is too coarse, no benifits are gained, and we will suffer from the drawback of Pareto Stationarity (see Example 1).\"}", "{\"summary\": \"This work highlights the limitations of Pareto stationarity when dealing with sparse function-variable structures, offering compelling examples to illustrate these constraints. To overcome these challenges, they introduce a new solution concept called Refined Pareto Stationarity (RPS) and present an efficient partitioning algorithm to automatically uncover function-variable dependencies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work is presented with good writing style, where the summarized problems with detailed explanations make it easy for readers to understand the problem addressed in this article.\\n\\n2. The set defined by Pareto stationarity is broader than that defined by Pareto optimal, a detail that previous algorithms based on Pareto stationarity have overlooked. This paper faces this gap and introduces algorithms to overcome this limitation, representing a significant improvement.\\n\\n3.Theorems 1 and 2 provide a detailed discussion on the relationships among Refined Pareto Stationarity, Pareto optimality, and Pareto stationary points. Additionally, Theorems 3 and Corollary 1 offer theoretical guarantees for the specific convergence rate of RP-MGDA.\\nThe proof seems solid but I have not carefully checked the whole Appendix.\\n\\n4.The performance of MGDA and RP-MGDA was compared across various scenarios, demonstrating the effectiveness and versatility of RP-MGDA.\", \"weaknesses\": \"1.Computational Cost. Neither MGDA nor RP-MGDA seems well-suited to large-scale machine learning problems, as stochastic variants (e.g. MOCO, MODO) are often more computationally efficient in practice. Can the RP-MGDA algorithm be adapted into a corresponding stochastic variant, and if so, would it still handle sparse parameter issues effectively after randomization?\\n\\n2.Theoretical challenges. Compared to the RPS concept proposed in this paper, Pareto stationary points appear to be a clearer optimization target. What changes does the Refined Pareto Stationarity bring to the theoretical proof? 
What challenges arise from these changes, and how are you addressed?\\n\\n3.Illustrations on Pareto front. The paper could provide a more in-depth analysis of Pareto optimal, Pareto stationarity, and Refined Pareto Stationarity. For example, the experiments could visualize the fronts corresponding to each of these concepts. Additionally, plotting the convergence trajectories of MGDA and RP-MGDA would further emphasize the effectiveness of RP-MGDA.\", \"questions\": \"1.Lack of practical examples. Could you provide some real-world examples of sparse function-variable structures? For instance, cases that exist in multi-objective federated learning or reinforcement learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period is nearing its end, we kindly look forward to your feedback on our responses. We wanted to share that the other reviewers who have responded have positively updated their scores after reviewing our rebuttal and the additional clarifications and experiments we provided. We would greatly appreciate your feedback to ensure a comprehensive evaluation of our work.\\n\\nThank you for your time and effort.\\n\\nBest regards,\\nThe Authors\"}", "{\"metareview\": \"This paper studies gradient-based multi-objective optimization (MOO). In particular, the authors first revealed some limitations of the Pareto Stationarity, and proposed a novel Refined Pareto Stationarity (RPS) and the associated RP-MGDA algorithm. RPS provides a sharper characterization than Pareto Stationarity and RP-MGDA is proved to converge to RPS. Numerical experiments demonstrated the advantages of RP-MGDA over the original MGDA. The authors are advised to incorporate various concerns that the reviewers raised into the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Added numerical experiments and clarified some proof steps.\"}", "{\"summary\": \"Multi-objective optimization (MOO) have many applications in machine learning. Multiple Gradient Descent Algorithm (MGDA) is essential to solve MOO, converging to a Pareto stationary solution, which serves as a first-order necessary condition for Pareto optimality. This work demonstrates that the Pareto Stationarity (PS) has limitations when sparse function-variable dependencies exist, and to address it, they propose a concept named Refined Pareto Stationarity (RPS). With a suitable designed partitioning procedure, they propose an optimization algorithm RP-MGDA, which is effective in both the theory and experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The contributions of this paper are very clear. They provide a refined concept, and based on this, they propose a novel MOO algorithm.\", \"Under some convexity, they prove RPS reduces exactly to Pareto optimality, whereas the widely-used PS does not, suggesting that the new solution concept is more superior.\", \"A more powerful algorithm is proposed, and the advantages are supported in both the convergence and the experiments.\"], \"weaknesses\": [\"The aim of the partition of variables should be clearly interpreted in the Introduction.\", \"I apologize that I am not very familiar with this topic and the relevant references. 
I will carefully refer to the comments of other reviewers.\"], \"questions\": \"Is variable sparsity a common phenomenon in MOO?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"**W3. Illustrations on Pareto front**\"], \"a\": \"In Experiment Section 7.2, we explored three examples with different function-variable dependency structures, derived from real-world applications:\\n\\n - Personalized federated learning [1,2] (MTL structure)\\n - Hierarchical classification [3,4] (Ladder structure)\\n - Multi-objective learning with partial information [5] (Chain structure)\\n \\n The illustrations of the problem structures and the corresponding variable partitioning are shown in Figure 5, with detailed setups for these experiments discussed in Section 7.2 and Appendix B. Experimental results have demonstrated the effectiveness of Refined Partitioning. While it is clear that we cannot exhaustively cover all real-world MOO problems with non-trivial function-variable dependency structures in this paper, the examples presented serve as a \\u2018proof of concept\\u2019. They provide a foundation for practitioners to apply refined partitioning to other realistic MOO problems of interest with non-trivial structures. \\n\\n Note that our framework is general and capable of handling any function-variable dependency structure, including those perceived as 'dense.' In the worst-case scenario (where the only valid partition is $\\\\mathcal{Q}=[d]$), it reduces RPS to PS and RP-MGDA to MGDA, serving as a baseline.\\n \\n[1] Liang, Paul Pu, et al. \\\"Think locally, act globally: Federated learning with local and global representations.\\\" arXiv preprint arXiv:2001.01523 (2020).\\n\\n[2] Tan, Alysa Ziying, et al. \\\"Towards personalized federated learning.\\\" IEEE transactions on neural networks and learning systems 34.12 (2022): 9587-9603.\\n\\n[3] Zhu, Xinqi, and Michael Bain. \\\"B-CNN: branch convolutional neural network for hierarchical classification.\\\" arXiv preprint arXiv:1709.09890 (2017).\\n\\n[4] Seo, Yian, and Kyung-shik Shin. \\\"Hierarchical convolutional neural networks for fashion image classification.\\\" Expert systems with applications 116 (2019): 328-339.\\n\\n[5] Liu, Yang, et al. \\\"Vertical federated learning: Concepts, advances, and challenges.\\\" IEEE Transactions on Knowledge and Data Engineering (2024).\"}", "{\"title\": \"Thank you for the response. I raise my rating to 8.\", \"comment\": \"Thank you for your response and the additional experimental results, which effectively addressed my concerns. I appreciate the refined partition approach and the introduction of the Refined Pareto Stationarity solution concept. Accordingly, I raise my rating to 8 (accept, good paper).\"}", "{\"comment\": \"Dear reviewer,\\n\\nThe discussion period is due on Dec 2nd, could you please provide some feedback or continue the discussion if you have further questions? \\nWe really appreciate your time and effort in reviewing this paper. Your response is important to us. \\n\\nBest regards,\\nThe authors.\"}", "{\"comment\": \"Thanks for the detailed answers. I have raised my score.\"}", "{\"comment\": [\"**W4. Though this paper has considered multiple dependency structures in the experiments, tasks, and datasets are a bit easy. Previous related papers compare the performance of Cityscapes, NYU-v2, CelebA, etc. Do authors consider checking these experiments? 
In addition, this paper only compares MGDA and RP-MGDA, which is short in the number of methods. Lastly, can the partition be added to other MGDA-based methods?**\"], \"a\": \"Thank you for your comments. We would like to address your points as follows:\\n - Regarding the choice of tasks and datasets: While we acknowledge that most multi-task learning papers have explored datasets such as Cityscapes, NYU-v2, and CelebA, our primary focus in this work is on demonstrating the effectiveness of the refined partitioning framework for multi-objective optimization across *various* dependency structures, rather than the complexity of specific tasks or datasets for multi-task learning. The dependency structures we selected were designed to align closely with the theoretical contributions of our paper and to serve as clear, illustrative examples. Also note that the MTL dependency structure in [2] is essentially explored through our PFL experiments which share the same structure. Expanding to more challenging datasets like Cityscapes or NYU-v2, while valuable, is not essential to validate our primary claims. However, we appreciate the suggestion and do consider incorporating such experiments in the revision or as part of an extended study if they are found to provide additional meaningful insights.\\n - On the number of methods compared: The goal of our experiments is to explore and validate the effectiveness of the refined partitioning approach under various dependency structures. Since we use MGDA as a solid starting point and rigorously analyze its refined-partitioning variant (e.g., Theorem 3), this comparison is the most fair and relevant for evaluating our contributions. In contrast, directly comparing RP-MGDA to methods like PCGrad [3] would not be as fair or meaningful in this context. On the other hand, we do agree that additional comparisons, such as evaluating another method (e.g., PCGrad) with and without refined partitioning, could further enhance confidence in our RP approach. __In the revision, we included experiments comparing PCGrad+ (a modified version of PCGrad ensuring it is a common descent algorithm) with and without Refined Partitioning.__ The results demonstrate that RP improves empirical convergence and solution quality (see Appendix B.3.5, Pages 25\\u201326).\\n - On adding partitions to other MGDA-based methods: Thank you for this question. The refined partition approach and the RPS solution concept are designed to be general and could be extended to other gradient-based descent methods that converge to Pareto Stationarity (see the discussion in Section 6 Line 313-318). We agree with you that studying these extensions is a valuable and interesting direction, but it goes a bit beyond the scope of this work. We extensively study the widely-used foundational MGDA algorithm as a starting point, and we hope this work will inspire further research in this direction.\\n \\n[2] Sener, O. and V. Koltun. (2018). \\\"Multi-Task Learning as Multi-Objective Optimization\\\". In: Advances in Neural Information Processing Systems.\\n\\n[3] Yu, Tianhe, et al. \\\"Gradient surgery for multi-task learning.\\\" Advances in Neural Information Processing Systems 33 (2020): 5824-5836.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for recognizing our contributions. 
We greatly appreciate your time and effort in reviewing our work and are glad that the additional experiments addressed your concerns.\"}", "{\"title\": \"Thank you for your insightful comments and we answer your questions below.\", \"comment\": [\"**W1. For Definition 4 (11) in the main paper, should we use partial gradient?**\"], \"a\": \"Thank you for the question. In the revision, we've added further justification for this step. We recall that for a 1-strongly convex (not necessarily differentiable) function $h$ (i.e., $h-\\\\tfrac12 \\\\|\\\\cdot\\\\|^2$ is convex), the following inequality holds, see e.g. Nesterov (2018, Corollary 3.2.3, p. 210): $$\\\\forall x, ~~ h(x) \\\\geq h(x^*) +\\\\tfrac{1}{2} \\\\|x-x^*\\\\|^2,$$\\n where $x^*$ is the unique minimizer of $h$.\\n\\n We apply the above inequality to the definition of $\\\\mathbf{d} _ {P_j}^t$: \\n $$\\\\mathbf{d} _ {P_j}^t = \\\\mathop{\\\\mathrm{argmin}} _ {\\\\mathbf{d} _ {P_j}} ~ \\\\max _ {i \\\\in F_j} ~ \\\\left[\\\\nabla _ {\\\\mathbf{w} _ {P_j}}f_i(\\\\mathbf{w}^t) \\\\cdot \\\\mathbf{d} _ {P_j} + \\\\tfrac{1}{2} \\\\|\\\\mathbf{d} _ {P_j}\\\\|^2\\\\right].\\n $$\\n Indeed, the objective above is 1-strongly convex in $\\\\mathbf{d} _ {P_j}$ (note that the norm term does not depend on $i$, so it can be pulled out of the max operator). Setting $x = \\\\mathbf{0}$ and $x^*= \\\\mathbf{d} _ {P_j}^t$ we obtain:\\n\\n \\\\begin{align}\\n 0 \\\\geq \\\\max _ {i \\\\in F_j} ~ \\\\left[\\\\nabla _ {P_j}f_i(\\\\mathbf{w}^t) \\\\cdot \\\\mathbf{d} _ {P_j}^t + \\\\tfrac{1}{2} \\\\|\\\\mathbf{d} _ {P_j}^t\\\\|^2\\\\right] + \\\\tfrac12 \\\\|\\\\mathbf{0} - \\\\mathbf{d} _ {P_j}^t \\\\|^2.\\n \\\\end{align}\\n Thus, rearranging we obtain (32) from (31), i.e. Line 815 to 819 in Appendix A.\"}", "{\"title\": \"Thank you for your positive feedback.\", \"comment\": [\"**W1. The aim of the partition of variables should be clearly interpreted in the Introduction.**\"], \"a\": \"Yes, there are structured problems in MOO with sparse dependencies (e.g., see [1]). However, the interpretation of what constitutes 'common' and 'sparse' may vary depending on individual perspectives. Please refer to the examples provided in Section 7.2 of our experiments, which illustrate dependency structures in various problems, including PFL, HC, and MOL-PI. Figure 5 provides a visual representation of these structures. Note that our framework is general and capable of handling any function-variable dependency structure, including those perceived as 'dense.' In the worst-case scenario (where the only valid partition is $\\\\mathcal{Q}=[d]$), it reduces RPS to PS and RP-MGDA to MGDA, serving as a baseline.\\n\\n[1] Sener, O. and V. Koltun. (2018). \\\"Multi-Task Learning as Multi-Objective Optimization\\\". In: Advances in Neural Information Processing Systems.\"}" ] }
BkwCrIsTbR
Scaling Instruction-tuned LLMs to Million-token Contexts via Hierarchical Synthetic Data Generation
[ "Linda He", "Jue WANG", "Maurice Weber", "Shang Zhu", "Ben Athiwaratkun", "Ce Zhang" ]
Large Language Models (LLMs) struggle with long-context reasoning, not only due to the quadratic scaling of computational complexity with sequence length but also because of the scarcity and expense of annotating long-context data. There has been barely any open-source work that systematically ablates long-context data, nor is there any openly available instruction tuning dataset with contexts surpassing 100K tokens. To bridge this gap, we introduce a novel post-training synthetic data generation strategy designed to efficiently extend the context window of LLMs while preserving their general task performance. Our approach scalably extends to arbitrarily long context lengths, unconstrained by the length of available real-world data, which effectively addresses the scarcity of raw long-context data. Through a step-by-step rotary position embedding (RoPE) scaling training strategy, we demonstrate that our model, with a context length of up to 1M tokens, performs well on the RULER benchmark and InfiniteBench and maintains robust performance on general language tasks.
[ "Large Language Models", "Long Context", "Instruction-Tuning Data" ]
Accept (Poster)
https://openreview.net/pdf?id=BkwCrIsTbR
https://openreview.net/forum?id=BkwCrIsTbR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zJ4Tz3Majr", "ymioOpEtO0", "xskPA5w7gJ", "wcyhKxMU7M", "wZtUWl9Myh", "vJtVaRw1Vb", "v7fKFsetr0", "tDJZ7MrMK1", "syXixCYjj0", "s6ku8vcFJ4", "ls35aP3Xfz", "lpmxAU32uK", "inefeLuyfS", "iZBCrEQFk3", "hewpUDj4i1", "gDMAAn0tL1", "eH07uiVk5c", "duNB6WBiM3", "XLMGag07M6", "XCOyyL52Pp", "VjN1d70bZg", "SCQHIoIxKL", "RVKdAOPMay", "QLj1BGQoX3", "PaLjOpslXQ", "ONWVtl5tlt", "Mv3uWfaBDS", "Mau9bFQcxk", "MD4NiZjC8c", "KtKMxTlhUq", "CYvRqhyTLe", "6KUgk9BXx0", "48pOxeIf0B", "3czby7iTly", "0cxWx9ELvn" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732661521198, 1733016908977, 1732659189410, 1733018092046, 1732660160896, 1733017792338, 1733067459215, 1732670988197, 1732662639346, 1730456882031, 1732685343564, 1730685873183, 1732658425645, 1733024325913, 1733099948898, 1732670759827, 1734855382642, 1730671178837, 1733067094986, 1732661364350, 1733116773432, 1732724703499, 1732670874915, 1732662888188, 1733117965313, 1732670804910, 1732657891961, 1733067024292, 1729222446277, 1732658485119, 1737524081634, 1732658587166, 1732656075831, 1733118100195, 1732658545658 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_LvUc" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_ZqBD" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_L4p6" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_ZqBD" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_L4p6" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Area_Chair_Mh8z" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_DSHN" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_LvUc" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_DSHN" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Reviewer_ZqBD" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ], [ "ICLR.cc/2025/Conference/Submission10855/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks & Initial Response to Reviewer DSHN (W2)\", \"comment\": \"**[W2: RULER evaluations at extended context lengths (like 1M): It would be insightful to include evaluations with context lengths of 1M tokens (or 500k), as most evaluations in the paper are currently under 200k. This would help clarify whether the proposed method maintains performance at significantly larger context lengths.]**\\n\\nThank you for your insightful comment. Below is an overview of the evaluation setup and how it is applied in this paper:\\nRULER serves as the most comprehensive benchmark currently available for evaluating models on arbitrarily long context lengths, including those up to 1M tokens. This benchmark allows us to rigorously test whether our proposed method maintains performance at these extended context lengths. InfiniteBench, on the other hand, focuses on downstream tasks with context lengths up to around 100K tokens, providing a complementary perspective on long-context capabilities. For shorter to medium context tasks (around 10K tokens), we rely on LongBench to evaluate the model\\u2019s generalizability and performance on standard-length contexts.\\n\\nTo address your suggestion, we included evaluations on RULER, with results provided in Table 1. These evaluations measure accuracy across context lengths up to 1M tokens (x-axis). While InfiniteBench is limited to 100K contexts, the combination of RULER and LongBench ensures we comprehensively cover both extremely long and more typical context scenarios.\\n\\nWe hope this explanation clarifies the breadth of our evaluation strategy and how it demonstrates the scalability and robustness of our approach across varying context lengths. Please let us know if additional details or further experiments would be helpful.\"}", "{\"title\": \"Follow up Response to Reviewer DSHN\", \"comment\": \"Thank you for revisiting our work and for raising your evaluation score. We greatly appreciate your recognition of the improvements in our paper and the value of our proposed approach. Your feedback has been invaluable in helping us refine our work, and we are deeply grateful for your support.\\n\\nTo address your suggestions and further enhance the paper, we have updated the manuscript with the following improvements:\\n- **Expanded evaluations**: We now include results on the MMLU benchmark and additional evaluations of the Gradient AI 1M context length model. We show that our models retain its performance on the MMLU benchmark and surpasses the Gradient AI 1M model on all four benchmarks (MMLU, LongBench, InfiniteBench, RULER). \\n- **Smaller generator models**: To demonstrate that our improvements are not solely driven by the larger generator model (Qwen-2-72B-Instruct), we incorporated results using smaller generator models\\u2014Llama-3.1-8B-Instruct and Qwen-2.5-7B\\u2014for training up to 650K context lengths. 
These models achieved gains on InfiniteBench and RULER while maintaining strong performance on LongBench and MMLU, emphasizing the robustness and generalizability of our approach across model sizes.\\n- **Clarifications**: Key sections have been revised for greater clarity and precision, addressing prior points of ambiguity.\\n\\nThank you again for your constructive feedback and for contributing to the strength of our manuscript. We are excited about the improved presentation and look forward to any further input you may have.\"}", "{\"title\": \"Thanks & Initial Response to Reviewer L4p6 (W1 & Q1)\", \"comment\": \"**[W1: A main concern with the proposed work is that for many of the results (e.g. Table 1, 5) the improvements appear to primarily come from a single task Retrieve.KV. Improvements on the other datasets are smaller. While this leads to an overall increase, it's important to understand the importance of this subtask rather than simply the reported overall average increase.]**\\n\\n**[Q1: What is the Retrieve.KV task and are there hypotheses about why this task in particular has large improvements?]**\\n\\nThank you for raising this concern. We address this by providing (1) a detailed analysis of why our model shows the most significant improvement on the Retrieve.KV task, (2) an explanation of why this improvement is crucial for long-context tasks, and (3) evidence that other tasks also exhibit significant improvements.\\n\\nFor InfiniteBench (Table 5), the Retrieve.KV task shows the largest performance gain. However, other tasks also exhibit meaningful improvements. For example, benchmarks like RULER focus on tasks beyond simple key-value retrieval and require reasoning over long contexts. Our model consistently performs well across these diverse benchmarks, as shown in Table 1.\\n\\nThe significant improvement in the Retrieve.KV task stems from the design of our instruction-tuning data. The synthetic data generation process includes a mix of question types that follow a logical order within documents while also revisiting previous documents to draw connections. This approach helps the model learn to associate specific document sections with relevant information, a skill that aligns closely with the requirements of key-value retrieval. This focus is especially relevant for long-context models, where RAG (retrieval-augmented generation) techniques and accurate context memorization are critical for success.\\n\\nThe following table quantifies the percentage increases across eight tasks. While the improvement is most pronounced in Retrieve.KV (107.82%\\u201348.45% across models), other tasks, such as LongBook.QA and LongDialogue.QA, also see notable gains. 
The average increase across all tasks and context lengths is non-negligible, with median increases of 6.85% for 180K, 5.39% for 350K, 7.83% for 650K, and 3.39% for 900K context lengths.\\n\\n| Metric | LLAMA-3.1-8B-Instruct | 180K | 350K | 650K | 1M |\\n|------------------|------------------------:|:----------------|:----------------|:----------------|:----------------|\\n| Retrieve.PassKey | 100 | 100.0 (0%) | 100.0 (0%) | 100.0 (0%) | 100.0 (0%) |\\n| Retrieve.Number | 95.33 | 99.33 (4.19%) | 100.0 (4.89%) | 100.0 (4.89%) | 100.0 (4.89%) |\\n| Retrieve.KV | 42.66 | 88.66 (107.82%) | 92.0 (115.65%) | 63.33 (48.45%) | 57.33 (34.38%) |\\n| En.Sum | 27.63 | 24.01 (-13.10%) | 23.51 (-14.91%) | 23.68 (-14.29%) | 23.06 (-16.53%) |\\n| En.QA | 24.83 | 34.26 (37.97%) | 33.23 (33.83%) | 31.72 (27.74%) | 31.97 (28.75%) |\\n| En.MC | 68 | 74.0 (8.82%) | 72.0 (5.88%) | 75.33 (10.77%) | 74.0 (8.82%) |\\n| En.Dia | 16.66 | 18.0 (8.04%) | 18.0 (8.04%) | 22.0 (32.05%) | 16.0 (-3.96%) |\\n| Math.Find | 35.33 | 37.33 (5.66%) | 35.33 (0%) | 36.0 (1.89%) | 36.0 (1.89%) |\\n\\nIt is crucial to underscore the significance of InfiniteBench and RULER in evaluating our model's capabilities. While InfiniteBench evaluates samples at 100K context lengths\\u2014substantially shorter than the 1M context lengths our model is designed to process\\u2014it serves as a valuable baseline to showcase our model\\u2019s robustness. Notably, our model not only outperforms Llama-3.1-8b-instruct, which handles a 128K context window, but also demonstrates the scalability to excel at significantly longer context lengths (RULER) where traditional benchmarks fall short. This highlights our model\\u2019s ability to adapt to increasingly demanding scenarios, making it uniquely positioned to handle the challenges of long-context tasks in real-world applications. Furthermore, the strong performance on RULER tasks reflects the generalizability of our approach, extending its impact across diverse long-context benchmarks beyond Retrieve.KV.\\n\\nWe hope this provides clarity on the broader implications of our advancements and the critical role of Retrieve.KV results in long-context scenarios. We are happy to provide additional clarifications or conduct further experiments if needed.\"}", "{\"title\": \"Follow-up Response to Reviewer L4p6 (Updated PDF--Discussion Deadline Approaching in 2 Days)\", \"comment\": \"To address your suggestions and further enhance the paper, we have updated the manuscript with the following improvements:\\n\\n- **Expanded Evaluations**: We now include results on the MMLU benchmark and additional evaluations of the Gradient AI 1M context length model. We show that our models retain its performance on the MMLU benchmark and surpasses the Gradient AI 1M model on all four benchmarks (MMLU, LongBench, InfiniteBench, RULER). \\n- **Smaller generator models**: To demonstrate that our improvements are not solely driven by the larger generator model (Qwen-2-72B-Instruct), we incorporated results using smaller generator models\\u2014Llama-3.1-8B-Instruct and Qwen-2.5-7B\\u2014for training up to 650K context lengths. 
These models achieved gains on InfiniteBench and RULER while maintaining strong performance on LongBench and MMLU, emphasizing the robustness and generalizability of our approach across model sizes.\\n- **Clarifications**: Key sections have been revised for greater clarity and precision, addressing prior points of ambiguity.\\n\\nOnce again, thank you for your time and invaluable feedback\\u2014it has been instrumental in refining our work!\\n\\nFollowing up on our recent exchange regarding this paper, we wanted to kindly check if there are any further concerns or feedback from you. With the discussion deadline approaching in 2 days, we are eager to address any remaining issues and ensure the paper meets the highest standards.Your insights are invaluable to us, and we greatly appreciate your time and consideration. Please feel free to share any thoughts you may have.\\n\\nLooking forward to hearing from you!\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Thanks & Initial Response to Reviewer L4p6 (W4 & Q2)\", \"comment\": \"**[W4: There is some confusing wording in the paper that the authors should clarify particularly around Section 3.2 Authors could do a better job clarifying what are N1, N2, and N3.]**\\n\\n**[Q2: What are N1, N2, and N3? Can the authors include some ablations if needed on these values?]**\\n\\nThank you for pointing out the need to clarify the definitions of N1, N2, and N3. We apologize for any confusion caused in the original submission, and we will update the main paper to ensure these definitions and their context are clearly explained.\\n\\nTo clarify, when constructing the instruction-tuning dataset with concatenated multiple documents, the process for the Nth document is as follows:\\n\\n- N1 hierarchical follow-up questions are concatenated immediately after the Nth document. For example, in our setup, N1=5, meaning five such questions are added after each document.\\n- N2 diverse questions are added next, selected from the current document and all previously visited documents where diverse questions have not already been sampled. In our setup, N2=9.\\n- For every previously visited document, there is a 60% probability of sampling and adding N3 hierarchical follow-up questions from that document. In our setup, N3=3.\\n\\nThis process is repeated for all documents to create a comprehensive instruction-tuning dataset.\\n\\nThese specific values for N1, N2, and N3 were carefully chosen to closely mimic real-world scenarios. The inclusion of hierarchical questions (N1 and N3) ensures logical and contextual continuity, while diverse questions (N2) encourage broader reasoning and retrieval capabilities. This balanced approach captures both immediate document context and cross-referencing between documents, a hallmark of real-world long-context tasks.\\n\\nTo provide additional clarity, we will update the main paper to reflect these definitions and the methodology. Further details are available in Appendix A, where we describe the role of these parameters in creating a rich and interconnected dataset, along with concrete examples provided in Appendix C.\\n\\nWe hope this clarification resolves any confusion. Please let us know if further details or experiments would be helpful.\"}", "{\"title\": \"Follow Up Response to Reviewer ZqBD (Updated PDF)\", \"comment\": \"Thank you for revisiting our work and for this insightful feedback!\\n\\n**[Could you also provide a comparison of the number of training tokens used between your work and theirs? 
They used 1.4B tokens, and if your approach used fewer tokens, that would be a notable advantage.]** \\n\\nOur primary dataset is the Together long books dataset, processed into approximately **1.4 billion tokens**, distributed across these stages: 2000 samples of 180K tokens, 1280 samples of 350K tokens, 600 samples of 650K tokens, and 200 samples of 1M tokens. Despite using a comparable number of tokens, our approach consistently outperforms Gradient AI's model across all four benchmarks\\u2014RULER, InfiniteBench, LongBench, and MMLU\\u2014highlighting the generalizability and effectiveness of our methodology. \\n\\nTo address your suggestions and further enhance the paper, we have updated the manuscript with the following improvements:\\n\\n- **Expanded Evaluations**: We now include results on the MMLU benchmark and additional evaluations of the Gradient AI 1M context length model. We show that our models retain their performance on the MMLU benchmark and surpass the Gradient AI 1M model on all four benchmarks (MMLU, LongBench, InfiniteBench, RULER).\\n- **Smaller generator models**: To demonstrate that our improvements are not solely driven by the larger generator model (Qwen-2-72B-Instruct), we incorporated results using smaller generator models\\u2014Llama-3.1-8B-Instruct and Qwen-2.5-7B\\u2014for training up to 650K context lengths. These models achieved gains on InfiniteBench and RULER while maintaining strong performance on LongBench and MMLU, emphasizing the robustness and generalizability of our approach across model sizes. \\n- **Clarifications**: Key sections have been revised for greater clarity and precision, addressing prior points of ambiguity.\\n\\nOnce again, thank you for your time and invaluable feedback\\u2014it has been instrumental in refining our work!\"}", "{\"title\": \"Follow-Up: Seeking Further Feedback (Discussion Deadline Approaching in 1 Day)\", \"comment\": \"Dear Reviewer LvUc,\\n\\nWe hope this message finds you well. Thank you for your detailed feedback and valuable insights. Following up on your comments, we believe the first set of experiments already addresses the data generalization concern, demonstrating robust results across various generator models.\\n\\nAdditionally, we are currently evaluating a Mixtral-180K model to further validate our approach and provide even more comprehensive results. However, as the reviewer discussion deadline is approaching in a day, we wanted to kindly check if it would be acceptable to share the outcomes of this evaluation in 2.5 days. Your guidance here would be greatly appreciated.\\n\\nYour insights are invaluable to us, and we are eager to address any remaining concerns to ensure the paper meets the highest standards. Please feel free to share any thoughts or additional feedback you may have.\\n\\nLooking forward to hearing from you!\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Thanks & Initial Response to Reviewer ZqBD (Q1 & Q2 & Q3)\", \"comment\": \"**[Q1: Significance of Extending to 1M Context Length: The authors should provide more justification for the significance of extending to a 1M context length. A 128K context is sufficient for most real-world long-context tasks, and demonstrating practical use cases that necessitate 1M tokens would help strengthen the motivation for this work.]**\\n\\nExtending context lengths to 1M tokens is significant for use cases that go beyond what is possible with a 128K context. 
While 128K may be sufficient for many standard tasks, certain real-world applications demand the ability to process and reason over substantially larger contexts. For instance:\\n- Key-Value Retrieval: In scenarios like company-wide document retrieval, where entire organizational histories spanning years are stored in unstructured formats, longer contexts allow for efficient and accurate query resolution.\\n- Comprehensive Question Answering: Tasks requiring reasoning across long, multi-document histories (e.g., analyzing interconnected project timelines or extensive legal documents) necessitate processing large volumes of sequential data seamlessly within a single pass.\\n\\nBy pushing the boundaries of long-context processing, our work lays the foundation for solving these challenges, enabling practical use cases that existing models cannot handle efficiently.\\n\\n**[Q2: Figure Improvement: Consider changing the layout of figures 2 to horizontal format for better readability and comparison.]**\\n\\nThank you for this suggestion. We agree that changing the layout of Figure 2 to a horizontal format would enhance readability and comparison. We will incorporate this improvement in the revision draft.\\n\\n**[Q3: Loss Calculation During Training: Clarify whether only the answer part of the generated question-answer pairs calculate to the loss during training or if both the question and answer are involved.]**\\n\\nThank you for pointing out the need for clarification here. During training, we calculate the loss exclusively on the **answers** generated in the question-answer pairs. The questions and the long-context documents are masked out during this process. This ensures the model focuses on generating accurate and coherent answers without being directly penalized for reproducing or interpreting the questions themselves. This approach is aligned with our goal of optimizing the model\\u2019s reasoning and answer-generation capabilities.\\n\\nWe appreciate your detailed feedback and hope this explanation addresses your concerns. Please let us know if further clarification or additional experiments are required.\"}", "{\"title\": \"Thanks & Initial Response to Reviewer DSHN (W3)\", \"comment\": \"Thank you for your comment. To address your request, we have evaluated MMLU on all trained models to assess performance on shorter context tasks while also ensuring that improvements in long-context capabilities do not degrade performance on smaller contexts. The results are presented in the accompanying tables.\\n\\nAs the results show, our models retain strong performance on shorter-context benchmarks such as MMLU and clearly surpass the Gradient AI 1M model. Even as we increase the context length, the MMLU performance remains stable, with only minimal regression observed for the 1M-token model. Importantly, this demonstrates that our fine-tuning process effectively balances the needs of short-context tasks and extended-context tasks, maintaining competitive accuracy across diverse use cases.\\n\\nThis stability highlights the robustness of our fine-tuning methodology, ensuring that improvements in long-context capabilities do not come at the expense of performance on tasks requiring shorter contexts. 
We hope this evaluation addresses your concerns and provides confidence in the generalizability of our approach.\\n\\n| Category | LLaMA-3.1-8B-Instruct | Gradient-AI-Model | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen |\\n|-------------------|------------------------|-------------------|----------------|----------------|----------------|---------------|---------------|---------------|\\n| mmlu | 68.21 \\u00b1 0.37 | 60.48 \\u00b1 0.39 | 66.99 \\u00b1 0.38 | 66.74 \\u00b1 0.38 | 65.93 \\u00b1 0.38 | 67.33 \\u00b1 0.38 | 65.78 \\u00b1 0.38 | 64.60 \\u00b1 0.38 |\\n| humanities | 64.23 \\u00b1 0.67 | 55.75 \\u00b1 0.69 | 62.32 \\u00b1 0.67 | 61.38 \\u00b1 0.68 | 60.57 \\u00b1 0.68 | 62.81 \\u00b1 0.67 | 59.68 \\u00b1 0.68 | 59.45 \\u00b1 0.68 |\\n| other | 73.03 \\u00b1 0.77 | 67.04 \\u00b1 0.82 | 72.90 \\u00b1 0.77 | 73.03 \\u00b1 0.76 | 72.87 \\u00b1 0.76 | 73.51 \\u00b1 0.76 | 73.00 \\u00b1 0.76 | 73.45 \\u00b1 0.77 |\\n| social sciences | 77.48 \\u00b1 0.74 | 70.46 \\u00b1 0.80 | 76.70 \\u00b1 0.74 | 76.93 \\u00b1 0.74 | 75.53 \\u00b1 0.75 | 76.76 \\u00b1 0.74 | 75.66 \\u00b1 0.75 | 71.87 \\u00b1 0.77 |\\n| stem | 60.36 \\u00b1 0.83 | 51.32 \\u00b1 0.86 | 58.67 \\u00b1 0.84 | 58.61 \\u00b1 0.84 | 57.72 \\u00b1 0.84 | 58.77 \\u00b1 0.84 | 58.14 \\u00b1 0.84 | 56.49 \\u00b1 0.85 |\\n\\n\\n| Category | LLaMA-3.1-8B-Instruct | Gradient-AI-Model | 350K-model | 650K-model | 1M-model |\\n|-------------------|------------------------|-------------------|-----------------|-----------------|-----------------|\\n| mmlu | 68.21 \\u00b1 0.37 | 60.48 \\u00b1 0.39 | 66.29 \\u00b1 0.38 | 65.80 \\u00b1 0.38 | 65.08 \\u00b1 0.38 |\\n| humanities | 64.23 \\u00b1 0.67 | 55.75 \\u00b1 0.69 | 61.51 \\u00b1 0.68 | 61.02 \\u00b1 0.68 | 61.02 \\u00b1 0.68 |\\n| other | 73.03 \\u00b1 0.77 | 67.04 \\u00b1 0.82 | 72.84 \\u00b1 0.77 | 71.84 \\u00b1 0.78 | 71.84 \\u00b1 0.78 |\\n| social sciences | 77.48 \\u00b1 0.74 | 70.46 \\u00b1 0.80 | 76.81 \\u00b1 0.74 | 75.27 \\u00b1 0.76 | 75.27 \\u00b1 0.76 |\\n| stem | 60.36 \\u00b1 0.83 | 51.32 \\u00b1 0.86 | 59.44 \\u00b1 0.84 | 57.72 \\u00b1 0.84 | 57.72 \\u00b1 0.84 |\"}", "{\"summary\": \"This paper introduces a novel approach to generate a high-quality, long-context instruction-tuning dataset that significantly surpasses the context length of typical raw data. It incorporates a unique hierarchical ordering strategy to ensure logical coherence while preserving the diversity and complexity of the questions. The experimental results on RULER and InfiniteBench demonstrate that the proposed method significantly enhances the performance of llama3.1 in longer contexts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a scalable long-context data generation strategy to significantly improve the long-context capacity of LLama3.1 and extend its context length to 1M.\\n2. Comprehensive ablation tests. Analyze the impact of data construction strategies from the perspectives of data complexity, diversity of questions, etc.\", \"weaknesses\": \"1. The paper uses the Qwen-2-72B model to generate the QA pairs, which may result in strong models having a distillation effect on small models. Can you provide experimental results using the Qwen2 7B or Llama3.1 8b as the generator model?\\n2. Lack of baseline models to validate data generalization . The paper only used llama 3.1 as a baseline. 
Can you provide more experimental results on models such as Qwen2 7B and deepSeek-V2-Lite?\", \"questions\": \"Reference to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response. I have a few points I'd like to discuss further:\\n\\n- Regarding W1, I believe the formatting could be optimized (the current response is quite lengthy). Most reviewers are likely more interested in the overall performance compare with other baseline rather than the detailed performance on each sub-task. \\n- Could you also provide a comparison of the number of training tokens used between your work and theirs? They used 1.4B tokens, and if your approach used fewer tokens, that would be a notable advantage.\\n\\n- Concerning W2, I think LongBench already qualifies as a mid-to-long context benchmark; it just appears relatively shorter given the specific task setup in your paper. It might be more fitting to refer to your task as a \\\"super-long\\\" task :)\\n\\nFor the rest of your responses, I am satisfied as they have addressed most of my concerns. I understand that time is limited, but would it be possible for you to update the PDF accordingly?\\n\\nThank you again for your efforts, and I look forward to your response.\"}", "{\"summary\": \"This paper introduces a novel post training synthetic data generation strategy for long context. The approach works by splitting a long context document into smaller documents and generating question-answer pairs from separate smaller documents as well as combinations of the documents. The generated data is used in combination with a step-by-step rotary position embedding scaling strategy to scale the model context. The proposed approach is tested in several benchmarks for longer context and overall results appear higher for the proposed finetuning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper makes contributions to improving long context reasoning of LLMs. The exploration of algorithms and fine-tuning strategies for increasing the context length is an important problem for language models and finetuning on synthetic data appears to be a promising direction for this work\", \"The proposed method appears simple to implement and should be reproducible as many of the proposed prompts are provided in the appendix, and the training strategy is similar given the generated data.\"], \"weaknesses\": [\"A main concern with the proposed work is that for many of the results (e.g. Table 1, 5) the improvements appear to primarily come from a single task Retrieve.KV. Improvements on the other datasets are smaller. While this leads to an overall increase, it's important to understand the importance of this subtask rather than simply the reported overall average increase.\", \"The scale of the datasets in experiments is rather small and there are no uncertainty estimates provided. This is important particularly as for some of the tasks, there is only 100-200 samples, where a few correct answers can increase. I would encourage the authors where possible to increase the amount of evaluation data.\", \"The authors only test a larger model for generating the synthetic data. The method would further be justified by including comparisons for models of the same size that are used to generate the data as it is unclear the dependency on model size. 
Further the experiments are only done for the Llama-8B model, but experiments on other models even smaller would be beneficial to see whether smaller or larger models benefit more from increasing context length.\", \"there is some confusing wording in the paper that the authors should clarify particularly around Section 3.2 Authors could do a better job clarifying what are $N_1$, $N_2$ and $N_3$.\"], \"questions\": [\"What is the Retrieve.KV task and are there hypotheses about why this task in particular has large improvements?\", \"What are $N_1$, $N_2$ and $N_3$? Can the authors include some ablations if needed on these values?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks & Initial Response to Reviewer DSHN (W1 -- Part 1)\", \"comment\": \"Thank you for this insightful comment. We agree that exploring whether smaller or weaker models can effectively generate synthetic data and assessing how the dependency on model size impacts performance is important to validate the robustness of our approach. **To address this, we conducted additional experiments using two alternative generator models: Llama-3.1-8b-Instruct and Qwen-2.5-7b-Instruct**. Given the limited time, we trained models on the synthetic datasets produced by these generators, with a context length up to 650K tokens. To showcase the effectiveness of our synthetic data generation pipeline, we also introduced a stronger baseline: the Gradient AI 1M model (gradientai/Llama-3-8B-Instruct-Gradient-1048k). This model was trained directly on real long-context datasets with lengths exceeding 1 million tokens.\\n\\nHere are the evaluation results on InfiniteBench (downstream tasks with around 100K context length). As we can see, the models trained using Llama-3.1-8b-Instruct and Qwen-2.5-7b-Instruct as generators surpass the baseline Llama-3.1-8b-Instruct and is way better than the gradient 1M model. \\n\\n| Task | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen | gradient-ai-model | llama-3.1-8b-instruct |\\n|--------------------|----------------|----------------|----------------|---------------|---------------|---------------|-------------------|-----------------------|\\n| passkey | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |\\n| number_string | 99.04 | 100.00 | 100.00 | 99.76 | 100.00 | 100.00 | 99.33 | 95.33 |\\n| kv_retrieval | 85.47 | 89.33 | 42.14 | 89.52 | 85.33 | 52.66 | 13.33 | 42.66 |\\n| longbook_sum_en | 25.68 | 26.85 | 26.64 | 26.97 | 27.70 | 26.74 | 17.02 | 27.63 |\\n| longbook_qa_en | 33.39 | 35.67 | 33.37 | 32.30 | 29.55 | 29.67 | 15.84 | 24.83 |\\n| longbook_choice | 58.00 | 60.66 | 66.00 | 63.33 | 61.33 | 64.66 | 61.33 | 68.00 |\\n| longdialogue_qa | 19.50 | 14.66 | 20.00 | 27.33 | 21.33 | 23.33 | 4.00 | 16.66 |\\n| math_find | 36.66 | 32.66 | 35.33 | 30.00 | 34.66 | 38.00 | 26.66 | 35.33 |\\n| **Average** | **57.22** | **57.48** | **52.94** | **58.65** | **57.49** | **54.38** | **42.19** | **51.31** |\\n\\n\\nHere are the evaluation results on LongBench (downstream tasks with around 10K context length). As we can see, our models' performance are preserved on short-to-medium context lengths and way surpasses the results of the gradient model. 
\\n| Task | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen | gradient-ai-model | llama-3.1-8b-instruct |\\n|--------------------|-----------------------|-----------------------|-----------------------|----------------------|----------------------|----------------------|--------------------|------------------------|\\n| single-document | 46.48 | 46.64 | 46.53 | 46.20 | 46.70 | 46.28 | 30.75 | 46.91 |\\n| multi-document | 38.69 | 38.75 | 37.54 | 40.76 | 41.90 | 39.31 | 12.45 | 41.45 |\\n| summarization | 25.28 | 25.10 | 24.68 | 25.05 | 24.83 | 24.90 | 21.72 | 26.10 |\\n| few-short learning | 61.56 | 62.79 | 60.50 | 61.92 | 61.56 | 60.69 | 59.70 | 63.48 |\\n| synthetic tasks | 66.17 | 67.75 | 66.00 | 67.11 | 67.60 | 67.10 | 55.50 | 67.48 |\\n| **Average** | **47.23** | **47.72** | **46.20** | **47.95** | **47.97** | **47.00** | **35.89** | **48.11** |\\n\\nDue to the word limit, we will present the results on MMLU and RULER benchmarks in another comment.\"}", "{\"comment\": \"Thank you for your response! As my concerns are addressed, I have increased my score to 6. On the other hand, if the number of tokens you train are comparable, you could try framing your approach from a data efficiency perspective. This might make the motivation of your paper clearer.\"}", "{\"comment\": [\"Thank you for providing additional experiments with other models for data generation, details explaining why these tasks perform much stronger, and evaluation set size.\", \"I have no outstanding concerns with data generation models.\", \"It still looks to me that much of the improvement comes from Retrieve.KV, although the authors provide some intuition as to why. I think it will be interesting in future work to look into modifying the data generation process to see if other tasks could see similar improvements.\", \"I appreciate the computational constraints on scaling up the evaluations, but this evaluation set size is still fairly small. Is it possible to provide error bars on these values and increase to the same scale in other datasets? I think that would strengthen results.\", \"Nonetheless, I will increase the score conditioned that these new results are added to the camera ready.\"]}", "{\"title\": \"Thanks & Initial Response to Reviewer ZqBD (W1)\", \"comment\": \"**[W1: Lack of Strong Baseline Comparisons: The paper lacks comparisons with suitable, strong baselines that have also achieved extensions to 1M tokens. For example, EasyContext (https://github.com/jzhang38/EasyContext) has similarly attempted to extend context lengths to 1M using long-text data. Especially since the comparisons are made against the original LLaMA-3.1-8B after additional fine-tuning on top of it.]**\\n\\nThank you for this insightful comment. To address your concern regarding strong baseline comparisons, we included the Gradient-AI 1M context model ((gradientai/Llama-3-8B-Instruct-Gradient-1048k) as a baseline in our evaluations. This model represents a competitive reference point for extended context lengths, providing a meaningful comparison for our proposed methods.\\n\\nThe evaluation results, shown in the accompanying tables, demonstrate that our model consistently outperformed the Gradient-AI 1M context model across all benchmarks, including MMLU, LongBench, InfiniteBench, and RULER. 
This comparison further validates the novelty and practicality of our approach in advancing long-context capabilities beyond the current state-of-the-art.\\n\\nWe appreciate your suggestion and hope that these results adequately address your concerns. Please let us know if you have further suggestions or would like additional details on these comparisons.\\n\\n| LongBench | Value |\\n|--------------------|-----------|\\n| single-document | 30.75 |\\n| multi-document | 12.45 |\\n| summarization | 21.7175 |\\n| few-short learning | 59.6975 |\\n| synthetic tasks | 55.5 |\\n| average | 35.8933 |\\n\\n| InfiniteBench | Value |\\n|---------------------|-----------|\\n| kv_retrieval | 0.1333 |\\n| passkey | 1 |\\n| number_string | 0.9933 |\\n| math_find | 0.2666 |\\n| longbook_qa_en | 0.1584 |\\n| longbook_sum_en | 0.1702 |\\n| longbook_choice | 0.6133 |\\n| longdialogue_qa | 0.04 |\\n| Average | 0.4218875 |\\n\\n| MMLU | Value |\\n|--------------------|-----------------|\\n| mmlu | 0.6048\\u00b10.0039 |\\n| humanities | 0.5575\\u00b10.0069 |\\n| other | 0.6704\\u00b10.0082 |\\n| social sciences | 0.7046\\u00b10.0080 |\\n| stem | 0.5132\\u00b10.0086 |\\n\\n\\n| RULER | Value |\\n|------------------|--------|\\n| 8192 | 88.09 |\\n| 16384 | 86.05 |\\n| 32768 | 82.46 |\\n| 65536 | 77.73 |\\n| 131072 | 81.94 |\\n| 262144 | 75.72 |\\n| 524288 | 70.13 |\"}", "{\"metareview\": \"This paper presents a valuable contribution to the critical challenge of extending LLM context windows through a novel synthetic data generation strategy. The proposed approach stands out for its practical significance and scalability, effectively addressing the scarcity of long-context training data through a well-designed hierarchical generation pipeline that can extend to arbitrary lengths. The methodology is clearly presented and readily reproducible. While there is room for more extensive evaluation, particularly in comparing with training-free approaches (e.g., https://arxiv.org/pdf/2402.17463) and testing across different model families, the demonstrated improvements on benchmarks and the method's practical utility make a compelling case. I recommend acceptance for this paper.\", \"additional_comments_on_reviewer_discussion\": \"I have read the messages in the discussion period and my opinion has been summarized as in the metareview above. I considered these points in my recommendation.\"}", "{\"summary\": \"The paper introduces a novel post-training method for generating synthetic data designed to efficiently extend the context window of LLMs, all while preserving their task performance. This approach leverages a summarizer to reduce the context length, enabling a hierarchical pipeline for data generation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important need for long context instruction data.\", \"The proposed methods allows for diverse instruction tasks\", \"Finetuning on this data retrains performance on medium context lengths (~10k)\"], \"weaknesses\": [\"The impact of the summarizer\\u2019s quality on the data generation pipeline is unclear. The paper uses Qwen-72b, but further discussion and experimentation are needed to understand how different summarizer models could influence the method\\u2019s effectiveness. 
For example, different model sizes and a different model family like Llama-2 or Llama-3.\", \"RULER evaluations at extended context lengths (like 1M): It would be insightful to include evaluations with context lengths of 1M tokens (or 500k), as most evaluations in the paper are currently under 200k. This would help clarify whether the proposed method maintains performance at significantly larger context lengths.\", \"Can the author provide performance in small contexts like MMLU or hellaswag after FT'ing? This would show that the model retains performance on shorter contexts while improving long-context capabilities.\"], \"questions\": \"Would it be possible to up scale a single document? Although multiple document concatenation was considered to scale to larger context lengths, the proposed methods do up-scaling to a single document that might be 1M. For example, for a document that might only be 128k, increasing the total context to 256k. This would make the entire data synthetic pipeline a more holistic data synthetic pipeline for all long context documents/instructions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up: Seeking Further Feedback (Discussion Deadline Approaching in 1 Day)\", \"comment\": \"Dear Reviewer L4p6,\\n\\nWe hope this message finds you well. Following up on our recent exchange regarding this paper, we wanted to kindly check if there are any further concerns or feedback from you. With the discussion deadline approaching in a day, we are eager to address any remaining issues and ensure the paper meets the highest standards.\\n\\nYour insights are invaluable to us, and we greatly appreciate your time and consideration. Please feel free to share any thoughts you may have.\\n\\nLooking forward to hearing from you!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Thanks & Initial Response to Reviewer L4p6 (W2)\", \"comment\": \"**[W2: The scale of the datasets in experiments is rather small and there are no uncertainty estimates provided. This is important particularly as for some of the tasks, there is only 100-200 samples, where a few correct answers can increase. I would encourage the authors where possible to increase the amount of evaluation data.]**\\n\\nThank you for highlighting this concern. We recognize the importance of scaling up evaluation datasets, particularly for tasks with limited samples, to ensure robust results and reduce the impact of variability. While computational costs are significant for long-context evaluations, we extended the number of samples in the RULER benchmark from 130 to 260 samples to provide more reliable estimates. RULER remains a key benchmark in our analysis, as it evaluates models on the most extensive context length. We also evaluated our model against zero-shot baselines on Llama-3.1-8B-Instruct to provide a comprehensive perspective.\\n\\nThe updated results are presented in the following tables. Our model greatly outperforms the zero-shot baselines across all evaluated context lengths. 
\\n\\n| Tokens | 350K-model | 650K-model | 1M-model | 1M-zero-shot |\\n|----------|------------|------------|----------|--------------|\\n| 8192 | 91.89 | 91.56 | 89.85 | 91.59 |\\n| 16384 | 92.08 | 91.59 | 89.83 | 91.22 |\\n| 32768 | 87.13 | 88.17 | 86.97 | 83.83 |\\n| 65536 | 84.17 | 84.87 | 83.49 | 77.03 |\\n| 131072 | 82.44 | 81.58 | 82.22 | 75.96 |\\n| 262144 | 81.26 | 80.09 | 83.56 | 70.78 |\\n| 524288 | - | 72.74 | 77.75 | 60.96 |\\n| 1000000 | - | - | 64.33 | 49.65 |\"}", "{\"comment\": \"Thanks for your update. The response resolves my concern. I raise the score to 6\"}", "{\"comment\": \"Thank you for your response!\\n\\nAs my concerns are addressed, I have increased my score to 6. I suggest that the authors also add the Llama-3-8B-Instruct-262k [1] model from gradientai to these evaluations in a future version of the paper. \\n\\n[1] https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k\"}", "{\"title\": \"Thanks & Initial Response to Reviewer ZqBD (W3)\", \"comment\": \"**[W3: Limited Value of Ablation Study in Tables 3 and 4: The results of the ablation studies in Tables 3 and 4 align closely with expectations, as instruction-tuning data types (e.g., Diverse Questions) are more closely aligned with the evaluation benchmark used (e.g., LongBench). Therefore, these tables have limited value in demonstrating broader insights. A deeper analysis of Table 5 would be more insightful, focusing on the types of questions and the effect of different question strategies during training.]**\\n\\nThank you for this insightful comment. We appreciate the opportunity to clarify the findings from our ablation studies and expand on the observations, particularly regarding Table 5, which offers valuable insights into the effect of different question strategies during training.\\nThe purpose of Tables 3 and 4 was to show that our techniques, including hierarchical ordering, diverse questions, and multi-hop reasoning, help improve model performance. While we acknowledge that Table 3 does not show significant differences (as it focuses on LongBench with a 10K context length), Table 4 demonstrates the efficacy of these techniques for the 100K context length scenario. However, we agree that Table 5 provides the most relevant insights and deserves additional attention.\", \"key_observations_from_table_5\": [\"Hierarchical Ordering with Diverse Questions Performs Best: The configuration with hierarchical ordering followed by diverse questions and a fixed number of follow-ups (hs-hs-hs-fixed) achieves the highest average score (59.45). This highlights that combining structured question ordering with diverse reasoning significantly boosts the model's capability to handle complex long-context tasks.\", \"Fixed Number of Questions Outperforms Randomized: Configurations with a fixed number of follow-up questions (e.g., hs-hs-fixed) consistently outperform those with a randomized number (hs-hs-randomized). For example, hs-hs-fixed achieves an average score of 59.45 compared to 58.51 for hs-hs-randomized. This suggests that maintaining consistency in the number of follow-up questions allows the model to learn better patterns during training.\", \"Impact of Summarization: Adding a summarization step improves performance (hs-hs-fixed with summarization scores 58.03). 
Although slightly lower than the best-performing configuration, this shows that summarization can enhance the model\\u2019s ability to condense and contextualize information in extremely long contexts.\", \"Trade-offs Between Specificity and Generalization: The results also demonstrate that targeting specific, diverse questions that reference previous documents enables the model to balance comprehension of current and past contexts. This balance is critical for improving performance on tasks requiring logical consistency and reasoning over long contexts.\", \"We will revise the analysis in the paper to better highlight the significance of these findings from Table 5. Specifically, we will expand on the benefits of hierarchical ordering with diverse and complex reasoning, the role of fixed versus randomized follow-ups, and the potential of summarization strategies to support comprehension across long-context tasks.\", \"Thank you again for pointing this out, and we hope this additional detail clarifies the broader insights gained from our ablation studies. Please let us know if further explanations or additional experiments are required.\"]}", "{\"title\": \"Thanks & Initial Response to Reviewer DSHN (Q1)\", \"comment\": \"**[Q1: Would it be possible to up scale a single document? Although multiple document concatenation was considered to scale to larger context lengths, the proposed methods do up-scaling to a single document that might be 1M. For example, for a document that might only be 128k, increasing the total context to 256k. This would make the entire data synthetic pipeline a more holistic data synthetic pipeline for all long context documents/instructions.]**\\n\\nThank you for your insightful comment. This is indeed an interesting direction for future work. Specifically, scaling a single document from a context length of 128K to 256K or beyond offers potential for creating a more holistic synthetic data pipeline. While our current approach leverages multiple document concatenation to achieve longer context lengths, we explored using QA pairs directly tied to a single long document to scale its context length. However, this approach did not yield satisfactory results, as it struggled to maintain logical coherence and diversity within the extended context.\\n\\nA promising alternative could involve splitting a single long document into sections and treating these sections in a way similar to multi-document concatenation. This approach could help preserve context while enabling the generation of meaningful and diverse QA pairs across the extended length. We believe this could make the pipeline more robust and holistic for single-document scenarios and would be an exciting avenue for future investigation.\\n\\nWe appreciate this suggestion and will consider it in future iterations of our work. Please let us know if further clarifications or discussions are needed.\"}", "{\"title\": \"Follow-Up Response for Reviewer LvUc\", \"comment\": \"Thank you for your updated evaluation and for raising the score, which affirms the innovation and contributions of our work. We are especially grateful for your recognition of the improvements in our presentation and the value of our approach. 
Your feedback has been instrumental in enhancing the quality of our manuscript, and we deeply appreciate your support!\"}", "{\"title\": \"Thanks & Initial Response to Reviewer ZqBD (W2 & Q4)\", \"comment\": \"**[W2: Decreasing Performance on LongBench: The results on LongBench suggest that the model\\u2019s performance decreases as context length increases. This is in contrast to what one would expect after training on long-context data, raising questions about the effectiveness of the proposed methods in maintaining or improving performance across all types of tasks. An explanation should be provided as to why LongBench performance degrades with extended context length training.]**\\n\\n**[Q4: Potential Explanation for Decreased LongBench Performance: Consider discussing whether Section 3.2 of \\\"How to Train Long-Context Language Models (Effectively)\\\" could provide insights into why LongBench performance decreases as training progresses with long-context data, addressing the concerns raised in Weakness 2.]**\\n\\nThank you for this insightful comment. We acknowledge that the results on LongBench may raise questions about performance at shorter context lengths as the model is scaled to handle longer contexts. LongBench serves as an evaluation tool specifically designed for short to medium context tasks, with samples containing contexts up to 10K tokens. In contrast, our model is trained to handle contexts up to 1M tokens. Despite this difference, our results show that performance on LongBench remains comparable to the baseline model (Llama-3.1-8B-Instruct), with only minor regressions observed for the 1M context length model.\\n\\nThe observed decrease in LongBench performance can be attributed to the inherent trade-offs in training for extended context lengths. As the model learns to handle extremely long contexts, there may be slight shifts in its ability to optimize for shorter contexts due to the differing characteristics of long and short context tasks. However, this regression is minimal, demonstrating that our method effectively balances performance across a wide range of context lengths.\\n\\nWe hope this explanation addresses your concerns. Please let us know if further clarification or additional experiments are needed.\"}", "{\"title\": \"Thanks & Initial Response to Reviewer L4p6 (W3 -- Part 2)\", \"comment\": \"Here're the RULER results. Compared to baselines where we zero-shot rope scale Llama-3.1-8b-instruct to 350K and 650K context length, our models outperformed the baselines on long context tasks.\\n| Tokens | Llama-3.1-8b-instruct-zero-shot | 350K-qwen-generator | 350K-llama-generator |\\n|---------|----------------|---------------------|-----------------------|\\n| 8192 | 90.73 | 91.05 | 92.65 |\\n| 16384 | 87.36 | 89.44 | 88.23 |\\n| 32768 | 84.01 | 86.85 | 85.41 |\\n| 65536 | 77.81 | 84.87 | 83.14 |\\n| 131072 | 72.68 | 81.99 | 83.04 |\\n| 262144 | 66.44 | 78.73 | 77.99 |\\n\\n| Tokens | Llama-3.1-8b-instruct-zero-shot | 650K-qwen-generator | 650K-llama-generator |\\n|---------|----------------|---------------------|-----------------------|\\n| 8192 | 90.73 | 91.01 | 92.35 |\\n| 16384 | 87.36 | 90.29 | 90.79 |\\n| 32768 | 84.01 | 87.60 | 86.15 |\\n| 65536 | 77.81 | 83.33 | 84.42 |\\n| 131072 | 72.68 | 79.10 | 80.49 |\\n| 262144 | 66.44 | 77.56 | 78.56 |\\n| 524288 | 62.53 | 72.72 | 70.65 |\\n\\n\\nHere are the evaluation results on MMLU -- we can see that our model's general knowledge and short context capabilities are preserved. 
\\n\\n| Category | LLaMA-3.1-8B-Instruct | Gradient-AI-Model | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen |\\n|-------------------|------------------------|-------------------|----------------|----------------|----------------|---------------|---------------|---------------|\\n| mmlu | 68.21 \\u00b1 0.37 | 60.48 \\u00b1 0.39 | 66.99 \\u00b1 0.38 | 66.74 \\u00b1 0.38 | 65.93 \\u00b1 0.38 | 67.33 \\u00b1 0.38 | 65.78 \\u00b1 0.38 | 64.60 \\u00b1 0.38 |\\n| humanities | 64.23 \\u00b1 0.67 | 55.75 \\u00b1 0.69 | 62.32 \\u00b1 0.67 | 61.38 \\u00b1 0.68 | 60.57 \\u00b1 0.68 | 62.81 \\u00b1 0.67 | 59.68 \\u00b1 0.68 | 59.45 \\u00b1 0.68 |\\n| other | 73.03 \\u00b1 0.77 | 67.04 \\u00b1 0.82 | 72.90 \\u00b1 0.77 | 73.03 \\u00b1 0.76 | 72.87 \\u00b1 0.76 | 73.51 \\u00b1 0.76 | 73.00 \\u00b1 0.76 | 73.45 \\u00b1 0.77 |\\n| social sciences | 77.48 \\u00b1 0.74 | 70.46 \\u00b1 0.80 | 76.70 \\u00b1 0.74 | 76.93 \\u00b1 0.74 | 75.53 \\u00b1 0.75 | 76.76 \\u00b1 0.74 | 75.66 \\u00b1 0.75 | 71.87 \\u00b1 0.77 |\\n| stem | 60.36 \\u00b1 0.83 | 51.32 \\u00b1 0.86 | 58.67 \\u00b1 0.84 | 58.61 \\u00b1 0.84 | 57.72 \\u00b1 0.84 | 58.77 \\u00b1 0.84 | 58.14 \\u00b1 0.84 | 56.49 \\u00b1 0.85 |\\n\\n\\nThese findings demonstrate the effectiveness of our approach even with generator models that are smaller or similar in size to the base model, underscoring the method's general applicability.\"}", "{\"title\": \"Follow up Response to Reviewer ZqBD\", \"comment\": \"Thank you for your updated evaluation and for raising the score, which affirms the innovation and contributions of our work. We are especially grateful for your recognition of the improvements in our presentation and the value of our approach. Your feedback has been instrumental in enhancing the quality of our manuscript, and we deeply appreciate your support!\"}", "{\"summary\": \"The paper \\\"Scaling Instruction-Tuned LLMs to Million-Token Contexts via Hierarchical Synthetic Data Generation\\\" aims to address the challenges of scaling large language models (LLMs) to handle extended context lengths, particularly up to one million tokens, without compromising general task performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors propose a synthetic data generation pipeline for instruction tuning, which allows LLMs to effectively extend their context lengths without the need for vast, annotated long-context datasets.\\n\\n2. Hierarchical Approach: The use of hierarchical splitting and question generation is a significant advancement. By logically structuring questions across multiple levels\\u2014hierarchical-aware, local-specific, and multi-hop\\u2014the authors ensure coherent instruction generation and improve the model's ability to reason over long contexts.\\n\\n3. Comprehensive Evaluation: The authors extensively evaluate the proposed approach on multiple benchmarks such as RULER, InfiniteBench, and LongBench. The detailed ablation studies show the importance of different components in their data generation strategy, highlighting that hierarchical order and diverse question generation are critical to achieving better long-context performance.\", \"weaknesses\": \"1. **Lack of Strong Baseline Comparisons**: The paper lacks comparisons with suitable, strong baselines that have also achieved extensions to 1M tokens. 
For example, EasyContext (https://github.com/jzhang38/EasyContext) has similarly attempted to extend context lengths to 1M using long-text data. Especially since the comparisons are made against the original LLaMA-3.1-8B after additional fine-tuning on top of it.\\n\\n2. **Decreasing Performance on LongBench**: The results on LongBench suggest that the model\\u2019s performance decreases as context length increases. This is in contrast to what one would expect after training on long-context data, raising questions about the effectiveness of the proposed methods in maintaining or improving performance across all types of tasks. An explanation should be provided as to why LongBench performance degrades with extended context length training.\\n\\n3. **Limited Value of Ablation Study in Tables 3 and 4**: The results of the ablation studies in Tables 3 and 4 align closely with expectations, as instruction-tuning data types (e.g., Diverse Questions) are more closely aligned with the evaluation benchmark used (e.g., LongBench). Therefore, these tables have limited value in demonstrating broader insights. A deeper analysis of Table 5 would be more insightful, focusing on the types of questions and the effect of different question strategies during training.\", \"questions\": \"1. **Significance of Extending to 1M Context Length**: The authors should provide more justification for the significance of extending to a 1M context length. A 128K context is sufficient for most real-world long-context tasks, and demonstrating practical use cases that necessitate 1M tokens would help strengthen the motivation for this work.\\n\\n2. **Figure Improvement**: Consider changing the layout of figures 2 to horizontal format for better readability and comparison.\\n\\n3. **Loss Calculation During Training**: Clarify whether only the answer part of the generated question-answer pairs calculate to the loss during training or if both the question and answer are involved.\\n\\n4. **Potential Explanation for Decreased LongBench Performance**: Consider discussing whether Section 3.2 of \\\"How to Train Long-Context Language Models (Effectively)\\\" could provide insights into why LongBench performance decreases as training progresses with long-context data, addressing the concerns raised in Weakness 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks & Initial Response to Reviewer DSHN (W1 -- Part 2)\", \"comment\": \"Here're the RULER results. 
Compared to baselines where we zero-shot rope scale Llama-3.1-8b-instruct to 350K and 650K context length, our models outperformed the baselines on long context tasks.\\n| Tokens | Llama-3.1-8b-instruct-zero-shot | 350K-qwen-generator | 350K-llama-generator |\\n|---------|----------------|---------------------|-----------------------|\\n| 8192 | 90.73 | 91.05 | 92.65 |\\n| 16384 | 87.36 | 89.44 | 88.23 |\\n| 32768 | 84.01 | 86.85 | 85.41 |\\n| 65536 | 77.81 | 84.87 | 83.14 |\\n| 131072 | 72.68 | 81.99 | 83.04 |\\n| 262144 | 66.44 | 78.73 | 77.99 |\\n\\n| Tokens | Llama-3.1-8b-instruct-zero-shot | 650K-qwen-generator | 650K-llama-generator |\\n|---------|----------------|---------------------|-----------------------|\\n| 8192 | 90.73 | 91.01 | 92.35 |\\n| 16384 | 87.36 | 90.29 | 90.79 |\\n| 32768 | 84.01 | 87.60 | 86.15 |\\n| 65536 | 77.81 | 83.33 | 84.42 |\\n| 131072 | 72.68 | 79.10 | 80.49 |\\n| 262144 | 66.44 | 77.56 | 78.56 |\\n| 524288 | 62.53 | 72.72 | 70.65 |\\n\\n\\nHere are the evaluation results on MMLU -- we can see that our model's general knowledge and short context capabilities are preserved. \\n\\n| Category | LLaMA-3.1-8B-Instruct | Gradient-AI-Model | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen |\\n|-------------------|------------------------|-------------------|----------------|----------------|----------------|---------------|---------------|---------------|\\n| mmlu | 68.21 \\u00b1 0.37 | 60.48 \\u00b1 0.39 | 66.99 \\u00b1 0.38 | 66.74 \\u00b1 0.38 | 65.93 \\u00b1 0.38 | 67.33 \\u00b1 0.38 | 65.78 \\u00b1 0.38 | 64.60 \\u00b1 0.38 |\\n| humanities | 64.23 \\u00b1 0.67 | 55.75 \\u00b1 0.69 | 62.32 \\u00b1 0.67 | 61.38 \\u00b1 0.68 | 60.57 \\u00b1 0.68 | 62.81 \\u00b1 0.67 | 59.68 \\u00b1 0.68 | 59.45 \\u00b1 0.68 |\\n| other | 73.03 \\u00b1 0.77 | 67.04 \\u00b1 0.82 | 72.90 \\u00b1 0.77 | 73.03 \\u00b1 0.76 | 72.87 \\u00b1 0.76 | 73.51 \\u00b1 0.76 | 73.00 \\u00b1 0.76 | 73.45 \\u00b1 0.77 |\\n| social sciences | 77.48 \\u00b1 0.74 | 70.46 \\u00b1 0.80 | 76.70 \\u00b1 0.74 | 76.93 \\u00b1 0.74 | 75.53 \\u00b1 0.75 | 76.76 \\u00b1 0.74 | 75.66 \\u00b1 0.75 | 71.87 \\u00b1 0.77 |\\n| stem | 60.36 \\u00b1 0.83 | 51.32 \\u00b1 0.86 | 58.67 \\u00b1 0.84 | 58.61 \\u00b1 0.84 | 57.72 \\u00b1 0.84 | 58.77 \\u00b1 0.84 | 58.14 \\u00b1 0.84 | 56.49 \\u00b1 0.85 |\\n\\nThese findings demonstrate the effectiveness of our approach even with generator models that are smaller or similar in size to the base model, underscoring the method's general applicability.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thanks & Initial Response to Reviewer LvUc (W1 -- Part 2)\", \"comment\": \"Here're the RULER results. 
Compared to baselines where we zero-shot rope scale Llama-3.1-8b-instruct to 350K and 650K context length, our models outperformed the baselines on long context tasks.\\n\\n| Tokens | Llama-3.1-8b-instruct-zero-shot | 350K-qwen-generator | 350K-llama-generator |\\n|---------|----------------|---------------------|-----------------------|\\n| 8192 | 90.73 | 91.05 | 92.65 |\\n| 16384 | 87.36 | 89.44 | 88.23 |\\n| 32768 | 84.01 | 86.85 | 85.41 |\\n| 65536 | 77.81 | 84.87 | 83.14 |\\n| 131072 | 72.68 | 81.99 | 83.04 |\\n| 262144 | 66.44 | 78.73 | 77.99 |\\n\\n| Tokens | Llama-3.1-8b-instruct-zero-shot | 650K-qwen-generator | 650K-llama-generator |\\n|---------|----------------|---------------------|-----------------------|\\n| 8192 | 90.73 | 91.01 | 92.35 |\\n| 16384 | 87.36 | 90.29 | 90.79 |\\n| 32768 | 84.01 | 87.60 | 86.15 |\\n| 65536 | 77.81 | 83.33 | 84.42 |\\n| 131072 | 72.68 | 79.10 | 80.49 |\\n| 262144 | 66.44 | 77.56 | 78.56 |\\n| 524288 | 62.53 | 72.72 | 70.65 |\\n\\n\\n\\n\\nHere are the evaluation results on MMLU -- we can see that our model's general knowledge and short context capabilities are preserved. \\n\\n| Category | LLaMA-3.1-8B-Instruct | Gradient-AI-Model | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen |\\n|-------------------|------------------------|-------------------|----------------|----------------|----------------|---------------|---------------|---------------|\\n| mmlu | 68.21 \\u00b1 0.37 | 60.48 \\u00b1 0.39 | 66.99 \\u00b1 0.38 | 66.74 \\u00b1 0.38 | 65.93 \\u00b1 0.38 | 67.33 \\u00b1 0.38 | 65.78 \\u00b1 0.38 | 64.60 \\u00b1 0.38 |\\n| humanities | 64.23 \\u00b1 0.67 | 55.75 \\u00b1 0.69 | 62.32 \\u00b1 0.67 | 61.38 \\u00b1 0.68 | 60.57 \\u00b1 0.68 | 62.81 \\u00b1 0.67 | 59.68 \\u00b1 0.68 | 59.45 \\u00b1 0.68 |\\n| other | 73.03 \\u00b1 0.77 | 67.04 \\u00b1 0.82 | 72.90 \\u00b1 0.77 | 73.03 \\u00b1 0.76 | 72.87 \\u00b1 0.76 | 73.51 \\u00b1 0.76 | 73.00 \\u00b1 0.76 | 73.45 \\u00b1 0.77 |\\n| social sciences | 77.48 \\u00b1 0.74 | 70.46 \\u00b1 0.80 | 76.70 \\u00b1 0.74 | 76.93 \\u00b1 0.74 | 75.53 \\u00b1 0.75 | 76.76 \\u00b1 0.74 | 75.66 \\u00b1 0.75 | 71.87 \\u00b1 0.77 |\\n| stem | 60.36 \\u00b1 0.83 | 51.32 \\u00b1 0.86 | 58.67 \\u00b1 0.84 | 58.61 \\u00b1 0.84 | 57.72 \\u00b1 0.84 | 58.77 \\u00b1 0.84 | 58.14 \\u00b1 0.84 | 56.49 \\u00b1 0.85 |\\n\\n\\nThese findings demonstrate the effectiveness of our approach even with generator models that are smaller or similar in size to the base model, underscoring the method's general applicability.\"}", "{\"title\": \"Thanks & Initial Response to Reviewer L4p6 (W3 -- Part 1 )\", \"comment\": \"Thank you for this insightful comment. We agree that exploring whether smaller or weaker models can effectively generate synthetic data and assessing how the dependency on model size impacts performance is important to validate the robustness of our approach. **To address this, we conducted additional experiments using two alternative generator models: Llama-3.1-8b-Instruct and Qwen-2.5-7b-Instruct**. Given the limited time, we trained models on the synthetic datasets produced by these generators, with a context length up to 650K tokens. To showcase the effectiveness of our synthetic data generation pipeline, we also introduced a stronger baseline: the Gradient AI 1M model (gradientai/Llama-3-8B-Instruct-Gradient-1048k). 
This model was trained directly on real long-context datasets with lengths exceeding 1 million tokens.\\n\\nHere are the evaluation results on InfiniteBench (downstream tasks with around 100K context length). As we can see, the models trained using Llama-3.1-8b-Instruct and Qwen-2.5-7b-Instruct as generators surpass the baseline Llama-3.1-8b-Instruct and is way better than the gradient 1M model. \\n\\n| Task | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen | gradient-ai-model | llama-3.1-8b-instruct |\\n|--------------------|----------------|----------------|----------------|---------------|---------------|---------------|-------------------|-----------------------|\\n| passkey | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |\\n| number_string | 99.04 | 100.00 | 100.00 | 99.76 | 100.00 | 100.00 | 99.33 | 95.33 |\\n| kv_retrieval | 85.47 | 89.33 | 42.14 | 89.52 | 85.33 | 52.66 | 13.33 | 42.66 |\\n| longbook_sum_en | 25.68 | 26.85 | 26.64 | 26.97 | 27.70 | 26.74 | 17.02 | 27.63 |\\n| longbook_qa_en | 33.39 | 35.67 | 33.37 | 32.30 | 29.55 | 29.67 | 15.84 | 24.83 |\\n| longbook_choice | 58.00 | 60.66 | 66.00 | 63.33 | 61.33 | 64.66 | 61.33 | 68.00 |\\n| longdialogue_qa | 19.50 | 14.66 | 20.00 | 27.33 | 21.33 | 23.33 | 4.00 | 16.66 |\\n| math_find | 36.66 | 32.66 | 35.33 | 30.00 | 34.66 | 38.00 | 26.66 | 35.33 |\\n| **Average** | **57.22** | **57.48** | **52.94** | **58.65** | **57.49** | **54.38** | **42.19** | **51.31** |\\n\\n\\nHere are the evaluation results on LongBench (downstream tasks with around 10K context length). As we can see, our models' performance are preserved on short-to-medium context lengths and way surpasses the results of the gradient model. \\n| Task | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen | gradient-ai-model | llama-3.1-8b-instruct |\\n|--------------------|-----------------------|-----------------------|-----------------------|----------------------|----------------------|----------------------|--------------------|------------------------|\\n| single-document | 46.48 | 46.64 | 46.53 | 46.20 | 46.70 | 46.28 | 30.75 | 46.91 |\\n| multi-document | 38.69 | 38.75 | 37.54 | 40.76 | 41.90 | 39.31 | 12.45 | 41.45 |\\n| summarization | 25.28 | 25.10 | 24.68 | 25.05 | 24.83 | 24.90 | 21.72 | 26.10 |\\n| few-short learning | 61.56 | 62.79 | 60.50 | 61.92 | 61.56 | 60.69 | 59.70 | 63.48 |\\n| synthetic tasks | 66.17 | 67.75 | 66.00 | 67.11 | 67.60 | 67.10 | 55.50 | 67.48 |\\n| **Average** | **47.23** | **47.72** | **46.20** | **47.95** | **47.97** | **47.00** | **35.89** | **48.11** |\\n\\nDue to the word limit, we will present the results on MMLU and RULER benchmarks in another comment.\"}", "{\"title\": \"Follow-Up Response to Reviewer L4p6\", \"comment\": \"Thank you for your updated evaluation and for raising the score, which affirms the innovation and contributions of our work. We are especially grateful for your recognition of the improvements in our presentation and the value of our approach. Your feedback has been instrumental in enhancing the quality of our manuscript, and we deeply appreciate your support!\"}", "{\"title\": \"Thanks & Initial Response to Reviewer LvUc (W1 -- Part 1)\", \"comment\": \"Thank you for this insightful comment. 
We agree that exploring whether smaller or weaker models can effectively generate synthetic data and assessing how the dependency on model size impacts performance is important to validate the robustness of our approach. **To address this, we conducted additional experiments using two alternative generator models: Llama-3.1-8b-Instruct and Qwen-2.5-7b-Instruct**. Given the limited time, we trained models on the synthetic datasets produced by these generators, with a context length up to 650K tokens. To showcase the effectiveness of our synthetic data generation pipeline, we also introduced a stronger baseline: the Gradient AI 1M model (gradientai/Llama-3-8B-Instruct-Gradient-1048k). This model was trained directly on real long-context datasets with lengths exceeding 1 million tokens.\\n\\nHere are the evaluation results on InfiniteBench (downstream tasks with around 100K context length). As we can see, the models trained using Llama-3.1-8b-Instruct and Qwen-2.5-7b-Instruct as generators surpass the baseline Llama-3.1-8b-Instruct and is way better than the gradient 1M model. \\n\\n| Task | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen | gradient-ai-model | llama-3.1-8b-instruct |\\n|--------------------|----------------|----------------|----------------|---------------|---------------|---------------|-------------------|-----------------------|\\n| passkey | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |\\n| number_string | 99.04 | 100.00 | 100.00 | 99.76 | 100.00 | 100.00 | 99.33 | 95.33 |\\n| kv_retrieval | 85.47 | 89.33 | 42.14 | 89.52 | 85.33 | 52.66 | 13.33 | 42.66 |\\n| longbook_sum_en | 25.68 | 26.85 | 26.64 | 26.97 | 27.70 | 26.74 | 17.02 | 27.63 |\\n| longbook_qa_en | 33.39 | 35.67 | 33.37 | 32.30 | 29.55 | 29.67 | 15.84 | 24.83 |\\n| longbook_choice | 58.00 | 60.66 | 66.00 | 63.33 | 61.33 | 64.66 | 61.33 | 68.00 |\\n| longdialogue_qa | 19.50 | 14.66 | 20.00 | 27.33 | 21.33 | 23.33 | 4.00 | 16.66 |\\n| math_find | 36.66 | 32.66 | 35.33 | 30.00 | 34.66 | 38.00 | 26.66 | 35.33 |\\n| **Average** | **57.22** | **57.48** | **52.94** | **58.65** | **57.49** | **54.38** | **42.19** | **51.31** |\\n\\n\\nHere are the evaluation results on LongBench (downstream tasks with around 10K context length). As we can see, our models' performance are preserved on short-to-medium context lengths and way surpasses the results of the gradient model. \\n| Task | 180K-llama-gen | 350K-llama-gen | 650K-llama-gen | 180K-qwen-gen | 350K-qwen-gen | 650K-qwen-gen | gradient-ai-model | llama-3.1-8b-instruct |\\n|--------------------|-----------------------|-----------------------|-----------------------|----------------------|----------------------|----------------------|--------------------|------------------------|\\n| single-document | 46.48 | 46.64 | 46.53 | 46.20 | 46.70 | 46.28 | 30.75 | 46.91 |\\n| multi-document | 38.69 | 38.75 | 37.54 | 40.76 | 41.90 | 39.31 | 12.45 | 41.45 |\\n| summarization | 25.28 | 25.10 | 24.68 | 25.05 | 24.83 | 24.90 | 21.72 | 26.10 |\\n| few-short learning | 61.56 | 62.79 | 60.50 | 61.92 | 61.56 | 60.69 | 59.70 | 63.48 |\\n| synthetic tasks | 66.17 | 67.75 | 66.00 | 67.11 | 67.60 | 67.10 | 55.50 | 67.48 |\\n| **Average** | **47.23** | **47.72** | **46.20** | **47.95** | **47.97** | **47.00** | **35.89** | **48.11** |\\n\\nDue to the word limit, we will present the results on MMLU and RULER benchmarks in another comment.\"}" ] }
BkvjVqk461
A Modified Proximal-Perturbed Lagrangian for Non-Convex Non-Smooth Representatives of Fairness Constraints
[ "Sang Bin Moon", "Jong Gwang Kim", "Andrés C. Castillo J.", "Christopher Brinton", "Abolfazl Hashemi" ]
We study classification problems under fairness constraints and introduce an algorithmic framework designed to prevent discrimination against different groups. These problems are often reformulated as continuous constrained optimization problems and are typically solved using continuous relaxations (surrogates) of the fairness constraints. However, many current algorithms do not provide theoretical guarantees, which is possibly due to the resulting fairness constraints being both non-convex and non-smooth. We propose a novel primal-dual algorithm, based on a newly developed Lagrangian, that converges to a stationary solution of the reformulated problem. Our algorithm is not only efficient and robust, but it also enjoys strong performance guarantees on the fairness of its solutions. Furthermore, experimental results demonstrate that our algorithm is highly effective in terms of computational cost and fairness guarantees, outperforming related algorithms that use regularization (penalization) techniques and/or standard Lagrangian relaxation.
[ "fairness constraints", "non-convexity", "non-smoothness", "primal-dual method" ]
https://openreview.net/pdf?id=BkvjVqk461
https://openreview.net/forum?id=BkvjVqk461
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nH65UpNMkb", "kG5aGOMzH1", "EI1GcrCHs7", "BmrvBNEmao", "8ystBRn58M", "56kyHaxzX5" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730597138638, 1730583882251, 1730779591190, 1730285401133, 1732569813698, 1730399625695 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2204/Reviewer_3rEU" ], [ "ICLR.cc/2025/Conference/Submission2204/Reviewer_mPD8" ], [ "ICLR.cc/2025/Conference/Submission2204/Reviewer_3yQ8" ], [ "ICLR.cc/2025/Conference/Submission2204/Reviewer_ykaY" ], [ "ICLR.cc/2025/Conference/Submission2204/Authors" ], [ "ICLR.cc/2025/Conference/Submission2204/Reviewer_A6Qa" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, a proximal-perturbed Lagrangian formulation was introduced to solve the problem of classification under relaxed fairness constraints. The method comes with a set of convergence and fairness guarantees provable under proper conditions. The numerical performance of the proposed algorithm has been evaluated on several non-convex non-smooth fairness constrained logistic regression tasks using benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The idea of applying the proximal-perturbed Lagrangian formulation of Kim (2021) to fairness constrained classification is novel and interesting. The paper is generally well organized and clearly presented.\", \"weaknesses\": \"1. Concerning Theorem 1, the running-average stationarity residual is claimed to converge asymptotically. However, it was commented below the theorem that its proof actually suggests an $O(1/T)$ sublinear rate of convergence under the same conditions. I did not have a chance to check the proof in full details, but if this is really the case, then it is suggested to revise Theorem 1 to state the $O(1/T)$ convergence rate directly, as this would provide a stronger and more informative result. Also, this would help align the theorem statement with both the proof and the comments that follow.\\n\\n2. In regard with Assumption 1 which requires the existence of primal-dual solutions satisfying KKT condition, it is not quite clear to me why such a condition should be reasonable for the non-convex and non-smooth Lagrangian formulation considered in this work. It is suggested to provide more detailed discussions on the validness of this assumption in the context.\", \"questions\": \"1. Can the result in Theorem 1 be presented in a non-asymptotic way as commented below the theorem (and as claimed in the concluding remarks as well)?\\n\\n2. How to verify Assumption 1 (or strong duality) for the considered non-convex and non-smooth proximal-perturbed Lagrangian formulation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies classification problems involving the formulation of fairness-constrained empirical risk (loss) minimization. The authors propose a modified proximal-perturbed Lagrangian based alternating direction algorithm (Algorithm 1) to the formulation, and present the convergence analysis. In the numerical experiments, when given a linear model, the authors conduct experiments that minimize the logistic empirical loss under demographic parity and equalized odds constraints. 
When given a neural network with RELU activations in the hidden layers, the authors conduct experiments that minimize the hinge empirical loss under the intersectional group fairness constraints.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The authors present plenty of fairness notions in Section 2 and correspondingly conduct multiple numerical experiments in Section 5.\", \"weaknesses\": \"The algorithmic framework proposed in Section 3 is weak. See Questions for the details.\", \"questions\": \"i) According to line 165-167 on page 4, $G=(G_1,\\\\dots,G_m):\\\\mathbb{R}^d\\\\rightarrow\\\\mathbb{R}^m$ is a non-convex non-smooth mapping. In Line 3 in Algorithm 3 from line 225-226 on page 5, the objective function of the subproblem on $\\\\theta_{k+1}$ directly involves $\\\\langle\\\\lambda_k,G(\\\\theta)\\\\rangle$, which makes the subproblem itself a possible non-convex non-smooth problem. How to obtain $\\\\theta_{k+1}$ in every iteration of Algorithm 1?\\n\\nii) $\\\\theta_{k+1}$ in (9) on page 5 is used in (22) and (24) on page 15 in the convergence analysis. If we replace $\\\\langle\\\\lambda_k,G(\\\\theta)\\\\rangle$ by $\\\\langle\\\\nabla G(\\\\theta_k)^\\\\top\\\\lambda_k,\\\\theta\\\\rangle$ in the subproblem, then (9) changes to\\n$$\\\\theta_{k+1}=\\\\text{arg}\\\\min_{\\\\theta\\\\in\\\\Theta}\\\\Big(\\\\langle\\\\nabla_{\\\\theta}\\\\mathcal{L}_{\\\\alpha\\\\beta}(\\\\theta_k,u_k,z_k,\\\\lambda_k,\\\\mu_k),\\\\theta\\\\rangle+\\\\frac{1}{2\\\\eta}||\\\\theta-\\\\theta_k||^2\\\\Big)$$\\n\\n$$\\\\quad\\\\quad=\\\\Pi_\\\\Theta\\\\Big(\\\\theta_k-\\\\eta\\\\nabla_{\\\\theta}\\\\mathcal{L}_{\\\\alpha\\\\beta}(\\\\theta_k,u_k,z_k,\\\\lambda_k,\\\\mu_k)\\\\Big).$$\\n\\nWe then need a trivial bound on the sequence of $\\\\lambda_k$ to obtain the Lipschitz constant of $\\\\nabla_{\\\\theta}\\\\mathcal{L}_{\\\\alpha\\\\beta}(\\\\theta_k,\\\\cdots)$ for all $k$ and adopt the constant step-size $\\\\eta$ on $\\\\theta$. The authors directly assume the boundedness of the sequence of $\\\\lambda_k$ in Assumption 5 and say that this is standard in the optimization literature (line 271-274 on page 6). However, the boundedness of the dual sequence is proved as a result by assuming certain constraint qualifications, e.g., Theorem 5 (b) in Boob et al. (2023). How to justify Assumption 5?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors investigate classification problems subject to fairness constraints and propose an algorithmic framework. They specifically focus on continuous constraints with bounded subgradients. The authors begin by reformulating the tractable continuous constrained optimization problem using perturbation variables and slack variables, drawing inspiration from the work of Bertsekas. They then introduce a variant of the corresponding proximal-perturbed Lagrangian, substituting the perturbation variable\\nz with a function of the dual variables. In practice, this approach addresses a specific case of the original optimization problem.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"They propose a proximal-perturbed Lagrangian frame to solve a constrained optimization and provide convergence guarantees.\", \"weaknesses\": \"1. $z(\\\\lambda,\\\\mu) =\\\\frac{\\\\lambda-\\\\mu}{\\\\alpha}$ is a special case.\\n2. 
In Equations (9) and (10), 2 second-order Taylor expansions are used to replaced the differential functions. Why you do not use the the differential functions themselves.\\n3. In this paper, the authors try to solve the tractable continuous constrained optimization problem (3). The result provided in Theorem 2 seems not so meaningful.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a new algorithmic approach for training fair classifiers, leveraging a general primal-dual optimization algorithm designed to approximate the saddle points of a Lagrangian function.\\n\\nThey demonstrate that common fair classification problems can be formulated such that an optimal fair classifier can be approximated using their optimisation algorithm. \\n\\nMoreover, they provide theoretical guarantees for the proposed algorithms and present numerical experiments that illustrate the competitiveness of their approach compared to existing methods in the literature.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Up to my knowledge, the proposed approach is novel.\\nMoreover, it is backed by both theoretical and empirical evidence.\", \"weaknesses\": [\"The paper would benefit from additional explanations of the proposed approach in Section 3 (see Questions). The current description does not provide enough clarity to help the reader develop an intuitive understanding of the framework.\", \"I found the Main Results section difficult to follow, and I am not yet convinced of the soundness of the claims (see Questions below).\", \"The positioning of the paper is unclear: while the proposed approach has applications in fairness, it is not specific to fairness problems but rather a general optimisation framework. In this regard, I find the title, abstract, and introduction somewhat misleading.\"], \"questions\": [\"The authors claim that the assumptions are standard in the optimization literature; however, after reviewing some of the cited papers, I could not find the exact same assumptions. Could the authors provide more precise references to the relevant literature? In particular for Assumption 5.\", \"Why is the problem in Eq. (3) tractable? As stated some lines above the constraint is non-convex and non-smooth. What do the authors mean by tractable?\", \"Could the authors explain the different terms in the Proximal-Perturbed Lagrangian in Eq. (6) ? In particular, why are the penalty and proximal terms added to the regular Lagrangian?\", \"Could the authors explain the different choices for the update rules (and the chosen order)? For instance, why is the parameter $\\\\theta_k$ updated as in Eq. (9)?\", \"The authors state that they \\\"show that the generated primal-dual iterates converge to a KKT point of problem (3)\\\". The KKT conditions for problem (3) are given in (4) but I don't see the link with the obtained results in Theorem 1 and 2. Could the authors clarify this point?\", \"Unlike claimed by the authors, there is no rate of convergence in the statement of Theorem 1 (though one can be obtained from the proof). The authors should be more rigorous when describing (or stating) Theorem 1. Same comment applies to Remark 1.\", \"How does Theorem 2 guarantee the feasability guarantees for Algorithm 1?\", \"I cannot make the link between Lemma 4, Theorem 1 and Remark 1. 
Could the authors explain how they go form a bound on the norm of the sub-gradients to a bound on the iterates using Lemma 4?\", \"There is a running typo in the paper: the iteration number of the algorithm is sometimes denoted by $k$ and other times by $t$ (e.g., when the authors define $\\\\delta_k = \\\\kappa \\\\cdot (t+1)^{-1}$.\"], \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"The main results of the paper already appear in a different paper on arXiv that was pre-published some months ago.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a primal-dual algorithm for fair classification based on a modified proximal-perturbed Lagrangian. The main change from previous approaches is that they apply the algorithm from Kim (2021; 2023) to fair classification. They claimed higher computational efficiency. Experiments were provided on UCI datasets.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Pros:\\n\\nThe context and literature of fairness are well-referenced.\\n\\nExperiments were provided including real data.\", \"weaknesses\": \"Cons:\\n1. The main difference between this work and previous work is that they reformulate the fair classification in (1) into their reformulated equation (3) by using a surrogate differentiable $\\\\epsilon$-fairness instead of the indicator function in Definition 1. This step is fine, but the fairness classification is stemming from the equations in demographic parity/equalized odds and is replaced by the weaker surrogate in (2) because it can be solved by gradient-based method. Even if the proposed algorithm can solve their reformulation (3), there are no theoretical guarantees wrt to the original parity of the resulting solution.\\n2. Their main motivation is that this approach can directly handle the original non-smooth non-convex fairness-constrained problem. However, the reviewer is not convinced by this claim, as the paper also uses surrogate functions for the non-differentiable fairness constraints. If the proposed algorithm cannot solve the original non-convex problem exactly (due to the use of surrogates), it is unclear what theoretical or practical advantages it offers compared to the convex relaxations proposed in the literature, e.g., Donini et al., Celis et al., Goel et al.\\n3. The empirical evaluation is lacking. They benchmark with very few baselines compared to the rich literature on fair classification, notably, a solid comparison with the convex relaxation approaches is lacking, which is a central point given the paper's focus on non-convex optimization. More experimental details are needed for fair comparisons and reproducibility, e.g., the paper does not specify the hyperparameter tuning process for baselines, the values of\\u00a0\\u03b5. The absence of tables and standard deviation makes precise comparisons difficult. Furthermore, most of the empirical claims are only wrt CPU time but no thorough evaluation of convergence rates.\", \"questions\": \"1. The proposed algorithm can only handle two protected groups, what about multi-group problems?\\n2. In Eq. 
(2), the sum is over $s$ but no $s$ is found inside the sum.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BksqWM8737
ProteinBench: A Holistic Evaluation of Protein Foundation Models
[ "Fei YE", "Zaixiang Zheng", "Dongyu Xue", "Yuning Shen", "Lihao Wang", "Yiming Ma", "Yan Wang", "Xinyou Wang", "Xiangxin Zhou", "Quanquan Gu" ]
Recent years have witnessed a surge in the development of protein foundation models, significantly improving performance in protein prediction and generative tasks ranging from 3D structure prediction and protein design to conformational dynamics. However, the capabilities and limitations associated with these models remain poorly understood due to the absence of a unified evaluation framework. To fill this gap, we introduce ProteinBench, a holistic evaluation framework designed to enhance the transparency of protein foundation models. Our approach consists of three key components: (i) A taxonomic classification of tasks that broadly encompass the main challenges in the protein domain, based on the relationships between different protein modalities; (ii) A multi-metric evaluation approach that assesses performance across four key dimensions: quality, novelty, diversity, and robustness; and (iii) In-depth analyses from various user objectives, providing a holistic view of model performance. Our comprehensive evaluation of protein foundation models reveals several key findings that shed light on their current capabilities and limitations. To promote transparency and facilitate further research, we publicly release the evaluation dataset, code, a public leaderboard, and a general modular toolkit for further analysis. We intend for ProteinBench to be a living benchmark for establishing a standardized, in-depth evaluation framework for protein foundation models, driving their development and application while fostering collaboration within the field.
[ "Protein foundation model", "benchmark", "protein design", "protein conformation prediction" ]
Accept (Poster)
https://openreview.net/pdf?id=BksqWM8737
https://openreview.net/forum?id=BksqWM8737
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yNi0C8b3RG", "xyfkcdLxdE", "xiLZ9GPdxx", "wlmbam5Kp0", "tZB4tp5RXM", "tQ1aqMNLDr", "rP0rgnrpyo", "rMc2U51PPD", "oX7QzQhiVj", "kJRCqIt2Tk", "jixEr9yltC", "jSVsSv4oPD", "eTvyLRsED6", "eP23qID2jj", "dx7W8Lt7VG", "bphSXMhedC", "bHbBO6uiQk", "aVzch5GXhv", "ZrVncE5l6V", "YTdbQQxHGY", "VYWMKXK81V", "TJBWWBqLKD", "PMLCsZrKZU", "LOff7Mn53p", "GmikK4s2Yx", "GWoQLhKyB0", "GVOKvrCnb1", "G8s4rgjZf4", "FSrjfam8MN", "E91Z1rf4cn", "DhsXNNZQ6G", "DdomvLEjLo", "Bjc6egFKQf", "BGESFHQnkL", "9hRoMKEv5x", "88RU4qsZun", "7fIuSA6Ha5", "75Giq0twmY", "5dc8WnGr2r", "5Ip40otUwC", "1AV5Z1VD00" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision" ], "note_created": [ 1732174691777, 1730587739459, 1732461376041, 1732462551690, 1732178238090, 1732179778544, 1732761448942, 1732656474050, 1732480687278, 1732179649999, 1732174616345, 1732175720400, 1732174255537, 1732180379315, 1732659897351, 1732177464466, 1730465258694, 1732179123916, 1732177210485, 1732178600798, 1732710068185, 1732179427086, 1732177031356, 1732614807645, 1732380250648, 1734682405785, 1730364520382, 1732176202263, 1732175558520, 1732178514020, 1732179732178, 1732179855010, 1732177346832, 1732585739242, 1732177415642, 1732574299258, 1732176150559, 1732694730844, 1732528898650, 1731102645251, 1737524229379 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_bamz" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_hosV" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_bamz" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_jXTi" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_hosV" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_d7D6" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_d7D6" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_d7D6" ], [ "ICLR.cc/2025/Conference/Submission13006/Area_Chair_febm" ], [ 
"ICLR.cc/2025/Conference/Submission13006/Reviewer_d7D6" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Authors" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_d7D6" ], [ "ICLR.cc/2025/Conference/Submission13006/Reviewer_jXTi" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by authors\", \"comment\": \"**[Q3]** Lack of consistency in the training data across models is a limitation that undercuts the one of the main promises of the proposed framework which is standardization of the evaluation of protein foundation evaluation. This may not be an issue in the future as the framework is further developed and more mature.\\n\\n**[A3]** We thank the reviewer for bringing this to our attention. \\n\\n1. In our manuscript, all models for the antibody design task were retrained using the same dataset. This consistency enables a direct comparison of their underlying technical approaches. However, for the other tasks, the training datasets varied, which may affect the comparability of the results.\\n2. Our current benchmarking approach focuses on evaluating existing methods and models at the model layer rather than the technique layer, where training data is considered an integral part of each method's strategy. This approach aligns with other established benchmarks of foundation models that standardize model evaluation rather than isolating technical components. We believe this better serves users by providing insights into real-world model performance.\\n3. we recognize the importance of controlling for training data differences. We envision ProteinBench as an evolving benchmark, and in future iterations, we plan to implement more rigorous controls to account for these differences in training data. This will help provide deeper insights into the impact of training data on model performance.\"}", "{\"summary\": \"I am not sure how to review this manuscript. I have been in the field for over 20 years, so I am an experienced ML/Deep learning researcher, but I do not know much about deep learning for protein design/generation beyond having read blog posts about AlphaFold, ESM, and similar advances. I do know a fair bit about biology (probably more than most ML researchers), but I am still far from an expert in this area. I thus have some high-level opinions about the work, but cannot evaluate the vast majority of the content. I would like my high-level thoughts to be considered, but I cannot evaluate any of the details in the paper, so I have to defer to other expert reviewers to do that. Here are examples of the questions I cannot evaluate: Are the challenges in the benchmark the right ones? Are they comprehensive enough? Are the models evaluated the right ones? Are major ones missing? Were the evals done fairly and correctly? 
All of that I\\u2019ll have to leave for subject matter experts.\\n\\nThat said, here are my high-level thoughts:\\n\\nThe field benefits from benchmarks, and we should reward people who take the time to make them. I thus support publication because from what I can tell, this is a new, needed, helpful benchmark. However, I would not know if other similar benchmarks already exist. \\nIt is also helpful to have a set of leading models tested on such benchmarks. Assuming the set of models is a good one and the evals are well done, that is another contribution worthy of sharing with the community via a publication. That said, I do not know how novel such a comparison is. \\nThe paper does a poor job of explaining almost anything in a way that a non-domain expert could understand. The worst example of this is the challenges in the benchmark. They are extremely superficially described, with a reference to the Appendix presumably doing all the work of explaining what these challenges really are, how they are evaluated, why they matter, why they were chosen, etc. I think more of that belongs in the paper. I\\u2019d rather see a paper arguing for and introducing a benchmark do MUCH more of that motivation and explanation than a slog through the performance on the benchmark of a bunch of models. After all, why do we care how these models perform on problem x before we know what problem x is and why it matters? That said, perhaps domain experts already know these challenges so well that the current level of text is sufficient? I am not in a position to judge, but I think the paper would likely be higher-impact and better if it was more readable by non-insiders. I was hoping to learn a LOT about this area by reading this manuscript, and instead, sadly, I learned basically nothing (except there is a new benchmark). The reason I am still voting for publication despite that is I imagine this paper is quite valuable for insiders, but great papers do better at offering *something* to non-insiders.\", \"minor\": \"You say ProteinBench provides a holistic view. That is too strong. It is virtually impossible to provide a holistic view of such a complex topic. Please switch such language to \\u201cmore holistic\\u201d or similar. \\nYou say benchmarks are crucial or critical for progress. This is a strong claim. I think they are helpful, but progress can be made without them, and they can actually hurt progress too (see Why Greatness Cannot Be Planned), so I recommend a more nuanced statement.\", \"line_181\": \"You say this finding has significant implications for the field, but it is drawn from 2 data points only! Please properly soften the claims.\", \"line_192\": \"\\u201cWe have noticed that\\u2026.\\u201d That\\u2019s a weird way to describe your own work/choices. You sound surprised you didn\\u2019t include more methods? I suggest finding clearer, less confusing language.\", \"note\": \"I am rating this a 6 and not an 8 simply because I do not know enough to evaluate the vast majority of the work. If the other reviewers think it is good, then this 6 should support publication. But I do not want to set a super high score that would override experts with doubts. 
If all experts agree the technical work and novelty are solid, I'd be happy with an 8.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"See main review.\", \"weaknesses\": \"See main review.\", \"questions\": \"See main review.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official comment by Reviewer hosV\", \"comment\": \"I believe you have addressed my concerns on novelty, training stage, protein tasks and metrics, so I have decided to increase my rating.\"}", "{\"title\": \"Agreed. The paper (especially the main text) needs more and better motivation of it's choices\", \"comment\": \"I agree with this reviewer, who made similar points to those I made. To stand the test of time this benchmark needs to motivate and explain the major tests it includes, rather than focus so much energy on the results of current models, which will quickly become outdated as a new better models emerge. The authors seem to have understood the point, but only made most of the changes in the appendix. Thus this reviewer and I agree that most of the important and best parts of this paper are buried in the appendix, rather than being the main text where they should be. For this reason I am leaving my store where it was. This reviewer's comments actually made me think my original score probably was too high, as the paper really isn't doing a very good job of explaining the motivation for each of the challenges in this benchmark.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We greatly appreciate your constructive feedback regarding the consistency of the training dataset and the design rationale of the benchmark. In response, we have provided additional information for your major concerns: First, we recognize the importance of controlling for training data differences and plan to implement more rigorous controls to account for differences in training data. Second, we restructured the manuscript and provided additional discussion about the design rationale of the benchmark.\\n\\n**[Q1]** Lack of Standardized Training Data: Differences in training datasets among models hinder direct comparison. Standardizing datasets would improve the ability to compare model architectures and may be essential for achieving fairer assessments within ProteinBench. Could you plan to implement controls to account for differences in training data?\\n\\n**[A1]** We thank the reviewer for bringing this to our attention.\\n\\n1. **Our current benchmarking approach focuses on evaluating existing methods and models at the model layer rather than the technique layer**, where training data is considered an integral part of each method's strategy. This approach aligns with other established foundation model benchmarks that standardize model evaluation rather than isolating technical components. We believe this better serves users by providing insights into real-world model performance.\\n\\n2. In our manuscript, **all models for the antibody design task were retrained using the same dataset**. This consistency enables a direct comparison of their underlying technical approaches. However, for the other tasks, the training datasets varied, which may affect the comparability of the techniques.\\n\\n3. We recognize the importance of controlling for training data differences. 
We envision ProteinBench as an evolving benchmark, and in future iterations, **we plan to implement more rigorous controls to account for these differences in training data.** This will help provide deeper insights into the impact of training data on model performance.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q5]** Motif Scaffolding: For the Motif Scaffolding task, which evaluation metrics are being used? The reviewer is confused if RMSD metrics are being used or also designability metrics are being used.\\n\\n**[A5]** Thanks for the question. We measure motif-scaffolding in terms of both motif accuracy and overall designability. For motif accuracy, we calculate RMSD between the input motif structure and the corresponding region of the designed protein to assess whether the motif structure is preserved (motifRMSD < 1.0). As for the overall designability, we use scTM score > 0.8 as being designable. We have accordingly elaborated on motif-scalffolding evaluation in the appendix.\"}", "{\"title\": \"Further response to reviewer d7D6\", \"comment\": \"We sincerely thank the reviewer for raising the score. Your perceptive remarks and thoughtful feedback have been invaluable, not only in improving the quality of our work but also in guiding the future optimization of the protein benchmark.\\n\\nAs previously discussed, reader preferences are diverse. Protein conformation is an emerging field of research that currently lacks comprehensive comparative analysis. Our work holds the potential to be beneficial to researchers engaged in this area at the present stage. \\n\\nTo better meet the needs of diverse readers, we are committed to preparing an extended version of the paper, which will fully incorporate your suggestions, integrating task definitions, technical details, and performance evaluations directly into the main text for greater clarity and accessibility.\\n\\nWe are truly appreciative of the reviewer's comment regarding the sampling temperature. Future evaluations and in-depth analyses of the optimal sampling temperatures for different methods will be conducted.\"}", "{\"comment\": \"The authors provided appropriate, and detailed responses and addressed this reviewer\\u2019s concerns. I'll update my rating appropriately. Thanks\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"First of all, many thanks to the reviewer for increasing the score. We also understand the reviewer's concern about the structure of the manuscript. We rationalize the organization of the paper for the following reasons.\\n\\n1. As highlighted in Appendix A, one of the most important contributions of ProteinBench is the **first comprehensive benchmark for protein foundation models**, differentiating this work from the existing benchmarks targeting specific tasks. We have so much content that should be presented in the limited pages of the manuscript. In our revised version, we aim to present the **general comprehensive landscape** for all 8 different protein tasks. The main manuscript is organized to deliver the most important general information for all tasks by providing high-level design logic summarized in Table 1 and comparative studies of existing state-of-the-art models' performance. **To improve the reader experience, we provide the links in Table 1 in the updated version to quickly navigate to valuable details in the appendix. 
The general landscape allows the users of the benchmark, who are familiar with some of the tasks, to quickly find the methods they would like to use and compare with.**\\n\\n2. We acknowledge the importance of providing detailed information about datasets and evaluation metrics and upholding such standards for testing future models. **At the same time, we recognize that comparative studies among models are a valuable contribution at the current stage of the field and should be included in the main text. These insights could be essential for guiding the current and future development of models. This is especially critical in emerging areas like protein conformation prediction, where no standardized comparisons of existing models have been conducted to evaluate their performance across different tasks.**\\n\\n3. Due to the **page limitations**, we have to put all the details in the appendix. **We understand the details are important to the users, especially the users who are not protein experts to understand the tasks. To help those users to grasp each task from scratch, we provided as much detailed information as we could.** If we put the information for all protein tasks in the main part, users may get lost in the details. \\n\\n4. We thank the reviewer for providing a new protein interaction benchmark PINDER for our reference (https://www.biorxiv.org/content/10.1101/2024.07.17.603980v2) and we recruited this paper in the revised version. This benchmark provides a new dataset specifically targeting the evaluation of complex structure prediction. We want to emphasize the contribution of our benchmark is comprehensively assessing protein foundation models and their ability to understand multiple modalities.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q3]** My main concern is related to how the paper is structured and the information it contains. From Page 3 the task results are presented with a small definition and comparison between methods. I see that the authors presented the creation of unified datasets as a current limitation and future work by the community, but the reviewer thinks that the manuscript should contain more information about the thought process about creating the benchmark, with the thought process about the metrics, the impact of methods using different datasets, tasks in which this is critical, etc. These are crucial for the community to adopt the benchmark and trust the results that are being presented and compared. In its current form, it is hard to understand these intrinsic details that are important for protein-related tasks.\\n\\n**[A3]** In our revised version, we provide a manuscript structure where the main manuscript presents key evaluation results, enabling readers to quickly grasp the overall landscape of tasks and performance of protein foundation models. For researchers interested in detailed task motivations, thoughts on metric selections, dataset considerations, implementation information, and result analysis, we have placed the detailed information in the appendix.\\n\\nSpecifically, we have reorganized the appendix (Section B) using a task-centric approach. For each task, we now provide comprehensive details including:\\n1. Task definitions\\n2. Metric justification and descriptions (including the thought process of metrics)\\n4. Dataset specifications\\n5. Supplementary results\\n6. 
Potential discussions to provide more insights\\n\\nTo facilitate navigation, we added direct links in each task section that connect to their corresponding detailed explanations in the appendix. The reorganized information is now highlighted in blue for improved visibility. Additionally, we will also release a task-centered arxiv paper. This format allows researchers to access task-specific information efficiently.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2]** The relevance and insights of the results could replace the explanations of the results. For example, Section 2.2.6 Antibody Design, instead of listing the outperforming models for evaluation, which is provided in Table 6, authors could discuss the relevance of these metrics along with the insights gained from the results similar to the one that they provided in the last paragraph.\\n\\n**[A2]** We recognize the importance of providing insights for the results in our benchmark. In the revised manuscript, we have expanded the further analysis and insights gained from the results, adding this information in Appendix B for each task in the bulletin titled **[Extended Explanations and Discussion on Model Performance]**. Two examples of inverse folding and antibody design is attached here.\", \"inverse_folding\": \"1. Dataset Characteristics Impact Performance: Our evaluation spans two dataset types: high-quality experimental structures (CASP/CAMEO) and computational de novo structures containing inherent noise. Models performing well on the more challenging de novo structures demonstrate superior robustness, as they must overcome structural uncertainties while maintaining design accuracy.\\n2. Training Strategy Influences Robustness: ProteinMPNN's approach of incorporating backbone noise during training proves highly effective. Our results confirm their findings that increased training noise correlates with improved model robustness. This is evidenced by ProteinMPNN's superior performance in de novo backbone-based sequence design, validating backbone noise augmentation as an effective strategy for enhancing model resilience.\", \"antibody_analysis\": \"The involved models differ mainly in modeling methods and initialization methods. \\n\\nHERN stands out as the only autoregressive generative model, excelling in sequence naturalness by effectively capturing amino acid dependencies. Unlike the non-autoregressive methods, like MEAN and dyMEAN, which fail in modeling dependencies between residues (reducing to focus on marginal distributions at each CDR position and thus lost the sequence specificity towards different antigens), HERN\\u2019s explicit modeling of inter-residue relationships within CDR-CDR and CDR-FR contributes significantly to sequence rationality;\\n\\nMEAN achieved the best RMSD among all methods, which we attribute to its unique structural initialization method. Unlike diffusion-based methods that use noise from N(0,I) for structure initialization, MEAN performs a linear structural initialization between FR residues that connect the CDR regions. 
This initialization method potentially provides a better starting point for structure generation, and also ensures that the residues at both ends of the CDR are not too far from their actual positions;\\n The diffusion-based methods (DiffAb & AbDPO) generally perform better in generating more reasonable structures, with better C-N bond length, less clashes and lower energy, which demonstrate the advantage of diffusion models in structural modeling.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q5]** Minor Issues:\\n- The description of Table 12 does not align with the data presented in the table\\n- Items (3) and (4) in the conclusion are the same.\\n- Figure 2 is too small.\\n\\n**[A5]** We thank the reviewer for bringing this concern to our attention. We corrected these typo errors in the revised manuscript.\\n1. We have corrected the description of Table 12. \\n2. We removed the repeated conclusion.\\n3. Figure 2 is enlarged.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We greatly appreciate your constructive feedback regarding the manuscript's organization and depth of analysis. In response, we have implemented two major improvements: First, we have thoroughly restructured the manuscript and appendix using a task-centric approach, ensuring clearer organization and logical flow of information. Second, we enhanced our analytical depth by providing comprehensive discussions and deeper insights for each protein design task. We believe these revisions substantially strengthen the manuscript and welcome your assessment of these changes.\\n\\n**[Q1]** Given that the authors have made an extensive amount of experimental study, some reorganization of the paper could strengthen the delivery of the contributions of the paper. Including clear and complete definitions, explanations, and relevance of the metrics would be helpful. \\n\\n**[A1]** We thank the reviewer for their constructive feedback on the manuscript's organization. To enhance clarity and accessibility, we have implemented several structural improvements:\\n1. We have reorganized the appendix (Section B) using a task-centric approach. For each task, we now provide comprehensive details including:\\n - Task definitions\\n - Metrics justification and descriptions \\n - Dataset specifications\\n - Supplementary results \\n - Extended discussions to provide more insights\\n2. To facilitate navigation, we have added direct links in each task section that connect to their corresponding detailed explanations in the appendix.\\n3. The reorganized information is now highlighted in blue for improved visibility.\\n4. We can offer a task-centered organization of the manuscript for our leaderboard and arXiv versions, which are not subject to page limits.\"}", "{\"title\": \"We are pleased to hear any feedback from all reviewers before discussion deadline\", \"comment\": \"Dear Reviewers, ACs, and SACs,\\n\\nWe would like to sincerely thank all the reviewers for their efforts and valuable suggestions. We have made every effort to address the reviewers' concerns, including the following:\\n\\n1. Reorganized the structure of the manuscript.\\n2. Provided implementation details for the benchmark.\\n3. Expanded the discussion and insights for each task.\\n4. Included rational explanations for non-expert readers.\\n5. Corrected minor presentation issues.\\n\\nWe appreciate everyone's time and effort in providing insightful feedback, which has greatly helped us improve our manuscript. 
We have revised the paper to incorporate many of the reviewers' suggestions and comments, and we are truly grateful for your contributions.\\n\\nWe welcome any further feedback during the discussion phase!\\n\\nThank you once again!\\n\\nBest regards,\\nAuthors\"}", "{\"comment\": \"Thank you for raising your rating. We're glad that our rebuttal has fully addressed your concerns. Your thoughtful comments and feedback have been invaluable in helping us improve the paper.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q5]** Note: I am rating this a 6 and not an 8 simply because I do not know enough to evaluate the vast majority of the work. If the other reviewers think it is good, then this 6 should support publication. But I do not want to set a super high score that would override experts with doubts. If all experts agree the technical work and novelty are solid, I'd be happy with an 8.\\n\\n**[A5]** Many thanks for the reviewer's feedback. We appreciate all the comments and understand the perspective. We hope that our revisions address the concerns and provide clarity on the technical work and novelty of our study. We look forward to the insights of the other reviewers and hope for a positive evaluation.\"}", "{\"summary\": \"The paper introduces ProteinBench, an evaluation framework aimed at standardizing and broadening the assessment of protein foundation models. These models have gained prominence due to advancements in protein prediction and design, covering tasks from structural prediction to conformational dynamics. ProteinBench aims to address gaps in current evaluation practices by introducing a comprehensive, multi-dimensional approach that evaluates models based on quality, novelty, diversity, and robustness. The authors aim for ProteinBench to become a continually updated benchmark to guide research and collaboration in protein modeling.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Novel Evaluation Framework: The paper proposes a well-structured framework that standardizes evaluation for protein foundation models, addressing a significant need in the field. By evaluating on multiple fronts\\u2014quality, novelty, diversity, and robustness\\u2014ProteinBench gives a well-rounded assessment of model performance.\", \"task_diversity_and_practical_relevance\": \"ProteinBench is inclusive of various protein modeling tasks, including antibody design and multi-state prediction, which are highly relevant to real-world applications in pharmaceuticals and bioengineering.\", \"user_centered_analysis\": \"The framework is flexible, accommodating different user needs (e.g., evolutionary fidelity vs. novelty), which makes the tool versatile for diverse research goals. This feature improves the applicability of model results to specific scientific or engineering contexts.\", \"weaknesses\": \"Lack of Standardized Training Data: Differences in training datasets among models hinder direct comparison. Standardizing datasets would improve the ability to compare model architectures and may be essential for achieving fairer assessments within ProteinBench.\", \"questions\": \"1. I am not very familiar with AI for Protein. Could you provide with the reason why you separate whole protein tasks to these 8 parts?\\n\\n2. Could you give some additional metric about tasks? 
While ProteinBench is a strong foundation, additional tasks and metrics (especially regarding dynamics and multi-modal integrations) could improve its scope, making it more universally applicable in protein science.\\n\\n3. Could you plan to implement controls to account for differences in training data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We greatly appreciate the reviewer's constructive feedback regarding the paper structure and implementation details of the benchmark. In response, we have implemented two major improvements: First, we have thoroughly restructured the manuscript and appendix using a task-centric approach, ensuring clearer organization and logical flow of information. Second, we enhanced our analytical depth by providing comprehensive discussions and deeper insights for each protein design task. We believe these revisions substantially strengthen the manuscript and welcome your assessment of these changes.\\n\\n**[Q1]** The evaluation of methods for protein-related tasks is challenging and, usually, the decision of dataset curation and splits, metrics used for evaluation, and other small details are very important so the results can be trusted by the community. The reviewer thinks that the manuscript could have more information on all these details, instead of presenting results and discussing the performance of the methods. Some of these details are presented in the Appendix while others are missing.\\n\\n**[A1]** We thank the reviewer for the thoughtful comment regarding evaluation transparency. In our revised manuscript, we have expanded the implementation details for each method to ensure reproducibility and trustworthiness of our results. We have ensured that all critical details regarding dataset curation, data splits, evaluation metrics, and implementation specifications are clearly presented. These details are now comprehensively documented in Appendix Section B, highlighted in blue.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2]** The paper does a poor job of explaining almost anything in a way that a non-domain expert could understand. The worst example of this is the challenges in the benchmark. They are extremely superficially described, with a reference to the Appendix presumably doing all the work of explaining what these challenges really are, how they are evaluated, why they matter, why they were chosen, etc. I think more of that belongs in the paper. I\\u2019d rather see a paper arguing for and introducing a benchmark do MUCH more of that motivation and explanation than a slog through the performance on the benchmark of a bunch of models. After all, why do we care how these models perform on problem x before we know what problem x is and why it matters? That said, perhaps domain experts already know these challenges so well that the current level of text is sufficient? I am not in a position to judge, but I think the paper would likely be higher-impact and better if it was more readable by non-insiders. I was hoping to learn a LOT about this area by reading this manuscript, and instead, sadly, I learned basically nothing (except there is a new benchmark). 
The reason I am still voting for publication despite that is I imagine this paper is quite valuable for insiders, but great papers do better at offering something to non-insiders.\\n\\n**[A2]** We thank the reviewer for the valuable feedback regarding the paper's structure and content. In our revised version, we provide a manuscript structure where the main manuscript presents key evaluation results, enabling readers to quickly grasp the overall landscape of tasks and performance of protein foundation models. For researchers interested in detailed task motivations, thoughts on metric selections, dataset considerations, implementation information, and result analysis, we have placed the detailed information in the appendix.\\n1. Specifically, we have reorganized the appendix (Section B) using a task-centric approach. For each task, we now provide comprehensive details including:\\n - Task definitions\\n - Metric justifications and descriptions\\n - Dataset specifications\\n - Supplementary results \\n - Potential discussions to provide more insights\\n2. To facilitate navigation, we added direct links in each task section that connect to their corresponding detailed explanations in the appendix.\\n3. The reorganized information is now highlighted in blue for improved visibility.\\n\\nAdditionally, we will also release a task-centered arxiv paper. This format allows researchers to access task-specific information efficiently.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q3]** Could you give some additional metric about tasks? While ProteinBench is a strong foundation, additional tasks and metrics (especially regarding dynamics and multi-modal integrations) could improve its scope, making it more universally applicable in protein science.\\n\\n**[A3]** We thank the reviewer for highlighting the importance of task breadth and evaluation metrics. In curating the benchmark, we thoroughly reviewed the current literature in each domain to ensure comprehensive coverage of tasks and metrics using publicly accessible datasets. **We collected the most representative datasets and tasks to the best of our knowledge.** However, in emerging areas like protein conformational dynamics, such datasets remain limited. That said, we anticipate that future developments in this area will provide more datasets, allowing us to evaluate models with additional tasks and perspectives. We remain committed to the continuous development and maintenance of ProteinBench, incorporating new integrations, updates, and revisions of tasks and metrics.\\n**As the whole field keeps evolving. We hope proteinBench is an evolving benchmark, and we will include more tasks and metrics in the future.**\"}", "{\"title\": \"Rebuttal Feedback 3\", \"comment\": \"Thank you for the attempt to address my comments on the paper structure.\\n\\n1. I tend to disagree with the sentence by the authors that readers \\\"with extensive experience in protein-related tasks would prefer a greater emphasis on the comparative performance analysis of standardized benchmarks\\\". As an interdisciplinary field with readers comprising computer scientists, computational biologists, and biologists it is important to have a benchmark that is trusted by biologists while guiding model development by computer scientists and computational biologists. \\n\\n2. The authors created the benchmark comprising both protein design and protein conformation prediction tasks. 
I understand that having more tasks is an effort by the authors to have a unified benchmark. But it also made me think that if the focus was only on protein design tasks the delivery of the contents of the manuscript would probably be stronger (addressing my comments and comments from reviewer bamz without the need for an extended version on Arxiv).\\n\\n3. Regarding the sampling temperature, thanks for adding a comment on this point. I think this is critical because methods evaluated in the benchmark might think it is unfair to use a temperature value that is optimized for a specific method. In the future, it would be better to use the optimal values from each reference paper or evaluate these models for different temperature values.\\n\\nI will increase my score to 6 however I will keep my concerns regarding the presentation of this manuscript as I think that a robust presentation of this benchmark can have a big impact on the protein research community, especially in the protein design field.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2]** Some metrics used for the benchmark, even though used before by previous references, might need more discussion as they can be misleading for checking the quality of designs, e.g. Antibody Design: Antibody metrics are usually very challenging to trust. As a benchmark, some of the metrics such as the scRMSD from structure prediction networks like IgFold can be misleading, as we are more interested in antigen-antibody complex structures. I understand current structure prediction networks accuracy for antibodies is limited, but it would be interesting to discuss and choose reliable metrics, even at a reduced number. As the evaluation metrics evolved, they could be added to the benchmark.\\n\\n**[A2]** Thanks for your suggestions. \\n\\nThere are some misunderstandings here. We totally agree with the point that what should be focused on is the complex structure instead of the isolated antibody structure, and this is why we obtain the reference structure used for scRMSD calculation in a two-stage way. \\n\\nIn the first stage, IgFold is utilized to get an initial antibody CDR structure (the predicted isolated antibody structure). The initial CDR structure then undergoes a structure optimization for lower energy with the condition of the antigen-antibody complex (the simulated antigen-bound antibody structure). Thus, the first stage is antigen-independent while the second stage is antigen-dependent. \\n\\nWe have also tried to use AF2-multimer to obtain the reference structure but finally gave up because of the huge time consumption in the MSA building for antibody sequences. \\n\\nThe two-stage strategy is inspired by the slight interface structural changes in antigen-antibody binding [1] for lower energy. RMSD on reference antibodies also supports our strategy, as it is reduced from 1.9 Å to 1.7 Å after the second stage. We will also keep updating our benchmarks using more reliable methods; for example, we will switch the calculation of scRMSD to AF3 once it is completely open access.\", \"reference\": \"[1] Guest, Johnathan D., Thom Vreven, Jing Zhou, Iain Moal, Jeliazko R. Jeliazkov, Jeffrey J. Gray, Zhiping Weng, and Brian G. Pierce. \\\"An expanded benchmark for antibody-antigen docking and affinity prediction reveals insights into antibody recognition determinants.\\\" Structure 29, no. 
6 (2021): 606-621.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We greatly appreciate your constructive feedback regarding the manuscript's organization, lack of clarity for non-experts, and novelty of the benchmark. In response, we have implemented two major improvements: First, we have thoroughly restructured the manuscript and appendix using a task-centric approach, ensuring clearer organization and logical flow of information. Second, we enhanced our analytical depth by providing comprehensive discussions and deeper insights for each protein design task. We believe these revisions substantially strengthen the manuscript and welcome your assessment of these changes.\\n\\n**[Q1]** The field benefits from benchmarks, and we should reward people who take the time to make them. I thus support publication because from what I can tell, this is a new, needed, helpful benchmark. However, I would not know if other similar benchmarks already exist. It is also helpful to have a set of leading models tested on such benchmarks. Assuming the set of models is a good one and the evals are well done, that is another contribution worthy of sharing with the community via a publication. That said, I do not know how novel such a comparison is.\\n\\n**[A1]** We thank the reviewer for raising this important question. In the revised manuscript, we outline the rationale behind our benchmark and provide an overview of existing benchmarks in Appendix A. Existing benchmarks for protein design and conformation prediction are summarized in Table 11. Our analysis indicates that current benchmarks primarily focus on specific tasks, underscoring the urgent need for a comprehensive benchmark that addresses a wider range of protein tasks. In this study, we present **the first extensive benchmark for protein foundation models, covering a broad spectrum of protein tasks**.\"}", "{\"title\": \"Rebuttal Feedback 2\", \"comment\": \"Thanks for addressing my comments and discussing the paper's content and organization.\\n\\nRegarding the organization, my specific suggestion would be to not focus too much on describing individual results, but also add important information currently in the appendix. For example, trying to answer questions from readers like: (i) Which datasets and splits should I use if I want to add my algorithm to the benchmark or compare with the results for other methods?; (ii) Which metrics and how should I use them for these tasks?; (iii) Training Strategy influences Robustness; etc.\\n\\nAs these comments can be biased by a personal preference in the organization/presentation of the manuscript and I recognize the efforts by the authors to improve the completeness of Appendix B, **I am changing my score to 5.5 (in the acceptance threshold)**. In summary, I recognize the strengths of this paper and its importance for the community, even though I think the presentation could be improved as a manuscript.\", \"minor_comments\": \"1. There are still a few typos and problems with symbols to be corrected after the additions during the Discussion process.\\n2. The sampling temperature was fixed at 0.1 for all inverse folding methods, but the optimal value can vary for different methods.\"}", "{\"title\": \"Rebuttal Feedback\", \"comment\": \"The reviewer thanks the authors for addressing part of my concerns and comments.\\n\\nI have increased my score.\\n\\nI still do think that the most important content of the manuscript is in Appendix B. 
I understand the page limitations of the manuscript and the possibility of uploading an Arxiv version with the re-structured content, however, I feel that as a part of the protein community and as a user of the benchmark for both comparing novel proposed algorithms and choosing the best algorithm for a specific application, it is very important that the most important details and reasoning for each task regarding evaluation/fairness/datasets are in the manuscript. The benchmark results will change when new algorithms are proposed, but, in the reviewer's opinion, the benchmark will be strong and pass the test of time if the metrics and dataset preprocessing remain the most stable over time, especially in the protein domain, where many works are currently being developed for benchmarks/datasets to address limitations in the field, e.g. https://www.biorxiv.org/content/10.1101/2024.07.17.603980v2 .\"}", "{\"metareview\": [\"This paper claims that so far, there have been numerous trained foundation models for protein prediction, but there haven't been standardized benchmarks to evaluate their performance on downstream tasks. This paper proposes a comprehensive list of over 9 benchmarks to assess notions of (Accuracy, Functionality, Specificity, Rationality, Quality, Novelty, Diversity), and also experimented with 20+ official models from the field to list their performances.\", \"## Strengths\", \"The authors seemed to have spent a great deal of effort in providing a unified benchmark set, and have diligently evaluated many of the most recent and SOTA protein models so far. Appendix B showed that they used the official codebases from different models, which must've taken a large amount of work trying to get every model working.\", \"From my (admittedly limited understanding), this may be one of the first large-scale efforts to setup an official benchmarking and leaderboard for protein tasks.\", \"## Weaknesses\", \"For a non-expert in this domain, the paper is definitely hard to learn and read from. It's understandable that as a benchmarking paper, there are naturally going to be lots of scores and results involved, but it's impossible for me to decipher the meanings behind the results on \\\"peptide bond breaking\\\" or \\\"ligand binding\\\"\", \"It appears that even domain-expert reviewers feel like this paper doesn't do a great job at providing the significance and insights of the results, other than blanket conclusions such as \\\"no model does the best at everything\\\".\"], \"additional_comments_on_reviewer_discussion\": \"Due to the subject being quite specific to biology and protein design, I must admit I'm also not an expert in this field, and therefore I needed to very carefully read the reviews of those who are in the field (Reviewers jXTi, d7D6), while I and Reviewers (bamz, hosV) can be considered general machine learning researchers.\", \"domain_expert_reviewers\": [\"Specific details on dataset curation + splitting are extremely important and required for the benchmarking to be trustworthy. 
These details need to be laid out explicitly.\", \"Certain metrics such as scRMSD in antibody design can be very misleading and not the true objective which should be optimized.\", \"Details such as temperature sampling may have been slightly unfair to use.\", \"This paper contains benchmarks about both \\\"protein design and protein conformation prediction tasks\\\", while it may have been better to scope it only for protein design tasks.\"], \"general_ml_reviewers\": [\"The paper doesn't do a good job of explaining why it's important to outsiders (after all, this is still ICLR and not e.g. Nature).\", \"Post-rebuttal, most of the explanations were only added to the Appendix rather than main body, which still doesn't resolve the outsider's readability issue.\"], \"common_issues\": [\"The paper is written as a \\\"laundry list\\\" of results without explaining the importance and significance of the metrics. Additional insights into why \\\"model X underperformed on benchmark Y\\\" should also be provided.\", \"Foundational model Training data should be standardized\", \"I + the authors would consider this a moot point, since most models train on different large-scale datasets in general, and indeed, they can be seen as part of the model recipe itself.\", \"Post-rebuttal, most of these issues haven't been truly resolved (judging by the blue text indicating updates and edits), and thus for now, the paper definitely can be accepted as a poster, but I'm not confident about moving to anything higher, e.g. spotlight or oral.\"]}", "{\"summary\": \"The paper introduces a unified protein benchmark to evaluate various methods for different tasks such as protein structure prediction, sequence design, structure design, sequence-structure design, and molecular dynamics. For each task, results are presented for various state-of-the-art methods and a discussion is followed on the strengths/weaknesses of these methods. The evaluation is performed using multiple metrics commonly used for these tasks, and, that sometimes incorporate different objectives in protein design. The benchmark is planned to be shared in an open-source manner for the community to compare methods for the presented tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tA benchmark with multiple metrics is proposed for protein-related tasks. The metrics incorporate different objectives for different problems encountered in protein design and molecular dynamics tasks.\\n2.\\tThe paper tackles an important and complex problem in the protein community which is the creation of benchmarks and datasets for training AI methods.\\n3.\\tThe benchmark incorporates recent challenges like evaluating sequence-structure compatibility in recent co-design methods.\", \"weaknesses\": \"1.\\tThe evaluation of methods for protein-related tasks is challenging and, usually, the decision of dataset curation and splits, metrics used for evaluation, and other small details are very important so the results can be trusted by the community. The reviewer thinks that the manuscript could have more information on all these details, instead of presenting results and discussing the performance of the methods. Some of these details are presented in the Appendix while others are missing.\\n2.\\tSome metrics used for the benchmark, even though used before by previous references, might need more discussion as they can be misleading for checking the quality of designs, e.g. 
scRMSD in antibody design.\", \"questions\": \"Comments:\\n\\n1.\\tMy main concern is related to how the paper is structured and the information it contains. From Page 3 the task results are presented with a small definition and comparison between methods. I see that the authors presented the creation of unified datasets as a current limitation and future work by the community, but the reviewer thinks that the manuscript should contain more information about the thought process about creating the benchmark, with the thought process about the metrics, the impact of methods using different datasets, tasks in which this is critical, etc. These are crucial for the community to adopt the benchmark and trust the results that are being presented and compared. In its current form, it is hard to understand these intrinsic details that are important for protein-related tasks.\\n2.\\tThe authors define the methods as \\u201cprotein foundation models\\u201d and add their own explanation of how this definition is being used. From the reviewer's understanding, usually, foundation models are defined for methods that can be applied, e.g. using their latent space, for many different tasks. Any additional reasoning for using this new definition of protein foundation models?\\n3.\\tMotif Scaffolding: For the Motif Scaffolding task, which evaluation metrics are being used? The reviewer is confused if RMSD metrics are being used or also designability metrics are being used.\\n4.\\tAntibody Design: Antibody metrics are usually very challenging to trust. As a benchmark, some of the metrics such as the scRMSD from structure prediction networks like IgFold can be misleading, as we are more interested in antigen-antibody complex structures. I understand current structure prediction networks accuracy for antibodies is limited, but it would be interesting to discuss and choose reliable metrics, even at a reduced number. As the evaluation metrics evolved, they could be added to the benchmark.\", \"minor_comments\": \"1.\\tLine 81: \\u201cwe aims\\u201d\\n2.\\tLines 193-195: \\u201cWe have noticed\\u2026\\u201d and Lines 214-215: \\u201cWe will soon\\u2026\\u201d: These sentences can be re-written to mention or list just current state-of-the-art methods that are currently not evaluated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q7]** Given that no model excels across all metrics, how should the models be ranked on the leaderboard, given the trade-offs across different metrics?\\n\\n**[A7]** We rank the models on the leaderboard using the mean score across all metrics. This approach provides a balanced overview of model performance, acknowledging the trade-offs between different metrics while allowing for a comprehensive comparison.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q4]** The authors mention that no model performs optimally across all metrics. It does not, however, fully explore the causes and potential trade-offs. Insights into why certain models perform better for certain tasks would provide better guidance on choosing task-appropriate models.\\n\\n**[A4]** We thank the reviewer for highlighting this concern. 
In the revised manuscript, we have expanded our analysis and insights based on the results, incorporating this information in Appendix B for each task under the section titled [Extended Explanations and Discussion on Model Performance].\\n\\nFor instance, we discuss the trade-offs between quality and diversity in the backbone design task.\", \"a_notable_observation_across_various_backbone_design_methods_is_the_inverse_relationship_between_structural_quality_and_diversity\": \"as methods produce structures with less quality, the diversity and novelty of the generated backbones tend to increase. We emphasize that structural quality should be considered the primary metric, as diversity and novelty are meaningful only when the generated structures maintain sufficient quality. Without adequate structural quality, high diversity or novelty scores may merely indicate the generation of unrealistic or physically implausible conformations.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q2]** I am not very familiar with AI for Protein. Could you provide with the reason why you separate whole protein tasks to these 8 parts?\\n\\n**[A2]** Many thanks for the question.\\n\\n**1. Why do we focus on protein design and conformation prediction?**\\n\\nProtein three-dimensional structure prediction has become a pivotal area of research, leading to established benchmarks such as CASP and CAMEO, along with significant methodological advancements, including the AlphaFold series, RosettaFold, ESMFold, OmegaFold, and others. While structure prediction addresses the challenge of determining a protein's structure from known sequences, protein design represents the inverse problem: predicting sequences that will fold into specific structures or fulfill designated functions.\\n\\nDespite the increasing interest in protein design, the field currently lacks a comprehensive benchmark, which has limited community progress. Existing benchmarks, as documented in Appendix Table 1, primarily focus on specialized tasks. A similar gap exists in the realm of conformational dynamics research. To address these critical issues, we present the first comprehensive benchmark that emphasizes two fundamental tasks: protein design and conformation prediction.\\n\\n**2. Why separate protein design into five parts?**\", \"the_scientific_scope_allows_us_to_naturally_divide_protein_design_into_five_key_categories_following_the_sequence_structure_function_hierarchy\": \"- Sequence design: Optimizing amino acid sequences for stable folding.\\n- Backbone design: Engineering the overall architecture of the protein.\\n- Sequence-structure co-design: A challenging task to simultaneously generate sequence and structure\\n- Motif scaffolding (Function design): Incorporating functional motifs into stable scaffolds\\n- Antibody design (Function design): Specialized design of antibody structure and sequences for antigen binding. An important application in therapeutic antibody development.\\n\\n**3. Why separate protein conformation prediction into three parts?**\\n\\nAgain, the scientific scope allows us to naturally divide conformation prediction into three parts based on their biological reality and complexity:\\n\\n**- Single conformation prediction:** Proteins existing in different conformations. 
Predicting the single dominant state means identifying the lowest energy conformation.\\n\\n**- Multiple conformation prediction:** A more complicated task to predict discrete conformational states.\\n\\n**- Conformational Distribution Prediction:** This task is more challenging, focusing on predicting the probability distribution of conformations.\\n\\n**To benefit non-experts, we have added the description of the rationale in our revised manuscript in Appendix Section A1, as follows:**\\n\\n'The field of protein three-dimensional structure prediction has witnessed remarkable progress, exemplified by established benchmarks like CASP and CAMEO, and breakthrough methodologies including AlphaFold series, RosettaFold, ESMFold, and OmegaFold. While structure prediction focuses on determining protein structures from known sequences, protein design addresses the inverse challenge: creating sequences that will fold into desired structures or achieve specific functions. Despite growing interest in protein design, the field has been hampered by the absence of a comprehensive benchmark, with existing evaluations primarily targeting specialized tasks, as documented in Appendix Table 1. A similar limitation exists in conformational dynamics research. Our work addresses these gaps by introducing the first comprehensive benchmark focusing on protein design and conformation prediction.\\n\\nIn our benchmark, protein design is categorized into five distinct areas, following the natural sequence-structure-function hierarchy. This begins with sequence design, focusing on optimizing amino acid sequences for stable folding, and progresses to backbone design, which involves engineering the overall protein architecture. The more complex sequence-structure co-design task requires simultaneous optimization of both sequence and structure. At the functional level, motif scaffolding involves incorporating functional motifs into stable scaffolds, while antibody design represents a specialized application focusing on engineering antibody structures and sequences for antigen binding, particularly crucial for therapeutic development.\\n\\nThe conformation prediction component is similarly structured into three distinct categories, reflecting increasing levels of complexity in protein dynamics. Single conformation prediction focuses on identifying the lowest energy state among possible conformations. Multiple conformation prediction addresses the more complicated challenge of predicting discrete conformational states. The most sophisticated category, conformational distribution prediction, tackles the complex task of predicting probability distributions of conformations, essential for understanding proteins with dynamic structural ensembles.'\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q4]** The authors define the methods as \\u201cprotein foundation models\\u201d and add their own explanation of how this definition is being used. From the reviewer's understanding, usually, foundation models are defined for methods that can be applied, e.g. using their latent space, for many different tasks. Any additional reasoning for using this new definition of protein foundation models?\\n\\n**[A4]** Foundation models are traditionally defined as models capable of performing multiple diverse tasks. In our study, we offer a broad definition of \\\"protein foundation models,\\\" and our reasoning is grounded in two key observations from the protein science domain:\\n\\n1. 
protein-related tasks are inherently diverse, flexible, and complex. This is evident in cases like inverse folding, where a single task can serve multiple distinct applications, each with different metrics priorities. Similarly, in protein sequence-structure co-design, methods developed for this task demonstrate remarkable versatility. They can generate sequences, predict structures, or simultaneously accomplish both objectives. **This intrinsic task flexibility means that even methods initially designed for specific applications often demonstrate broader utility across multiple tasks.**\\n\\n2. Our benchmark includes established foundation models like AlphaFold, which has proven capabilities in both protein structure prediction and conformation prediction. While many early methods were indeed specialized and achieved exceptional performance in specific tasks, their practical applications often extend beyond their original scope. **To create a comprehensive and practical benchmark, we have chosen to include all relevant methods regardless of their original design intent.** This inclusive approach better reflects the current state and practical utility of protein modeling methods.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q6]** Minor Comments:\\n1. Line 81: \\u201cwe aims\\u201d\\n2. Lines 193-195: \\u201cWe have noticed\\u2026\\u201d and Lines 214-215: \\u201cWe will soon\\u2026\\u201d: These sentences can be re-written to mention or list just current state-of-the-art methods that are currently not evaluated.\\nWe thank the reviewer for bringing this to our attention. In the revised manuscript, we have corrected the typo errors in line81, and polished statements in lines193-195 and lines 214-215. \\n\\n**[A6]** In the revised manuscript, we have expanded our evaluation to include Proteous's performance across multiple protein lengths (100, 200, 300, and 500 residues). Our analysis reveals that Proteous demonstrates superior design quality for long-chain backbone design (500 residues), achieving an scTM score of 0.90 compared to RFdiffusion's 0.79. However, we observed a significant decline in structural diversity for Proteous when designing longer chains:\\n\\n- At 300 residues: Proteous diversity score 0.34 vs. RFdiffusion 0.65\\n- At 500 residues: Proteous diversity score 0.34 vs. RFdiffusion 0.89\\n\\n*Note: We are still doing the evaluation and expect to finish all the result in next few days.*\\n\\nCase analysis revealed that Proteous tends to generate structures limited to three categories, predominantly characterized by helical tandem repeats, confirming our diversity metric findings. Detailed discussion of these results is provided in Appendix B.1.2-PROTEIN BACKBONE DESIGN-[Extended Explanations and Discussion on Model Performance].\\n\\nWe have also incorporated performance evaluation results for CarbonNovo in the structure-sequence co-design section. Given the rapid pace of new publications in this field, maintaining completely up-to-date evaluations presents a challenge. Nevertheless, we are committed to continuous updates of our benchmark to support advancement in the field.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q3]** Minor: You say ProteinBench provides a holistic view. That is too strong. It is virtually impossible to provide a holistic view of such a complex topic. Please switch such language to \\u201cmore holistic\\u201d or similar. You say benchmarks are crucial or critical for progress. This is a strong claim. 
I think they are helpful, but progress can be made without them, and they can actually hurt progress too (see Why Greatness Cannot Be Planned), so I recommend a more nuanced statement.\\n\\n**[A3]** We appreciate the reviewer\\u2019s feedback on the term \\\"holistic benchmark.\\\" Our goal is to convey that we aim for a more comprehensive benchmark. As demonstrated in Table 11, our benchmark is the most extensive study in the field, addressing an urgent need by covering multiple tasks. In the discussion section of the manuscript, we acknowledge that the current version is limited by the evaluation of a restricted number of methods. We envision our benchmark as an evolving tool and are committed to its ongoing optimization in the future.\"}", "{\"title\": \"Looking forward to hearing your feedback!\", \"comment\": \"Thank you for taking the time to review our paper. We have polished our paper following your suggestions. We hope our responses have addressed your concerns raised so far. In case of any unresolved questions or further concerns, please let us know.\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q4]** Line 181: You say this finding has significant implications for the field, but it is drawn from 2 data points only! Please properly soften the claims. Line 192: \\u201cWe have noticed that\\u2026.\\u201d That\\u2019s a weird way to describe your own work/choices. You sound surprised you didn\\u2019t include more methods? I suggest finding clearer, less confusing language.\\n\\n**[A4]** We thank the reviewer for bringing this to our attention. We polish the statement in the revised manuscript by softening the claims into \\\"This finding suggests no single model currently excels across all inverse folding objectives. The choice of model should be carefully aligned with the intended applications.\\\"\"}", "{\"title\": \"Rebuttal by Authors 2\", \"comment\": \"Thanks for further discussion. To address the reviewer's concerns, we provide more facts to rationalize our detailed implementation of datasets and metrics and paper organization.\\n\\n1. We acknowledge the importance of providing discussion and reasoning of datasets and metrics, and **we have provided all the detailed information in Appendix B.** \\n\\n2. **We selected datasets and metrics that have been tested in the field of each task to ensure the quality of our benchmark.** Many of the datasets and metrics we used are standardized datasets and standardized metrics have been widely used in previous studies. For example, CASP and CAMEO datasets are well-recognized datasets widely used in the protein structure prediction field. The released-date-based data split is well-accepted in the field to avoid data leakage. Self-consistency TMscore/RMSD are widely used for protein design. Ensemble TMscore/RMSD for accuracy in the multiple-sate conformation or Pairwise RMSD/RMSF. We carefully followed the standardized processing procedures in our implementation. All the details are provided, and reference papers are cited in the appendix. \\n**However, although standardized datasets and metrics have been introduced, the field still lacks a comprehensive multi-metric evaluation approach that assesses performance across different datasets for protein foundation models. Thus, we focused on the comparative study in the main manuscript.**\\n\\n3. 
Another fact is that **many of the protein foundation models evaluated in this study are generative models that do not rely on the usage of test datasets for evaluation.** These models are included in three protein design tasks (backbone design, sequence design, and co-design). \\n\\n4. For the antibody design task, new metrics are specifically introduced in detail in the appendix. Also, to avoid the risk of data leakage, we carefully trained and tested all the models using a unified dataset with implementation details carefully introduced. \\n\\n5. **If the reviewer has specific suggestions regarding the reorganization of the paper, please let us know, and we will consider adopting them.**\\n\\nWe hope these facts can address the reviewer's concerns about the datasets and metrics.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"**[Q6]** The paper highlights four key dimensions (quality, novelty, diversity, and robustness), but the result tables, including Table 1, emphasize only the first three dimensions. Why are robustness metrics omitted from the tables? Additionally, some dimensions in Table 1, such as those in antibody design, do not align with these four key categories.\\n\\n**[A6]** We thank the reviewer for the response. \\n1. We updated Table 1 in the revised manuscript by adding robustness. \\n2. Antibody design represents a specialized case requiring task-specific evaluation criteria. We maintain separate evaluation dimensions for antibody design due to its unique therapeutic applications and specialized requirements. These specialized metrics better capture the essential characteristics needed for therapeutic antibody development.\"}", "{\"title\": \"Further response to reviewer d7D6\", \"comment\": \"Thank you for your specific suggestions and for raising the rating to 5.5. We now fully understand your feedback regarding the reorganization of the paper. We agree that the task specifications (e.g., datasets, splits, metrics, etc.) in the appendix are essential for readers interested in these low-level design details, while those with extensive experience in protein-related tasks would prefer a greater emphasis on the comparative performance analysis of standardized benchmarks to guide model development. As you noted, this preference largely depends on the background and personal preferences of the readers.\\n\\nFollowing the reviewer\\u2019s suggestions, we attempted to move the task specifications into the main text. However, to adhere to the 10-page limit, we had to relocate 2 of the 8 tasks to the appendix, which we believe compromises the overall comprehensiveness of the benchmark. Therefore, we decided to retain the current organization of the paper. This decision should not be interpreted as disregarding your suggestion; it is solely a compromise due to the page limit. Our intent was to present a holistic view of our protein benchmark within the main paper. \\n\\nWith that said, to better serve the needs of different readers, we will prepare an extended version of the paper that exceeds the 10-page main text limit. This version will strictly follow your suggestion to integrate task definitions, technical details, and performance evaluations into the main text. We will release this extended version on arXiv and our GitHub repository as a supplement to the ICLR camera-ready version.\", \"regarding_your_minor_comments\": \"1. The typo errors have been corrected in this revision, and we will conduct a final thorough review for the camera-ready version.\\n\\n2. 
Regarding sampling temperature, we acknowledge the diversity-quality trade-off and chose 0.1 as a default value, while recognizing that optimal values may vary depending on the specific inverse folding method and the intended design goals (e.g., prioritizing diversity versus quality). We have added a short comment on this point in the revision.\\n\\nWe sincerely appreciate the reviewer\\u2019s detailed suggestions and feedback. We kindly request the reviewer to consider raising the score to a solid 6 (as 5.5 is not a valid rating in ICLR), which would provide us the opportunity to serve this benchmark to the broader protein research community.\"}", "{\"title\": \"Rebuttal Feedback 2\", \"comment\": \"Thanks for providing the rationale behind the organization of the manuscript.\\n\\nI understand and agree with the contributions mentioned by the authors that the manuscript provides an important benchmark for various protein-related tasks.\\n\\nI still keep my opinion that for protein-related benchmarks, especially for tasks with no standardized comparisons, having clear discussion and reasoning of datasets and metrics is more critical than providing only comparative studies. For example, if the datasets contain data leakage or if the metrics do not have any correlation to wet lab experiments, these comparative studies might mislead readers.\\n\\nI am open to discussing with other reviewers with diverging opinions regarding the paper structure. At this moment I decide to keep my score.\"}", "{\"summary\": \"The authors introduce a standardized benchmarking framework to evaluate the performance of protein foundation models. The framework includes 1) task taxonomy, categorizing key tasks in protein modeling (protein design, confirmation prediction); \\u00a02) multi-metric evaluation across quality, novelty, diversity, and robustness dimensions. 3) In-depth analyses based on different user objectives. The study finds that thorough evaluation metrics are crucial for adequately validating protein models and that no single model is optimal across all protein design tasks, which underlines the need to match models to specific applications. The authors intend the framework to serve as a collaboratively developed comprehensive and transparent benchmark for evaluating protein foundation models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The framework\\u2019s taxonomy of tasks within the domain of protein foundation models is insightful. It makes it easier to evaluate where each model excels or falls short.\", \"The multi-dimensional metrics aims to capture various aspects of model performance which is appropriate given the complexity of the protein modeling.\", \"The authors conduct a large number of experiments, demonstrating the breadth of the evaluation and ensuring the results' validity across various models and tasks.\", \"Leaderboard and open-source code can potentially facilitate more fair comparison and promote transparency.\"], \"weaknesses\": [\"Given that the authors have made an extensive amount of experimental study, some reorganization of the paper could strengthen the delivery of the contributions of the paper. Including clear and complete definitions, explanations, and relevance of the metrics would be helpful. The relevance and insights of the results could replace the explanations of the results. 
For example, Section 2.2.6 Antibody Design, instead of listing the outperforming models for evaluation, which is provided in Table 6, authors could discuss the relevance of these metrics along with the insights gained from the results similar to the one that they provided in the last paragraph.\", \"Lack of consistency in the training data across models is a limitation that undercuts the one of the main promises of the proposed framework which is standardization of the evaluation of protein foundation evaluation. This may not be an issue in the future as the framework is further developed and more mature.\", \"The authors mention that no model performs optimally across all metrics. It does not, however, fully explore the causes and potential trade-offs. Insights into why certain models perform better for certain tasks would provide better guidance on choosing task-appropriate models.\"], \"minor_issues\": [\"The description of Table 12 does not align with the data presented in the table\", \"Items (3) and (4) in the conclusion are the same.\", \"Figure 2 is too small.\"], \"questions\": [\"The paper highlights four key dimensions (quality, novelty, diversity, and robustness), but the result tables, including Table 1, emphasize only the first three dimensions. Why are robustness metrics omitted from the tables? Additionally, some dimensions in Table 1, such as those in antibody design, do not align with these four key categories.\", \"Given that no model excels across all metrics, how should the models be ranked on the leaderboard, given the trade-offs across different metrics?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
BkftcwIVmR
S4M: S4 for multivariate time series forecasting with Missing values
[ "Peng Jing", "Meiqi Yang", "Qiong Zhang", "Xiaoxiao Li" ]
Multivariate time series data play a pivotal role in a wide range of real-world applications, such as finance, healthcare, and meteorology, where accurate forecasting is critical for informed decision-making and proactive interventions. However, the presence of block missing data introduces significant challenges, often compromising the performance of predictive models. Traditional two-step approaches, which first impute missing values and then perform forecasting, are prone to error accumulation, particularly in complex multivariate settings characterized by high missing ratios and intricate dependency structures. In this work, we introduce S4M, an end-to-end time series forecasting framework that seamlessly integrates missing data handling into the Structured State Space Sequence (S4) model architecture. Unlike conventional methods that treat imputation as a separate preprocessing step, S4M leverages the latent space of S4 models to directly recognize and represent missing data patterns, thereby more effectively capturing the underlying temporal and multivariate dependencies. Our framework comprises two key components: the Adaptive Temporal Prototype Mapper (ATPM) and the Missing-Aware Dual Stream S4 (MDS-S4). The ATPM employs a prototype bank to derive robust and informative representations from historical data patterns, while the MDS-S4 processes these representations alongside missingness masks as dual input streams to enable accurate forecasting. Through extensive empirical evaluations on diverse real-world datasets, we demonstrate that S4M consistently achieves state-of-the-art performance. These results underscore the efficacy of our integrated approach in handling missing data, showcasing its robustness and superiority over traditional imputation-based methods. Our findings highlight the potential of S4M to advance reliable time series forecasting in practical applications, offering a promising direction for future research and deployment. Code is available at https://github.com/WINTERWEEL/S4M.git.
[ "S4 Models", "Multivariate Time Series Forecasting", "Missing Value", "Prototype Bank" ]
Accept (Poster)
https://openreview.net/pdf?id=BkftcwIVmR
https://openreview.net/forum?id=BkftcwIVmR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zipQEUeGbB", "yQHDIBZXhM", "trKYzQmcff", "tRbWZwkRmg", "roDxECSeKZ", "reqMmYk6a2", "qh9vNw3wyX", "pOJ8nSEsst", "fDfrhzwb2P", "c2wRoumRjM", "bcEimHMmTq", "bRwHF9fTgo", "b0Hv6JWeYh", "YpjBaOmVaM", "YlDukDsTB6", "WOZdxS77yO", "R8l0bHL67R", "QlEJboSUbw", "PdW1fUmVTx", "NI7iUxH6ag", "MJ4MSPOHeL", "MFOcNg5Cqp", "KOFdhAvqvb", "FI1O8vZMHD", "Elj4yMG1xv", "8bvRmBgNby", "8TefCTqULs", "55TflsgA59", "0bU5Au0o1V" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732557111312, 1732555853158, 1732701470583, 1730645810104, 1730383569660, 1732551305362, 1732551104049, 1732554673110, 1732557257512, 1732545552285, 1732555402930, 1732800049772, 1733111826052, 1730468631473, 1732804470664, 1734753875168, 1737524069087, 1732551656901, 1733111952236, 1732804494939, 1732553182364, 1732550906638, 1732799452329, 1731518539777, 1732549325820, 1733111654458, 1732545449424, 1732643942176, 1732804548584 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Reviewer_iDLq" ], [ "ICLR.cc/2025/Conference/Submission10665/Reviewer_EZj4" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Area_Chair_Dm2o" ], [ "ICLR.cc/2025/Conference/Submission10665/Reviewer_iyXX" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Area_Chair_Dm2o" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Area_Chair_Dm2o" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Reviewer_iyXX" ], [ "ICLR.cc/2025/Conference/Submission10665/Reviewer_UFK6" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Area_Chair_Dm2o" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ], [ "ICLR.cc/2025/Conference/Submission10665/Reviewer_iyXX" ], [ "ICLR.cc/2025/Conference/Submission10665/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response Part 3\", \"comment\": \"***W2.2. Justification on MethodsComparison.***\\n\\nThank you for your comments on the comparison between PatchTST and our method. There might be a misunderstanding of the comparison. PatchTST does not include straightforward forecasting with missing values. Masking is only used in the context of self-supervised learning. 
Therefore, the results with 0.06 missing values (~40% missing ratio) are not directly comparable. In our initial submission, we did not include iTransformer and PatchTST because these methods are not specifically designed for time series with missing values. Comparing them directly with our method might lead to an unfair evaluation. Instead, we focused on comparisons with SOTA methods tailored for time series prediction with missing values, such as BiTGraph, as well as S4-based methods. For transformer-based baselines, we selected two representative architectures: Transformer and Autoformer.\\n\\nTo address your request, we have now included PatchTST, iTransformer, and CARD in our experiments. The results on four benchmark datasets are presented in the table below, with additional results on a real-world dataset (https://openreview.net/forum?id=BkftcwIVmR&noteId=qh9vNw3wyX). Furthermore, we analyze the computational cost (https://openreview.net/forum?id=BkftcwIVmR&noteId=8TefCTqULs) and the performance across different horizon windows (https://openreview.net/forum?id=BkftcwIVmR&noteId=bcEimHMmTq).\\n\\nAmong the three additional methods, PatchTST exhibits strong performance in handling missing values, particularly on the Electricity dataset, and also performs well in scenarios without missing values. However, in most of the settings, S4M achieves consistently superior performance. Additionally, as the table on computational cost (https://openreview.net/forum?id=BkftcwIVmR&noteId=8TefCTqULs) shows, S4M is significantly more efficient than these three suggested methods.\\n\\n***Q1. Have you tested S4M when there are no missing values at all? I would be curious whether your prototype bank is also useful if no values are missing.***\\n\\nThank you for the question. By design, our method is suitable for time series with block-based missing data. The historical features stored in the prototype bank are particularly helpful when the missing ratio is high. If there is no missing data, as expected, our method will not show a significant advantage over the other methods but will still maintain a very competitive performance. For this experiment, we report the results using a horizon window of 96 and a lookback window of 96, with no missing values in the original dataset.\\n\\n| Dataset | Metric | BRITS | GRUD | Transformer | Autoformer | S4 | BiTGraph | S4M(Ours) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Electricity | MAE | 0.398 | 0.413 | 0.411 | 0.352 | 0.386 | 0.348 | 0.383 |\\n| | MSE | 0.318 | 0.332 | 0.321 | 0.242 | 0.301 | 0.254 | 0.295 |\\n| ETTh1 | MAE | 0.676 | 0.571 | 0.604 | 0.556 | 0.538 | 0.530 | 0.538 |\\n| | MSE | 0.867 | 0.636 | 0.677 | 0.588 | 0.560 | 0.571 | 0.560 |\\n| Weather | MAE | 0.373 | 0.370 | 0.383 | 0.306 | 0.363 | 0.504 | 0.332 |\\n| | MSE | 0.291 | 0.301 | 0.298 | 0.235 | 0.301 | 0.494 | 0.259 |\\n| Traffic | MAE | 0.428 | 0.446 | 0.405 | 0.454 | 0.425 | 0.504 | 0.414 |\\n| | MSE | 0.770 | 0.840 | 0.707 | 0.705 | 0.425 | 0.879 | 0.761 |\"}", "{\"title\": \"Response Part 3\", \"comment\": \"***Q2.1 How were the baseline models adapted for partially observed data.***\\n\\nThank you for your question. In our experiments, the missing values are imputed using the mean value, after which the Transformer and Autoformer models were applied. The same procedure is used for the additional experiments with iTransformer, CARD, and PatchTST.\\n\\n***Q2.2 Justification on deliberately omitted methods not designed for missing data. 
Besides, the experimental analysis should include SOTA baselines specifically designed for long-term forecasting tasks, such as iTransformer [1], CARD [2], and Crossformer [3].***\\n\\nThank you for raising this issue. In our initial submission, we did not include iTransformer and CARD because these methods are not specifically designed for time series with missing values. Comparing them directly with our method might lead to an unfair evaluation. Instead, we focused on comparisons with SOTA methods tailored for time series prediction with missing values, such as BiTGraph, as well as S4-based methods. For transformer-based baselines, we selected two classic architectures: Transformer and Autoformer.\\n\\nTo provide a more comprehensive evaluation and address your request, we have now included PatchTST, iTransformer, and CARD in our experiments. The results on four benchmark datasets are presented in the table below, with [additional results](https://openreview.net/forum?id=BkftcwIVmR&noteId=qh9vNw3wyX) on a real-world dataset. Furthermore, we analyze the [computational cost](https://openreview.net/forum?id=BkftcwIVmR&noteId=Elj4yMG1xv) and [the performance](https://openreview.net/forum?id=BkftcwIVmR&noteId=bcEimHMmTq) across different horizon windows.\\n\\nAmong the three additional methods, PatchTST exhibits strong performance in handling missing values, particularly on the Electricity dataset, and also performs well in scenarios without missing values. However, in other settings, its results are less competitive compared to S4M. Additionally, as shown in the computational cost comparison linked above, PatchTST incurs significantly higher training and inference times than S4M.\\n\\n| Dataset | Metric | S4M | PatchTST | iTransformer | CARD |\\n| --- | --- | --- | --- | --- | --- |\\n| Electricity | MAE | 0.418 | 0.420 | 0.452 | 0.440 |\\n| | MSE | 0.359 | 0.344 | 0.389 | 0.366 |\\n| ETTh1 | MSE | 0.627 | 0.583 | 0.668 | 0.780 |\\n| | MAE | 0.742 | 0.650 | 0.786 | 1.041 |\\n| Weather | MSE | 0.370 | 0.399 | 0.510 | 0.422 |\\n| | MAE | 0.294 | 0.327 | 0.459 | 0.376 |\\n| Traffic | MSE | 0.499 | 0.530 | 0.519 | 0.554 |\\n| | MAE | 0.943 | 0.927 | 0.897 | 0.965 |\"}", "{\"comment\": \"Dear reviewer, thank you for acknowledging that our responses have addressed your concerns in part. We are delighted to engage further and address your remaining questions.\\n\\n**Clarifications on Time-Series Setting**: In our revision, we have made explicit the distinction between irregularly sampled time-series and our focus on the missing data setting. This distinction is now emphasized both in the abstract and the introduction, ensuring clarity for all readers. Additionally, to guide understanding, we have highlighted Figure 4, which comprehensively illustrates this setting. Thanks for your suggestions. We hope this addresses your concern.\\n\\n**Focus on S4-based Models**:\\nWe respectfully request the reviewer to consider that the primary focus of our study is on S4-based models, as stated in our introduction. S4 was selected due to its efficiency, which has been well-documented in the literature. We further demonstrated this efficiency in response to your W2.2 (Part 3), where we compared S4M against the suggested methods and many others. 
For your convenience, we have included the table from our response, which underscores that **S4M is significantly more efficient than the three suggested methods**.\\n\\n| Method | S4(Mean) | S4(Ffill) | S4(Decay) | BRITS | GRUD | Transformer | Autoformer | BiTGraph | iTransformer | PatchTST | CRUD | Grafiti | S4M(Our) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Flops(M) | 12463.39 | 12463.39 | 12618.52 | 9091.16 | 3813.82 | 17627.87 | 18734.88 | 3185.64 | 565.36 | 392299.02 | 219.57 | 265118.32 | 139191.88 |\\n| Training Time(s) | 0.11282 | 0.11498 | 0.08325 | 0.46920 | 0.19958 | 0.09035 | 0.09613 | 0.24546 | 0.06744 | 0.46017 | 49.40020 | OOM | 0.219381 |\\n| Inference Time(s) | 0.07416 | 0.07983 | 0.06152 | 0.21126 | 0.08756 | 0.06088 | 0.07662 | 0.08122 | 0.04009 | 0.16896 | 4.76765 | OOM | 0.099314 |\\n\\nWe hope the reviewer agrees with the trade-offs inherent in various backbone architectures (no free lunch) and recognizes improving S4 is central to this study, reflected in the title and justified in our introduction.\\n\\n\\n**Response to Interpretation of Reviewer Comments**: We appreciate your clarification regarding your request. Initially, we interpreted your comments as two distinct questions, leading us to provide detailed responses in W2.1 (Part 2) and W2.2 (Part 3) separately. \\n\\n**Experimental Comparison with Your Suggested Methods**: We appreciate your thoughtful feedback. We conducted experiments incorporating your suggested methods with the mean interpolation approach (specified in W2.2, Part 3). Furthermore, we conducted experiments combining PatchTST with linear interpolation. The results are shown below. \\nIn most settings, S4M consistently demonstrated superior performance. Both linear and mean interpolation with PatchTST do not work well. Linear interpolation outperformed mean interpolation only on the ETTh1 dataset. For datasets exhibiting clear seasonality like electricity and traffic, linear interpolation may perform worse than mean interpolation.\\n\\n| Dataset | Metric | S4M (Ours) | PatchTST (Linear Interpolation) | PatchTST (Mean Interpolation) |\\n| --- | --- | --- | --- | --- |\\n| Electricity | MAE | 0.418 | 0.501 | 0.420 |\\n| | MSE | 0.359 | 0.466 | 0.344 |\\n| ETTh1 | MAE | 0.627 | 0.587 | 0.583 |\\n| | MSE | 0.742 | 0.649 | 0.650 |\\n| Weather | MAE | 0.37 | 0.348 | 0.399 |\\n| | MSE | 0.294 | 0.309 | 0.327 |\\n| Traffic | MAE | 0.499 | 0.655 | 0.530 |\\n| | MSE | 0.943 | 0.929 | 0.927 |\"}", "{\"summary\": \"The paper introduces S4M, an extension of the S4 framework to multivariate time series forecasting with missing values. It combines a prototype-based representation learning module (ATPM) with a dual-stream S4 architecture (MDS-S4) to handle missing values directly rather than through preprocessing. The method is evaluated on four datasets under various missing data scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper addresses a practical and relevant problem. Real-world data often contains missing values (or mis-recorded values) which makes developing principled methods to handle them in forecasting models a worthwhile endeavor. The paper is well written and relatively easy to follow. The empirical evaluation does employ a set of strong baselines for comparison. 
The use of a \\\"prototype bank\\\" for backfilling is new to this reviewer and represents an interesting practical way to tackle missing values.\", \"weaknesses\": \"The paper introduces a few new key components, notably the prototype bank and the MDS-S4 architecture which, while well described, are only subjected to limited analysis and theoretical justification. For instance, complexity analysis is missing and ablation studies are partial. Some architectural choices seem arbitrary. The datasets chosen in the empirical evaluation (Traffic, Electricity, ETTh1, Weather) are all fairly simple datasets. Given that there are many more publicly available time-series evaluation datasets, I would like to see a more comprehensive evaluation.\", \"questions\": \"What is the computational complexity of ATPM vs traditional approaches?\\nCan you provide theoretical justification for the prototype bank design?\\nHow sensitive is performance to prototype bank initialization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explicitly models missing patterns within the Structured State Space Sequence (S4) architecture, developing two key modules: the Adaptive Temporal Prototype Mapper and the Missing-Aware Dual Stream. The experiments demonstrate that S4M achieves state-of-the-art performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper explores the integration of State Space Models with missing patterns for long-term time series forecasting.\\n\\n2. The proposed adaptive temporal prototype mapper and missing aware dual stream S4 modules effectively capture rich historical patterns and learn robust representations.\\n\\n3. Experimental results illustrate that the proposed model achieves state-of-the-art performance in handling missing data.\", \"weaknesses\": \"1. The authors should further explain the motivation for introducing $\\\\mathbf{\\\\bar{E}}E_m(m_t;\\\\theta_m)$ to the SSM model in Equation 4.\\n\\n2. This paper lacks a discussion between the S4M and existing methods designed for handling missing values, which diminishes the significance of the proposed model.\\n\\n3. The settings of hyperparameters $K_1$, $K_2$, $\\\\tau_1$, and $\\\\tau_2$ are emprical. The authors should provide guidance on how to set these hyperparameters across different datasets with varying characteristics.\", \"questions\": \"1. The proposed model struggles when the input length is shorter than the output length. In addition, it is general to fix the lookback length and adapt to various prediction lengths [1]. Thus, it would strengthen this paper to add experimental results with various horizons.\\n\\n2. Baselines such as Transformer and Autoformer are designed for complete data. The authors should clarify how these models can be adapted for partially observed data. Besides, the experimental analysis should include state-of-the-art baselines specifically designed for long-term forecasting tasks, such as iTransformer [1], CARD [2], and Crossformer [3].\\n\\n[1] Liu Y, Hu T, Zhang H, et al. itransformer: Inverted transformers are effective for time series forecasting[C]. The eleventh international conference on learning representations. 2023.\\n\\n[2] Wang X, Zhou T, Wen Q, et al. CARD: Channel aligned robust blend transformer for time series forecasting[C]. The Twelfth International Conference on Learning Representations. 
2024.\\n\\n[3] Zhang Y, Yan J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting[C]. The eleventh international conference on learning representations. 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for the thoughtful and detailed review. Also, we appreciate that the reviewer acknowledges our prototype bank presents an interesting practical way to tackle missing values. We address the raised concerns below.\\n\\n ***W1.1 Justification on the comparison baseline selections.***\\n\\nWe thank the reviewer for the question. The reasons that we did not directly compare S4M with irregularly sampled time-series methods are multifold: First, here, we focus on **block-based missing** patterns where the observed values occur at consecutive time points (see our Fig. 4). The irregularly sampled time series problems typically don\\u2019t directly consider the properties of such missing patterns. Second, **efficiency is an essential consideration** for our work, as stated in **lines 51-52** in our original submission. Compared with S4, the suggested ODE and graph-based irregularly sampled methods are computationally costly. For instance, experiments with Grafiti on the Traffic and Electricity datasets often result in out-of-memory (OOM) errors in most settings. Similarly, CRU is extremely slow due to its iterative computations over variable dimensions. Detailed comparisons and computational costs for these methods are provided in the table below \\n\\n| Method | S4(Mean) | S4(Ffill) | S4(Decay) | BRITS | GRUD | Transformer | Autoformer | BiTGraph | iTransformer | PatchTST | CRUD | Grafiti | S4M(Our) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Flops(M) | 12463.39 | 12463.39 | 12618.52 | 9091.16 | 3813.82 | 17627.87 | 18734.88 | 3185.64 | 565.36 | 392299.02 | 219.57 | 265118.32 | 139191.88 |\\n| Training Time(s) | 0.11282 | 0.11498 | 0.08325 | 0.46920 | 0.19958 | 0.09035 | 0.09613 | 0.24546 | 0.06744 | 0.46017 | 49.40020 | OOM | 0.219381 |\\n| Inference Time(s) | 0.07416 | 0.07983 | 0.06152 | 0.21126 | 0.08756 | 0.06088 | 0.07662 | 0.08122 | 0.04009 | 0.16896 | 4.76765 | OOM | 0.099314 |\\n\\nIn response to your comment, we have two additional SOTA methods for irregular time-series forecasting, Grafiti and CRUD, for comparison. However, due to the high computational costs and significant memory requirements of methods designed for irregularly sampled time-series data, our experiments were restricted to the smaller-scale ETTh1 dataset, as detailed below. The results show that **ODE-based models perform suboptimally** in this context. \\n\\n| Lookback Window ($L$) | Metric | S4M(Ours) | CRU | Grafiti |\\n| --- | --- | --- | --- | --- |\\n| 96 | MAE | 0.630 | 0.774 | 0.821 |\\n| | MSE | 0.779 | 1.093 | 1.162 |\\n| 192 | MAE | 0.604 | 0.802 | 0.811 |\\n| | MSE | 0.670 | 1.150 | 1.161 |\\n| 384 | MAE | 0.6281 | 0.802 | 0.805 |\\n| | MSE | 0.748 | 1.184 | 1.162 |\\n| 768 | MAE | 0.619 | 0.774 | 0.821 |\\n| | MSE | 0.693 | 1.093 | 1.060 |\"}", "{\"title\": \"Response Part 3\", \"comment\": \"***4. More comprehensive evaluation.***\\n\\nThe datasets for empirical evaluation are widely used in time series forecasting literature [2,3]. 
We selected these four datasets because they exhibit significant variation in size, number of variables, and the presence or absence of seasonality. We consider cases with block missing patterns in regularly sampled time series, where the observed values occur at consecutive time points ((see our Fig. 4). This structure enables the design of an informative representation $o_t$, which is crucial for capturing temporal dependencies effectively.\\n\\nFollowing your comment, we included the real-world USHCN climate dataset [1] in our analysis. We set the lookback window of size 96 and a horizon window of size 96. The results further confirm that **S4M outperforms** other methods on the real-world dataset.\\n \\n\\n| r | Metric | S4(Mean) | S4(Ffill) | S4(Decay) | BRITS | GRUD | Transformer | Autoformer | BiTGraph | iTransformer | PatchTST | CARD | S4M(Ours) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| 0.12 | MAE | 0.477 | 0.489 | 0.466 | 0.644 | 0.477 | 0.461 | 0.511 | 0.474 | 0.478 | 0.494 | 0.451 | 0.447 |\\n| | MSE | 0.455 | 0.414 | 0.447 | 0.668 | 0.452 | 0.406 | 0.499 | 0.439 | 0.460 | 0.457 | 0.411 | 0.417 |\\n| 0.24 | MAE | 0.507 | 0.522 | 0.502 | 0.644 | 0.499 | 0.475 | 0.534 | 0.495 | 0.504 | 0.528 | 0.477 | 0.473 |\\n| | MSE | 0.503 | 0.517 | 0.503 | 0.689 | 0.484 | 0.403 | 0.530 | 0.469 | 0.502 | 0.502 | 0.444 | 0.433 |\", \"reference\": \"[1] Long-term daily climate records from stations across the contiguous united states, 2015.\\n\\n[2] Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In Advances in Neural Information Processing Systems, 2021\\n\\n[3] CARD: Channel aligned robust blend transformer for time series forecasting[C]. The Twelfth International Conference on Learning Representations. 2024.\\n\\n***5. How sensitive is performance to prototype bank initialization?***\\n \\nIn our experiment, we found the performance is **insensitive** to the initial cluster configuration, as the clusters are updated continuously throughout the training process. To provide evidence, we presented the experimental results on four datasets using different cluster numbers for initialization, as shown below. In practice, we recommend using 3 to 5 clusters for initialization or determining the optimal number of clusters based on the within-cluster sum of squares.\\n| | num_cluster | 1 | 2 | 3 | 4 | 8 | 12 | 16 |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Electricity | MAE | 0.415 | 0.415 | 0.415 | 0.415 | 0.415 | 0.415 | 0.415 |\\n| | MSE | 0.356 | 0.356 | 0.356 | 0.356 | 0.357 | 0.358 | 0.356 |\\n| ETTh1 | MAE | 0.647 | 0.647 | 0.648 | 0.648 | 0.647 | 0.648 | 0.650 |\\n| | MSE | 0.768 | 0.767 | 0.770 | 0.770 | 0.767 | 0.767 | 0.773 |\\n| Weather | MAE | 0.386 | 0.390 | 0.387 | 0.385 | 0.388 | 0.388 | 0.385 |\\n| | MSE | 0.307 | 0.310 | 0.308 | 0.306 | 0.310 | 0.310 | 0.307 |\\n| Traffic | MAE | 0.510 | 0.505 | 0.504 | 0.515 | 0.509 | 0.513 | 0.509 |\\n| | MSE | 0.966 | 0.954 | 0.944 | 0.999 | 0.974 | 0.992 | 0.985 |\"}", "{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for the thoughtful and detailed review. Also, we appreciate that the reviewer acknowledges our proposed model achieves SOTA performance in handling missing data. We address the raised concerns below.\\n\\n***W1. 
The authors should further explain the motivation for introducing $\\\\overline{\\\\mathbf{E}} E_m(m_{t};\\\\theta_m)$ \\u00a0to the SSM model in Equation 4.***\\n\\nThank you for your question about our motivation for incorporating $\\\\overline{\\\\mathbf{E}} E_m(m_{t};\\\\theta_m)$ into the model. \\n\\nTo address the missing data problem in S4 models, we aim to (1) distinguish the missing time points, enabling the model to treat them differently from the observed data (e.g., by referring to data in the prototype bank), and (2) ensure that the core properties of the S4 model are preserved. To this end, we seek a term that can flag missing values while preserving the HiPPO structure of S4. We found that integrating additional masking terms $M$, inspired by literature [1], to serve as a simple yet effective indicator for the model to recognize missing values. However, since the elements of $M$ take binary values (0 or 1), they are not naturally on the same scale as the other terms in (4). To address this, we designed an encoder to transform the mask information to an appropriate scale. Incorporating this term still preserves the HiPPO structure of S4, thereby enriching the model with additional information while maintaining its core advantages.\\n\\n***W2. Highlighting the existing discussion between the S4M and existing methods designed for handling missing values***\\n\\nIn fact, we discussed the differences between our methods v.s. the other existing methods for handling missing values in both Introduction (see lines 62-73) and Appendix A.2. To recap, traditional approaches for handling missing values use a two-step process: imputing missing values first and then performing standard analysis. This can lead to errors and suboptimal results, especially in multivariate time series with complex missing patterns and high missing ratios. We also refer to methods that directly forecast with missing data. RNN-based methods, such as BRITS and GRUD, typically require long training times and exhibit inferior forecasting performance. Graph network-based models, like BiTGraph, are effective at navigating temporal dependencies and spatial structures but often suffer from high memory usage. ODE-based methods, such as Neural ODE, generally incur high computational costs.\\n\\nIn contrast, we propose S4M, which combines a prototype bank with a structured state space model (S4). Our approach focuses on recognizing and representing missing data patterns in the latent space, thereby enhancing model performance by better capturing underlying dependencies while maintaining the high performance of S4.\"}", "{\"title\": \"Response Part 4\", \"comment\": \"***Q2. I would also like to see Only Masking, i.e. having the mask m_t in (4) but no prototype bank, i.e. replacing o_t with X_t. Is your prototype-bank really needed for irregular time-series forecasting?***\\n\\nThank you for your valuable feedback. We have included an ablation study for the first module, ATPM. In this study, we compare S4M with and without ATPM, highlighting the improvements brought by ATPM, particularly on the Traffic and ETTh1 datasets. 
The following experiments were conducted under the same settings as those in the ablation studies presented in the paper.\\n\\n| Dataset | | Electricity | | ETTh1 | | Weather | | Traffic | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| $\\\\ell_L$ | Metrics | S4M (Ours) | S4M (w/o prototype) | S4M (Ours) | S4M (w/o prototype) | S4M (Ours) | S4M (w/o prototype) | S4M (Ours) | S4M (w/o prototype) |\\n| | | | | Variable Missing | | | | | |\\n| 96 | MAE | 0.369 | +0.011 | 0.571 | +0.044 | 0.336 | +0.020 | 0.442 | +0.024 |\\n| | MSE | 0.282 | +0.010 | 0.624 | +0.091 | 0.267 | +0.206 | 0.786 | +0.125 |\\n| 192 | MAE | 0.357 | +0.010 | 0.568 | +0.045 | 0.320 | +0.600 | 0.381 | +0.030 |\\n| | MSE | 0.261 | +0.009 | 0.598 | +0.090 | 0.261 | +0.002 | 0.685 | +0.092 |\\n| 384 | MAE | 0.359 | +0.009 | 0.584 | +0.029 | 0.334 | +0.006 | 0.383 | +0.026 |\\n| | MSE | 0.264 | +0.009 | 0.613 | +0.064 | 0.256 | +0.008 | 0.700 | +0.065 |\\n| 768 | MAE | 0.362 | +0.020 | 0.599 | +0.028 | 0.341 | +0.016 | 0.383 | +0.026 |\\n| | MSE | 0.269 | +0.002 | 0.649 | +0.058 | 0.266 | +0.011 | 0.697 | +0.074 |\\n| | | | | Timepoint Missing | | | | | |\\n| 96 | MAE | 0.372 | +0.025 | 0.571 | +0.049 | 0.313 | +0.021 | 0.428 | +0.045 |\\n| | MSE | 0.287 | +0.030 | 0.624 | +0.110 | 0.237 | +0.017 | 0.809 | +0.116 |\\n| 192 | MAE | 0.367 | +0.004 | 0.574 | +0.039 | 0.305 | +0.006 | 0.385 | +0.005 |\\n| | MSE | 0.274 | +0.004 | 0.593 | +0.110 | 0.225 | +0.001 | 0.687 | +0.023 |\\n| 384 | MAE | 0.370 | +0.014 | 0.571 | +0.057 | 0.306 | +0.012 | 0.385 | +0.013 |\\n| | MSE | 0.277 | +0.004 | 0.624 | +0.112 | 0.220 | +0.015 | 0.702 | +0.047 |\\n| 768 | MAE | 0.373 | +0.013 | 0.588 | +0.048 | 0.316 | +0.005 | 0.388 | +0.000 |\\n| | MSE | 0.282 | +0.016 | 0.647 | +0.079 | 0.232 | +0.004 | 0.699 | +0.024 |\\n\\n# *R*\"}", "{\"title\": \"Response Part 2\", \"comment\": \"***Q2. How does the dual stream processing impact the model's ability to capture temporal dependencies?***\\n\\nThank you for the question. The motivation behind the dual-stream processing is to take advantage of the strengths of S4\\u2014namely, its ability to capture long-term temporal dependencies and its computational efficiency\\u2014when addressing block missing patterns in time series. The long-term dependency in S4 is achieved through the use of the HiPPO matrix $A$ as shown in (1). In our dual-stream processing, we build on this structure by incorporating $o_t$ (the representation from a shorter look-back window) instead of $u_t$ (the observation of only at the current time point) into the model. The term $o_t$ inherently captures additional temporal dependency information. \\n\\n***Q3. Can the authors confirm whether they implemented these baseline methods using official code or leveraged existing unified Python libraries.***\\n\\nThanks for checking the experiment details. For a fair comparison, we used the official implementation for all baselines.\"}", "{\"title\": \"Response Part 2\", \"comment\": \"***W3. Highlighting the existing discussion on hyperparameters K1, K2, tao1, tao2..***\\n\\n We agree on the importance of discussing hyperparameters. Therefore, in our original submission, we provided extensive sensitivity analysis on these hyperparameters in Appendix D. Specifically, we discussed $K_1$ in, $K_2$ in Appendix D.4.1. According to our results, a suggested $K_2$ is between 5 and 10, as this range is effective and relatively insensitive to performance. 
The recommended ranges for $\\\\tau_1$ and $\\\\tau_2$ are [0.3, 0.6] and [0.8, 1.0), respectively. Both parameters can be selected using the validation set. Most of the experiments are robust to the choice of $K_1=30$ or $K_1=50$. If one finds the dataset exists large amout of clusters over 100 (by running a preliminary experiment with a large value of $K_1$ and let the data adaptively tells the number of clusters), they can also adjust this number accordingly. \\n\\n*Q1. **Adding experimental results with various horizons.***\\n\\nThank you for your question. In response to your comment, we have included new results with a fixed lookback window of size 192 and various prediction lengths. The results show that our model **retains top performance** across most horizons for the Traffic, ETTh1, and Weather datasets. Specifically, for the Electricity dataset, our proposed model ranks among the top two performers in most cases.\\n\\n| | Horizon Window | Metric | S4(Mean) | S4(FFill) | S4(Decay) | BRITS | GRUD | Transformer | Autoformer | BiTGraph | iTransformer | PatchTST | S4M(Ours) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Electricity | 24 | MAE | 0.392 | 0.406 | 0.396 | 0.624 | 0.442 | 0.412 | 0.457 | 0.361 | 0.415 | 0.349 | 0.369 |\\n| | | MSE | 0.312 | 0.322 | 0.303 | 0.621 | 0.372 | 0.323 | 0.390 | 0.256 | 0.326 | 0.243 | 0.277 |\\n| | 48 | MAE | 0.389 | 0.400 | 0.397 | 0.658 | 0.393 | 0.424 | 0.437 | 0.370 | 0.421 | 0.357 | 0.377 |\\n| | | MSE | 0.310 | 0.311 | 0.310 | 0.669 | 0.452 | 0.344 | 0.357 | 0.272 | 0.336 | 0.254 | 0.290 |\\n| | 96 | MAE | 0.464 | 0.410 | 0.420 | 0.681 | 0.456 | 0.434 | 0.453 | 0.410 | 0.427 | 0.365 | 0.381 |\\n| | | MSE | 0.409 | 0.324 | 0.336 | 0.713 | 0.394 | 0.356 | 0.396 | 0.322 | 0.344 | 0.266 | 0.305 |\\n| | 192 | MAE | 0.424 | 0.462 | 0.465 | 0.776 | 0.508 | 0.451 | 0.443 | 0.444 | 0.430 | 0.375 | 0.405 |\\n| | | MSE | 0.363 | 0.414 | 0.411 | 1.047 | 0.472 | 0.378 | 0.366 | 0.365 | 0.348 | 0.278 | 0.331 |\\n| ETTh1 | 24 | MAE | 0.529 | 0.538 | 0.585 | 0.750 | 0.600 | 0.617 | 0.656 | 0.558 | 0.586 | 0.575 | 0.554 |\\n| | | MSE | 0.532 | 0.574 | 0.681 | 0.979 | 0.708 | 0.687 | 0.740 | 0.591 | 0.621 | 0.617 | 0.585 |\\n| | 48 | MAE | 0.577 | 0.553 | 0.605 | 0.756 | 0.646 | 0.623 | 0.699 | 0.639 | 0.630 | 0.574 | 0.573 |\\n| | | MSE | 0.630 | 0.588 | 0.701 | 1.040 | 0.807 | 0.680 | 0.844 | 0.831 | 0.591 | 0.623 | 0.633 |\\n| | 96 | MAE | 0.644 | 0.659 | 0.640 | 0.776 | 0.739 | 0.685 | 0.707 | 0.650 | 0.689 | 0.578 | 0.604 |\\n| | | MSE | 0.739 | 0.792 | 0.782 | 1.047 | 1.004 | 0.893 | 0.856 | 0.815 | 0.781 | 0.622 | 0.672 |\\n| | 192 | MSE | 0.691 | 0.672 | 0.692 | 0.770 | 0.750 | 0.743 | 0.715 | 0.691 | 0.661 | 0.598 | 0.655 |\\n| | | MAE | 0.864 | 0.833 | 0.894 | 1.040 | 1.011 | 0.975 | 0.874 | 0.935 | 0.754 | 0.662 | 0.801 |\\n| Weather | 24 | MAE | 0.360 | 0.312 | 0.310 | 0.494 | 0.394 | 0.449 | 1.020 | 0.308 | 0.511 | 0.314 | 0.304 |\\n| | | MSE | 0.288 | 0.224 | 0.222 | 0.438 | 0.314 | 0.376 | 1.565 | 0.234 | 0.468 | 0.231 | 0.212 |\\n| | 48 | MSE | 0.400 | 0.347 | 0.352 | 0.490 | 0.431 | 0.535 | 1.032 | 0.356 | 0.511 | 0.347 | 0.339 |\\n| | | MAE | 0.340 | 0.264 | 0.271 | 0.440 | 0.358 | 0.485 | 1.604 | 0.277 | 0.467 | 0.268 | 0.250 |\\n| | 96 | MAE | 0.386 | 0.357 | 0.353 | 0.459 | 0.372 | 0.593 | 1.034 | 0.488 | 0.515 | 0.377 | 0.357 |\\n| | | MSE | 0.324 | 0.283 | 0.282 | 0.413 | 0.296 | 0.604 | 1.615 | 0.473 | 0.470 | 0.303 | 0.276 |\\n| | 192 | MAE | 0.532 | 0.503 | 0.505 | 0.519 | 0.559 | 
0.588 | 1.035 | 0.628 | 0.521 | 0.416 | 0.410 |\\n| | | MSE | 0.538 | 0.485 | 0.489 | 0.489 | 0.561 | 0.586 | 1.626 | 0.664 | 0.479 | 0.351 | 0.386 |\\n| Traffic | 24 | MAE | 0.441 | 0.459 | 0.435 | 0.672 | 0.569 | 0.472 | 0.561 | 0.496 | 0.463 | 0.461 | 0.420 |\\n| | | MSE | 0.787 | 0.833 | 0.788 | 1.207 | 1.082 | 0.821 | 0.966 | 0.496 | 0.696 | 0.713 | 0.762 |\\n| | 48 | MAE | 0.442 | 0.472 | 0.449 | 0.682 | 0.600 | 0.485 | 0.519 | 0.527 | 0.471 | 0.472 | 0.420 |\\n| | | MSE | 0.825 | 0.831 | 0.806 | 1.220 | 1.104 | 0.871 | 0.889 | 0.930 | 0.718 | 0.739 | 0.709 |\\n| | 96 | MAE | 0.442 | 0.480 | 0.452 | 0.695 | 0.617 | 0.512 | 0.472 | 0.533 | 0.480 | 0.478 | 0.434 |\\n| | | MSE | 0.826 | 0.870 | 0.812 | 1.267 | 1.110 | 0.950 | 0.804 | 0.949 | 0.746 | 0.763 | 0.810 |\\n| | 192 | MAE | 0.486 | 0.547 | 0.498 | 0.695 | 0.566 | 0.478 | 0.512 | 0.550 | 0.488 | 0.483 | 0.478 |\\n| | | MSE | 0.869 | 0.992 | 0.901 | 0.695 | 1.037 | 0.861 | 0.866 | 0.997 | 0.766 | 0.770 | 0.886 |\"}", "{\"comment\": \"Dear reviewer, we really appreciate the time and effort that you have dedicated to providing your valuable feedback on improving our manuscript. We are grateful for your insightful comments. Thank you.\"}", "{\"comment\": \"Dear Reviewer iDLq,\\n\\nYou have indicated that submission 10665 is marginally below acceptance. The authors have provided a detailed response.\\n\\nPlease indicate the extend to which their response addresses your concerns and explain your decision to update (or not update) your score.\\n\\nAll the best,\\n\\nThe AC\"}", "{\"summary\": \"In this paper, the authors propose S4M. S4M is an adaption of S4 which can handle missing values by:\\n- Using prototype clusters, look-back information and an encoder to find representations also for time-points where values are missing\\n- By explicitly also incorporating the masking matrix M into the S4-Layers.\\n \\n They evaluate S4M on the standard data for regular time-series forecasting when some data is hidden and show\\n that i) S4M is competitive or often outperforms other S4 and Missing-Value approaches.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The idea of the prototype bank is very compelling and thoughtful, I like it a lot.\", \"The presentation is very good. The paper is written in a manner which makes it comfortable to follow.\", \"Especially having an algorithm for each of the crucial parts helped me a lot.\", \"The results are not looking like fundamental break-throughs, but they are very promising for such a novel approach and there are a lot of ablations studies/hyperparameter experiments.\"], \"weaknesses\": \"The two main weaknesses I identified:\\n\\n- My largest critique point is, that the authors are not comparing at all with recent results from the \\\"Irregular Sampled Time-Series wit Missing Values\\\" Literature. There is a plentitude of recent works solving irregular time-series forecasting in an end-to-end manner via ODEs, modelling latent dynamics or graph modelling.[1-5]. Furthermore, these papers provide a set of standard datasets for time-series forecasting with missing values, thus no need to synthetically make the normal regular datasets irregular.\\n\\n- There are a lot of standard methods missing, where one could do simply linear interpolation to use them in the experiments on which S4 is tested. For example, this work is not referring to important forecasting works like PatchTST or iTransformer at all. 
The results of S4M in Table 1, are way worse then the results of PatchTST (see Table at https://github.com/yuqinie98/PatchTST?tab=readme-ov-file) that it may be the case that PatchTST with 0.06 missing values indeed outperforms S4M.\\n\\n[1] De Brouwer, E., Simm, J., Arany, A., & Moreau, Y. (2019). GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. Advances in neural information processing systems, 32.\\n\\n[2] Yalavarthi, Vijaya Krishna, et al. \\\"GraFITi: Graphs for Forecasting Irregularly Sampled Time Series.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 15. 2024.\\n\\n[3] Schirmer, Mona, et al. \\\"Modeling irregular time series with continuous recurrent units.\\\" International conference on machine learning. PMLR, 2022.\\n\\n[4] Bilo\\u0161, Marin, et al. \\\"Neural flows: Efficient alternative to neural ODEs.\\\" Advances in neural information processing systems 34 (2021): 21325-21337.\\n\\n[5] Kl\\u00f6tergens, Christian, et al. \\\"Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting.\\\" Joint European Conference on Machine Learning and Knowledge Discovery in Databases.\\n\\n## My Current Rating\\nI do really like the idea and think that it has potential, even beyond S4 models for irregular time-series forecasting. However, for a top conference like ICLR, the amount of missing comparison to important related work is too high for recommending acceptance.\", \"questions\": [\"Additionally to the critique points mentioned above, I have the following comments/question:\", \"Have you tested S4M when there are no missing values at all? I would be curious whether your prototype bank is also useful if no values are missing.\", \"Table 3: Do I understand correctly, that you are comparing: Prototyping Bank + Masking (i.e. having m_t in (4)) against only having the prototype bank? I would also like to see Only Masking, i.e. having the mask m_t in (4) but no prototype bank, i.e. replacing o_t with X_t. Is your prototype-bank really needed for irregular time-series forecasting?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer UFK6,\\n\\nWe sincerely thank you for your thoughtful review and valuable feedback. We have carefully addressed each of your questions and provided detailed responses in the rebuttal. We hope to have resolved all your concerns. If you have any further comments, we would be glad to address them before the rebuttal period ends. If our responses address your concerns, we would deeply appreciate it if you could consider raising your score. Your recognition for our novel work means a lot. Thanks again for your time and effort in reviewing our work.\\n\\nRegards,\\nS4M authors\"}", "{\"metareview\": \"Most of the reviewers have appreciated the novelty of the method and its practical relevance, as well as the baselines chosen. Some design choices (such as the prototype bank) were deemed to be new solutions to the problem and of potential interest to the community.\\n\\nThere were concerns about the reproducibility of the method and its computational efficiency, raised by reviewer UFK6. The authors have shared their code and conducted experiments, showing their method does not introduce a large computational overhead compared to S4, and is competitive against baselines from the transformer family. 
The reviewer (who gave a score of borderline reject) did not participate in the discussion, even when prompted. However, I consider the issues they raised as having been addressed by the authors. There were no other reasons stated in the review as arguments to reject the paper.\\n\\nAn issue raised by Reviewer iDLq was the simplicity of the datasets and the need for more ablation studies. The authors have included a more complex dataset and additional ablation studies, which seemed to have convinced the reviewer since he raised his score. I also find these experiments a good addition to the paper.\\n\\nReviewer iyXX also appreciated the new ideas put forward in the paper as well as the authors\\u2019 response with additional experiments comparing against PatchTST and other models, which the reviewer found convincing.\\n\\nReviewer EZj4 raised some questions about modeling choices and hyperparameters, as well as the addition of experiments with various horizons. The authors have, in my option, addressed these issues, though the reviewer did not respond, even when prompted to do so.\\n\\nAll in all, there is sufficient novelty in this method to make it a valuable contribution to ICLR. Reviewers have requested additional experiments, which the authors provided, and which will also strengthen the paper. Thus, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The meta-review contains a summary of the issues raised, and the author responses. The two reviewers who opted to marginally reject the paper did not participate in the discussion. I used my own judgement and determined that the authors addressed their comments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response Part 2\", \"comment\": \"***W1.2 Justification on Datasets Selection.***\\n\\nThere appears to be a misunderstanding on the type of data missing problem we are focusing on in this study. Our work considers cases with block missing patterns in regularly sampled time series, where the observed values occur at consecutive time points (see our Fig. 4). This structure enables the design of an informative representation $o_t$, which is crucial for capturing temporal dependencies effectively. In contrast, standard irregularly sampled time series, like MIMIC and Physionet, do not contain such patterns of consecutive observations in the non-missing time points, making them outside the scope of our study. \\n\\nAlthough our current design does not consider the general irregular sampled data, we appreciate your encouraging recognition that \\\"I do really like the idea and think that it has potential, even beyond S4 models for irregular time-series forecasting.\\\" We also believe in the benefits of introducing a prototype bank beyond the S4 model, and we hope our work lays the foundation to inspire future work along this valuable future direction.\\n\\n***W2.1 Comparison with the linear interpolation method.***\\n\\nThank you for your advice. We have included simple and standard imputation methods such as mean, forward fill (Ffill), and linear decay interpolation in their original forms. To incorporate your suggestion, we have added linear interpolation methods to the following table, which presents experiments conducted on four datasets with $r = 0.24$ and a horizon window of 96. 
The results indicate that, linear interpolation\\u2019s performance is **inferior** to both our proposed approach and the decay method in most cases.\\n\\n| Dataset | Horizon Length | Metric | S4(Mean) | S4(Ffill) | S4(Decay) | S4(Linear) | S4M(Ours) |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| Electricity | 96 | MAE | 0.556 | 0.501 | 0.460 | 0.468 | 0.418 |\\n| | | MSE | 0.570 | 0.479 | 0.409 | 0.425 | 0.366 |\\n| | 192 | MAE | 0.464 | 0.410 | 0.420 | 0.395 | 0.391 |\\n| | | MSE | 0.409 | 0.324 | 0.336 | 0.306 | 0.305 |\\n| | 384 | MAE | 0.472 | 0.420 | 0.424 | 0.403 | 0.389 |\\n| | | MSE | 0.417 | 0.334 | 0.341 | 0.311 | 0.304 |\\n| | 768 | MAE | 0.469 | 0.413 | 0.415 | 0.402 | 0.399 |\\n| | | MSE | 0.413 | 0.328 | 0.331 | 0.315 | 0.318 |\\n| ETTh1 | 96 | MAE | 0.710 | 0.717 | 0.681 | 0.696 | 0.627 |\\n| | | MSE | 0.908 | 0.946 | 0.879 | 0.943 | 0.742 |\\n| | 192 | MAE | 0.644 | 0.659 | 0.640 | 0.671 | 0.609 |\\n| | | MSE | 0.739 | 0.792 | 0.782 | 0.872 | 0.703 |\\n| | 384 | MAE | 0.632 | 0.648 | 0.648 | 0.646 | 0.628 |\\n| | | MSE | 0.710 | 0.768 | 0.779 | 0.782 | 0.710 |\\n| | 768 | MAE | 0.639 | 0.661 | 0.672 | 0.659 | 0.632 |\\n| | | MSE | 0.714 | 0.800 | 0.827 | 0.823 | 0.744 |\\n| Weather | 96 | MAE | 0.421 | 0.381 | 0.378 | 0.399 | 0.362 |\\n| | | MSE | 0.379 | 0.321 | 0.317 | 0.339 | 0.286 |\\n| | 192 | MAE | 0.386 | 0.357 | 0.353 | 0.354 | 0.350 |\\n| | | MSE | 0.324 | 0.283 | 0.282 | 0.276 | 0.269 |\\n| | 384 | MAE | 0.381 | 0.349 | 0.343 | 0.349 | 0.358 |\\n| | | MSE | 0.315 | 0.273 | 0.270 | 0.272 | 0.276 |\\n| | 768 | MAE | 0.381 | 0.351 | 0.342 | 0.399 | 0.375 |\\n| | | MSE | 0.312 | 0.276 | 0.268 | 0.339 | 0.300 |\\n| Traffic | 96 | MAE | 0.487 | 0.569 | 0.529 | 0.568 | 0.485 |\\n| | | MSE | 0.910 | 1.063 | 0.984 | 1.043 | 0.933 |\\n| | 192 | MAE | 0.442 | 0.480 | 0.452 | 0.466 | 0.433 |\\n| | | MSE | 0.826 | 0.870 | 0.812 | 0.842 | 0.787 |\\n| | 384 | MAE | 0.431 | 0.456 | 0.440 | 0.524 | 0.433 |\\n| | | MSE | 0.795 | 0.842 | 0.809 | 0.953 | 0.788 |\\n| | 768 | MAE | 0.432 | 0.449 | 0.434 | 0.439 | 0.429 |\\n| | | MSE | 0.799 | 0.823 | 0.789 | 0.790 | 0.789 |\"}", "{\"title\": \"Please respond to the authors of submission 10665\", \"comment\": \"Dear Reviewer EZj4,\\n\\nWe are at the end of the discussion period, so please take some time to read the response to your review for submission 10665\\n\\nAlso, please indicate the extent to which the response addresses your concerns and whether it changes your score - try to explain your decision.\\n\\nAll the best,\\n\\nThe AC\"}", "{\"comment\": \"Dear reviewer iDLq,\\n\\nWe sincerely thank you for your thoughtful review and valuable feedback. We have carefully addressed each of your questions and provided detailed responses in the rebuttal. We hope to have resolved all your concerns. If you have any further comments, we would be glad to address them before the rebuttal period ends. If our responses address your concerns, we would deeply appreciate it if you could consider raising your score. Your recognition for our novel work means a lot. Thanks again for your time and effort in reviewing our work.\\n\\nRegards, \\nS4M authors\"}", "{\"title\": \"Sincere thanks to all the reviewers\", \"comment\": \"We sincerely thank all reviewers for their time and valuable feedback. 
We are thrilled that the reviewers recognized the strengths of our work, describing our approach as \\u201cpractical\\u201d and \\u201cinnovative\\u201d and noting that the \\u201cidea of the prototype bank is very compelling and thoughtful.\\u201d We are also encouraged by the positive comments on the experimental results, which were described as \\u201cpromising\\u201d and comprehensive, with \\u201ca lot of ablation studies/hyper-parameter experiments.\\u201d Additionally, we appreciate the acknowledgment that our paper is \\u201cwell written\\u201d and \\u201ceasy to follow.\\u201d We have carefully addressed your comments point by point. We appreciate the time and effort you have put into your review, and we welcome any further questions you may have.\"}", "{\"title\": \"Response Part 2\", \"comment\": \"***2. Theoretical justification.***\\n\\nThank you for the questions. The long-term dependency in S4 is achieved using the HiPPO matrix $A$, as shown in (1). This long-term dependency is evident because the current state can be expressed as a convolution of previous states, with the convolution kernels being polynomial in the HiPPO matrix $A$. Our dual-stream processing maintains the HiPPO structure, as described in line 273 of our manuscript. Specifically, the current state remains a convolution of previous states, with the convolution kernel being polynomial in the HiPPO matrix, just as in S4. Therefore, the theoretical results for S4 remain valid in our approach.\\n\\n***3. Additional ablation studies.***\\n\\nThank you for your suggestion on additional ablation studies. We have included an ablation study for the first module, ATPM. In this study, we compare S4M with and without ATPM, highlighting the improvements brought by ATPM, particularly on the Traffic and ETTh1 datasets. 
The following experiments were conducted under the same settings as those in the ablation studies presented in the paper.\\n\\n| Dataset | | Electricity | | ETTh1 | | Weather | | Traffic | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| $\\\\ell_L$ | Metrics | S4M (Ours) | S4M (w/o prototype) | S4M (Ours) | S4M (w/o prototype) | S4M (Ours) | S4M (w/o prototype) | S4M (Ours) | S4M (w/o prototype) |\\n| | | | | Variable Missing | | | | | |\\n| 96 | MAE | 0.369 | +0.011 | 0.571 | +0.044 | 0.336 | +0.020 | 0.442 | +0.024 |\\n| | MSE | 0.282 | +0.010 | 0.624 | +0.091 | 0.267 | +0.206 | 0.786 | +0.125 |\\n| 192 | MAE | 0.357 | +0.010 | 0.568 | +0.045 | 0.320 | +0.600 | 0.381 | +0.030 |\\n| | MSE | 0.261 | +0.009 | 0.598 | +0.090 | 0.261 | +0.002 | 0.685 | +0.092 |\\n| 384 | MAE | 0.359 | +0.009 | 0.584 | +0.029 | 0.334 | +0.006 | 0.383 | +0.026 |\\n| | MSE | 0.264 | +0.009 | 0.613 | +0.064 | 0.256 | +0.008 | 0.700 | +0.065 |\\n| 768 | MAE | 0.362 | +0.020 | 0.599 | +0.028 | 0.341 | +0.016 | 0.383 | +0.026 |\\n| | MSE | 0.269 | +0.002 | 0.649 | +0.058 | 0.266 | +0.011 | 0.697 | +0.074 |\\n| | | | | Timepoint Missing | | | | | |\\n| 96 | MAE | 0.372 | +0.025 | 0.571 | +0.049 | 0.313 | +0.021 | 0.428 | +0.045 |\\n| | MSE | 0.287 | +0.030 | 0.624 | +0.110 | 0.237 | +0.017 | 0.809 | +0.116 |\\n| 192 | MAE | 0.367 | +0.004 | 0.574 | +0.039 | 0.305 | +0.006 | 0.385 | +0.005 |\\n| | MSE | 0.274 | +0.004 | 0.593 | +0.110 | 0.225 | +0.001 | 0.687 | +0.023 |\\n| 384 | MAE | 0.370 | +0.014 | 0.571 | +0.057 | 0.306 | +0.012 | 0.385 | +0.013 |\\n| | MSE | 0.277 | +0.004 | 0.624 | +0.112 | 0.220 | +0.015 | 0.702 | +0.047 |\\n| 768 | MAE | 0.373 | +0.013 | 0.588 | +0.048 | 0.316 | +0.005 | 0.388 | +0.000 |\\n| | MSE | 0.282 | +0.016 | 0.647 | +0.079 | 0.232 | +0.004 | 0.699 | +0.024 |\"}", "{\"title\": \"Reaction To Latest Comment\", \"comment\": \"Dear Authors,\\nthanks for the additional results and changes. I think the modifications and additional experiments strengthen the paper. The authors spend a lot of effort to incorporate the changes I proposed. i thus increased my score.\"}", "{\"summary\": \"The paper presents S4M, an innovative end-to-end framework consisting of the Adaptive Temporal Prototype Mapper (ATPM) and the Missing-Aware Dual Stream S4 (MDS-S4) for multivariate time series forecasting that addresses the challenge of missing data.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed S4M model is innovative, integrating missing data handling within the model architecture.\", \"weaknesses\": \"1. Although the results are promising, the authors did not provide their source code which results in very low reproducibility;\\n2. Computational efficiency comparison is missing;\", \"questions\": \"1. What are the computational costs and scalability of the S4M model, especially when dealing with large-scale multivariate time series data with high missing ratios? How does it compare to the baseline models in terms of training and inference time?\\n2. How does the dual stream processing impact the model's ability to capture temporal dependencies?\\n3. Can the authors confirm whether they implemented these baseline methods using official code or leveraged existing unified Python libraries, such as the Time-Series-Library [1] or PyPOTS [2]? It's important to note that data processing varies significantly among different imputation algorithms. 
Utilizing unified interfaces could help ensure that the experimental comparisons are conducted fairly. \\n\\n### References\\n[1] https://github.com/thuml/Time-Series-Library\\n\\n[2] Wenjie Du. PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series. In KDD MiLeTS Workshop, 2023. https://github.com/WenjieDu/PyPOTS\", \"flag_for_ethics_review\": \"['No ethics review needed.', 'Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for the thoughtful and detailed review. Also, we appreciate that the reviewer acknowledges our prototype bank presents an interesting practical way to tackle missing values. We address the raised concerns below.\\n\\n***1. What is the computational complexity of ATPM vs traditional approaches?*** \\n\\nThanks for the question, we provide the complexity analysis below. An **empirical comparison** of the computational costs of the S4M and other methods is given in the table below. S4M demonstrates **superior efficiency** in both training and inference compared to other baselines.\\n| Method | S4(Mean) | S4(Ffill) | S4(Decay) | BRITS | GRUD | Transformer | Autoformer | BiTGraph | iTransformer | PatchTST | CRUD | Grafiti | S4M(Our) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Flops(M) | 12463.39 | 12463.39 | 12618.52 | 9091.16 | 3813.82 | 17627.87 | 18734.88 | 3185.64 | 565.36 | 392299.02 | 219.57 | 265118.32 | 139191.88 |\\n| Training Time(s) | 0.11282 | 0.11498 | 0.08325 | 0.46920 | 0.19958 | 0.09035 | 0.09613 | 0.24546 | 0.06744 | 0.46017 | 49.40020 | OOM | 0.219381 |\\n| Inference Time(s) | 0.07416 | 0.07983 | 0.06152 | 0.21126 | 0.08756 | 0.06088 | 0.07662 | 0.08122 | 0.04009 | 0.16896 | 4.76765 | OOM | 0.099314 |\\n\\nWe also provide the **complexity analysis** for the core steps in bank writing and reading operations.\\n\\n- **Bank Writing:** Given $B$ training data points in a batch, each with $L$ segments, as we detailed in Algorithm 1 on the page, the core operations in writing the prototype bank involves four main steps: (1) randomly selecting $n$ out of $B \\\\cdot L$ representations, which has a complexity of $O(n)$; (2) computing similarity between $n$ selected representations and $s$ centroids of dimension $R$, leading to $O(n \\\\cdot s \\\\cdot R)$; (3) selecting the maximum similarity for each representation, which requires $O(n \\\\cdot s)$; and (4) updating the clustering via a FIFO-based mechanism, with a cost of $O(n)$. 
Assuming standard operations for similarity computation and FIFO updates, the overall computational complexity is dominated by $O(n \\\\cdot s \\\\cdot R)$, reflecting the influence of the embedding dimension $R$ and the number of centroids $s$\\n- **Bank Reading:** Given $B$ training data points, each with $l$ segments, the procedure involves the following steps: (1) for all $B \\\\cdot l$ segments, compute cosine similarity with $s$ centroids, resulting in a complexity of $O(B \\\\cdot l \\\\cdot s \\\\cdot R)$, where $R$ is the embedding dimension; (2) select the top $K$ centroids for each segment, which takes $O(B \\\\cdot l \\\\cdot s)$ using a partial sort; (3) normalize the similarity values for these $K $ centroids using an exponential function, costing $O(B \\\\cdot l \\\\cdot K)$; and (4) compute the weighted average of these $K$ centroids, which also takes $O(B \\\\cdot l \\\\cdot K \\\\cdot R)$. The overall computational complexity is dominated by $O(B \\\\cdot l \\\\cdot s \\\\cdot R)$, primarily driven by the initial cosine similarity calculations.\"}", "{\"title\": \"Please respond to the authors of submission 10665\", \"comment\": \"Dear Reviewer UFK6,\\n\\nThe discussion period is almost over, so please read the response of the authors of submission 10665 to your review.\\n\\nDoes their response address your concerns? Will you modify your score? Please explain your decision.\\n\\nAll the best,\\n\\nThe AC\"}", "{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for the thoughtful and detailed review. Also, we appreciate that the reviewer acknowledges our proposed S4M model is innovative. We address your concerns below.\\n\\n***W1. Source code.***\\n\\nThanks for the question, the code can be found at the [anonymous link](https://anonymous.4open.science/r/S4M-C3FA/README.md).\\n\\n***W2 & Q1. Computational costs of S4M model and its comparison to the baseline models in terms of training and inference time?***\\n\\nThank you for your valuable feedback. S4M demonstrates **superior efficiency** in both training and inference compared to other baselines. To evaluate the computational cost of S4M, we conducted experiments using the Electricity dataset under the highest missing ratio setting. The experiments were performed with a batch size of 16 and a hidden size of 512.\\n\\nWe observe that S4M (ours) achieves a **lower FLOPS value** compared to other SOTA transformer-based methods, including Grafiti. Also, S4M (ours) is similar to the S4-based methods. The results confirm our motivation to focus on S4-based architecture, given their efficiency (see lines 51-52 of our original submission). Furthermore, S4M demonstrates **shorter training** times than CRUD, PatchTST, BiTGraph, and BRITS. For inference, S4M also outperforms CRUD, PatchTST, and BRITS, making it a **more efficient** choice for both training and inference. 
(For clarification, \\\"OOM\\\" in the tables refers to \\\"Out-of-Memory.\\\")\\n| Method | S4(Mean) | S4(Ffill) | S4(Decay) | BRITS | GRUD | Transformer | Autoformer | BiTGraph | iTransformer | PatchTST | CRUD | Grafiti | S4M(Our) |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Flops(M) | 12463.39 | 12463.39 | 12618.52 | 9091.16 | 3813.82 | 17627.87 | 18734.88 | 3185.64 | 565.36 | 392299.02 | 219.57 | 265118.32 | 139191.88 |\\n| Training Time(s) | 0.11282 | 0.11498 | 0.08325 | 0.46920 | 0.19958 | 0.09035 | 0.09613 | 0.24546 | 0.06744 | 0.46017 | 49.40020 | OOM | 0.219381 |\\n| Inference Time(s) | 0.07416 | 0.07983 | 0.06152 | 0.21126 | 0.08756 | 0.06088 | 0.07662 | 0.08122 | 0.04009 | 0.16896 | 4.76765 | OOM | 0.099314 |\"}", "{\"title\": \"Answer to Rebuttal\", \"comment\": \"Dear Authors,\\nthank you for the rebuttal. My concerns are only partially adressed:\\n- I think that the differentiation to irregular sampled time-series has to be made more explicit in the paper. Furthermore, the fact that you are only considering irregular-sampled time-series with specific patterns of missingness is not clear in the current version of the paper.\\n- Your response part 2: My request was more about doing linear interpolation etc and then having models like PatchTST and iTransformer on top, not S4. Because having a look at PatchTST results without missing values, it stands to reason that it outperforms S4M.\"}", "{\"comment\": \"Dear reviewer EZj4,\\n\\nWe sincerely thank you for your thoughtful review and valuable feedback. We have carefully addressed each of your questions and provided detailed responses in the rebuttal. We hope to have resolved all your concerns. If you have any further comments, we would be glad to address them before the rebuttal period ends. If our responses address your concerns, we would deeply appreciate it if you could consider raising your score. Your recognition for our novel work means a lot. Thanks again for your time and effort in reviewing our work.\\n\\nRegards, \\nS4M authors\"}" ] }
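To make the bank-reading complexity analysis in the rebuttal above concrete, here is a minimal NumPy sketch of the retrieval step it describes: cosine similarity against centroids, top-K selection via a partial sort, exponential normalization, and a weighted average. The function name, shapes, and defaults are illustrative assumptions and are not taken from the actual S4M code; the dominant cost is the similarity matrix, matching the stated O(B·l·s·R) bound.

```python
# Minimal sketch of the bank-reading retrieval step described above (shapes are assumptions).
import numpy as np

def read_prototype_bank(segments: np.ndarray, centroids: np.ndarray, k: int = 4) -> np.ndarray:
    """Return one retrieved prototype per segment: (B*l, R) -> (B*l, R)."""
    # (1) Cosine similarity between every segment and every centroid: O(B*l*s*R).
    seg = segments / (np.linalg.norm(segments, axis=1, keepdims=True) + 1e-8)
    cen = centroids / (np.linalg.norm(centroids, axis=1, keepdims=True) + 1e-8)
    sim = seg @ cen.T                                      # (B*l, s)

    # (2) Top-k centroids per segment via a partial sort: O(B*l*s).
    topk_idx = np.argpartition(-sim, k - 1, axis=1)[:, :k]
    topk_sim = np.take_along_axis(sim, topk_idx, axis=1)   # (B*l, k)

    # (3) Exponential (softmax-style) normalization of the k similarities: O(B*l*k).
    w = np.exp(topk_sim)
    w /= w.sum(axis=1, keepdims=True)

    # (4) Weighted average of the retrieved centroids: O(B*l*k*R).
    return (w[..., None] * centroids[topk_idx]).sum(axis=1)

# Tiny usage example with made-up sizes: 32 segments, 8 centroids, embedding dim 16.
rng = np.random.default_rng(0)
print(read_prototype_bank(rng.normal(size=(32, 16)), rng.normal(size=(8, 16)), k=3).shape)
```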
BkeJro1xps
A simulation-heuristics dual-process model for intuitive physics
[ "Shiqian Li", "Yuxi Ma", "Bo Dai", "Yujia Peng", "Chi Zhang", "Yixin Zhu" ]
The role of mental simulation in human behavior on various physical tasks is widely acknowledged and attributed to the generality of the Intuitive Physics Engine (IPE). However, it remains unclear whether mental simulation is consistently employed across scenarios with different simulation costs, and where its boundary lies. Moreover, the cognitive strategies used beyond this boundary have not been thoroughly investigated. Here, we adopted a pouring-marble task with varied conditions to study the IPE's limits and the strategies that take over beyond them. A human study revealed two distinct error patterns in predicting the pouring angle, separated by a boundary in simulation time. This suggests a possible switch in the underlying reasoning strategy. Our initial experiment on the IPE showed that its correlation with human judgments diminished in scenarios requiring extended simulation time. This observation prompted the exploration of an alternative, heuristics-based mechanism for intuitive physics. We found that a linear heuristic model, relying exclusively on empirical data, replicated human predictions more accurately once the simulation time exceeded a certain boundary. Motivated by these observations, we propose a new framework, the Simulation-Heuristics Model (SHM), which conceptualizes intuitive physics as a dual process: the IPE is predominant only in short-time simulation, whereas a heuristics-based approach is applied once the IPE's simulation time extends beyond the simulation boundary. The SHM aligns more precisely with human behavior across various scenarios and demonstrates superior generalization under different conditions. Crucially, SHM integrates computational methods previously viewed as separate into a unified model and quantitatively characterizes their switching mechanism.
[ "Intuitive physics", "physical reasoning", "mental simulation", "heuristic model" ]
https://openreview.net/pdf?id=BkeJro1xps
https://openreview.net/forum?id=BkeJro1xps
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qQLHO1bkj7", "iJKkHKnI3C", "gzlYSkcwm3", "aU03K69L3G", "IU3X4i19zn" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1729198231771, 1730708459869, 1730951368729, 1733031001502, 1731085840738 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission634/Reviewer_YZE6" ], [ "ICLR.cc/2025/Conference/Submission634/Reviewer_fRVk" ], [ "ICLR.cc/2025/Conference/Submission634/Reviewer_EBaN" ], [ "ICLR.cc/2025/Conference/Submission634/Authors" ], [ "ICLR.cc/2025/Conference/Submission634/Reviewer_u9Qr" ] ], "structured_content_str": [ "{\"summary\": \"This work investigates the role of mental simulation and heuristics in human physics prediction. Whereas previous works have studied mental simulation and heuristic-driven physics prediction in isolation, this paper hypothesizes that humans make use of both of these strategies and switch between them depending on the context and problem difficulty. They design a new \\u201cpouring marble task\\u201d with more diverse physical properties. Humans are asked to judge the tilt angle needed to pour marbles from cups under various setups.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Overall, the paper was well-written and easy to understand. The argument was very clearly laid out in the introduction, and the figures look nice.\", \"weaknesses\": [\"When I first read \\u201cWe employ a grid search method to optimize both \\u03b8 for the strategic transition and the noise parameters \\u03c3 for the IPE, in addition to a group of heuristic parameters \\u03c9 derived from linear regression.\\u201d, I wasn\\u2019t quite sure what that meant. What was the objective being maximized? Further on in the paper, the authors state \\u201cA grid search identified the boundary of 68.2 degrees in simulation time and a dynamic positional noise of 0.2 as optimal for mirroring human judgments.\\\" Does this mean that the Simulation-Heuristics Model is fit to the human data? If so, are there separate cohorts for model fitting and model comparison?\", \"I don\\u2019t understand the newly proposed heuristic model. As stated in the paper, previous work in heuristic models use predefined rules or fit to human data. In this paper, the heuristic model is fit to simulation data. How is this an accurate representation of human heuristics since humans haven\\u2019t seen the examples this model has.\", \"Considering this was only tested on a single problem (pouring), it seems premature to claim this as a general model for resource efficient physics prediction. If correct, we would see this behavior replicated across a variety of tasks using some unified notion of a computational budget. Time is also not a very good proxy for simulation difficulty.\", \"The authors report correlations and RMSE for different models (Heuristic, IPE, SHM), but I couldn't find any statistical tests that compared these models.\", \"Would the findings in this paper not be consistent with a biased heuristic or simulation model? Is there any reason a miscalibrated physics simulation that got the friction, mass, etc parameters wrong wouldn\\u2019t also lead to angle-dependent prediction error?\", \"In figure 1c, red is used for both the mean angle estimate and the A-shape points. 
This was confusing at first glance.\", \"No code is provided, which limits reproducibility.\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new framework - Simulation Heuristic Model (SHM) which is built on top of a linear heuristic model to replicate human prediction as opposed to time-expensive simulation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper asks original questions along the lines of how humans reason and how often they employ simulations vs heuristics to make predictions.\\nThe paper performs a user study to understand the impact and significance of the idea.\", \"weaknesses\": \"The paper is poorly written and presented. Contributions and results - which should be highlighted and form the thesis of the paper - are buried in details.\\nIf I understand correctly, the primary idea in the paper is to approximate a noisy simulation (which is computed using equation 1) using a linear model (represented in equation 2) after a certain time threshold for the simulation is met. There are two issues here - I don't think physical simulations can be represented using a quartic equation. Were other heuristic models considered? Additionally, deciding on the time threshold after which the heuristic should kick in requires having a hold out validation set. How was this done with having only 43 participants?\", \"questions\": \"1. How did other heuristic models do (for example neural nets)? What are the inputs into this heuristic model?\\n2. Are results from a user study of 43 participants statistically significant to claim that this dual process works better than IPE? Was a hold out validation set used for computing results using a time threshold deciding by grid search on a training set?\\n3. If the heuristic (equation 2) is supposed to \\\"mimic/predict\\\" the simulation (equation 1), how is it possible for SHM to outperform IPE in practice? What data was the heuristic trained with?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents a methodology design for looking into intuitive physics engine. A pouring-marble task is designed with various conditions and the results show some interesting behavior in cognitive strategies. Inspired by this, a framework called SHM is proposed for human mental simulation that aligns more precisely with human behavior.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The research topic is interesting and, compared with previous work, the scenario is more complicated and the experiment shows the effectiveness of the new modeling approach.\", \"weaknesses\": \"An important contribution claimed in this paper is that, compared with previous works that mainly focused on a single task, this work provides a systematic methodology for learning heuristics. However, there is only one task in this paper although with varied conditions. I recommend adding another task with similar settings to show the general utility.\", \"questions\": \"The modeling approach aims for a systematic methodology. How general is this model? 
Can this model handle some scenarios that the boundary cannot be described with a single parameter?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a new framework, the Simulation-Heuristics Model (SHM), which conceptualizes intuitive physics as a dual process: Intuitive Physics Engine (IPE) dominates in short-term simulations, while a heuristic-based approach takes over when the IPE\\u2019s simulation extends beyond a certain time boundary.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a new framework, the Simulation-Heuristics Model (SHM), which conceptualizes intuitive physics as a dual process: Intuitive Physics Engine (IPE) dominates in short-term simulations, while a heuristic-based approach takes over when the IPE\\u2019s simulation extends beyond a certain time boundary.\", \"weaknesses\": \"-I don't think this work is suitable for submission to ICLR, as it lacks AI/ML elements, learning representation and primarily consists of human experiments. I would recommend the author consider submitting it to CogSci or another more relevant conference.\\n\\n-Too simple task scenarios, it would more convincing to see how this SHM can helped with other downstream real-world tasks?\\n\\n-How well can existing VLM do in the proposed tasks?\", \"questions\": \"refer to weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
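The reviews above describe SHM as a hard switch between a noisy simulator and a linear heuristic once a simulation-time boundary is crossed, with a fitted boundary of 68.2 degrees and a dynamic positional noise of 0.2. The sketch below is only a schematic reconstruction of that switching rule from the review text; the placeholder simulator, the feature set, and the heuristic weights are invented for illustration and are not the authors' fitted model.

```python
# Schematic reconstruction of the SHM switching rule described in the reviews above.
# The simulator and the heuristic weights are placeholders, not the paper's fitted model.
import numpy as np

BOUNDARY_DEG = 68.2   # switching boundary (pouring angle as a proxy for simulation time)
POS_NOISE = 0.2       # dynamic positional noise level reported in the thread

def noisy_simulation(features: np.ndarray, rng: np.random.Generator) -> float:
    """Stand-in for the IPE: a placeholder physics estimate perturbed by positional noise."""
    base_angle = 30.0 + 5.0 * float(features.sum())
    return base_angle + rng.normal(scale=POS_NOISE * base_angle)

def linear_heuristic(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Stand-in for the empirical linear heuristic (weights would come from regression)."""
    return float(features @ weights + bias)

def shm_predict(features: np.ndarray, weights: np.ndarray, bias: float,
                rng: np.random.Generator) -> float:
    """Run the simulator; fall back to the heuristic once the boundary is exceeded."""
    simulated = noisy_simulation(features, rng)
    return simulated if simulated <= BOUNDARY_DEG else linear_heuristic(features, weights, bias)

rng = np.random.default_rng(0)
feats = np.array([0.5, 1.0, 2.0])   # e.g., cup width, marble count, fill level (made up)
print(shm_predict(feats, weights=np.array([3.0, 4.0, 10.0]), bias=20.0, rng=rng))
```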
BkR4QG4azn
GFlowNets Need Automorphism Correction for Unbiased Graph Generation
[ "Hohyun Kim", "Seunggeun Lee", "Min-hwan Oh" ]
Generative Flow Networks (GFlowNets) are generative models capable of producing graphs. While GFlowNet theory guarantees that a fully trained model samples from an unnormalized target distribution, computing state transition probabilities remains challenging due to the presence of equivalent actions that lead to the same state. In this paper, we analyze the properties of equivalent actions in the context of graph generation tasks and propose efficient solutions to address this problem. Our theoretical analysis reveals that naive implementations, which ignore equivalent actions, introduce systematic bias in the sampling distribution for both atom-based and fragment-based graph generation. This bias is directly related to the number of symmetries in a graph, a factor that is particularly critical in applications such as drug discovery, where symmetry plays a key role in molecular structure and function. Experimental results demonstrate that a simple reward-scaling technique not only enables the generation of graphs that closely match the target distribution but also facilitates the sampling of diverse and high-reward samples.
[ "GFlowNet", "graph generation", "molecule optimization" ]
Reject
https://openreview.net/pdf?id=BkR4QG4azn
https://openreview.net/forum?id=BkR4QG4azn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xBs61U3vvT", "w3DPDhVKU1", "vgDQaJYuPC", "uZCD0HW2mn", "m0xxgmi3aX", "lTX4umOXrh", "kHRuEWArm5", "i0ruCqAEAq", "ebp9sWtKfJ", "daU3Fn1mu5", "btZSYyvSwM", "aszF1tg1z5", "adIgKKe9O3", "aCiN2pX5oM", "ZP7D0CxTRF", "TYEo5ffimL", "RB0lw76NTj", "NrR1IDM7uF", "Mk4pRN7u7q", "Lm7CCJaGy8", "LT9UWfctOM", "L7NCaXbCmF", "KfqC2HB2jH", "KYHKKuQjOK", "I32v9ZoYLX", "GZ9YCXTTYD", "E9vBibYDqi", "9v7PdHckak", "9RLMo7SsOw", "5qVjcWb5lN" ], "note_type": [ "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "comment" ], "note_created": [ 1732288461465, 1732076144787, 1731994059134, 1732300854678, 1733062983382, 1732108417323, 1733063422035, 1730664734391, 1731993094594, 1732537869054, 1731993820912, 1730691169418, 1731987306477, 1732462606280, 1732248334094, 1732802550177, 1730693492520, 1733062839109, 1734690862136, 1729911285760, 1730426216897, 1731992889935, 1732625752327, 1732462823800, 1732536678975, 1731985325681, 1732499127178, 1737523936455, 1731981515911, 1732563800576 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "~Emmanuel_Bengio1" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_Rf27" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_Rf27" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_ynfw" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_6mnv" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_6MW9" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_ynfw" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Area_Chair_RmuF" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_mRgy" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_6MW9" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_6MW9" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "ICLR.cc/2025/Conference/Submission8843/Reviewer_6mnv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8843/Authors" ], [ "~Tiago_Silva4" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewers,\\n\\nWe provide a detailed comparison with the \\u201cBaking Symmetry into GFlowNets\\u201d paper.\\n\\nFirst, we were pleased to discover that the original authors of GFlowNets had previously recognized this issue and proposed a method to address it\\u2014further evidence that this problem is both significant and well-motivated.\\n\\n---\\n| | **\\u201cBaking 
Symmetry into GFlowNets\\u201d** | **Ours** |\\n| --- | --- | --- |\\n| **Motivation** | Incorporate internal symmetries within the generation process | Train with exact target distribution |\\n| **Method** | Approximately compute equivalent actions at each transition using positional encodings of a graph. Then, sum their probabilities. | Scale rewards by the order of automorphisms. |\\n| **Types (Generality)** | Node types | Node types, edge types, and fragments |\\n| **Theory** | No theoretical guarantees | Theoretical guarantees on exact learning is provided. The bias without corrections amounts to $\\\\|\\\\text{Aut}(s)\\\\|$. |\\n| **Experiment** | Synthetic graphs | Both molecules from real-world data and synthetic graphs |\\n| **Computation** | Multiple positional encoding computations are required at each transition. | Computation of $\\\\|\\\\text{Aut}(s)\\\\|$ is required once for each trajectory. For the approximate method, a summation operation is required over the number of fragments.|\\n---\\n\\nThe paper \\u201cBaking Symmetry into GFlowNets\\u201d does not provide sufficient context for full reproduction. However, the open-source `gflownet` code includes the function `get_idempotent_actions` , which we believe serves as an implementation of the paper\\u2019s method for exact isomorphism tests. We present the computational cost of performing isomorphism tests using `get_idempotent_actions`, measuring only the compute time of the function call and excluding any preparation code. The QM9 test data (size: 13,389) were used, with trajectories sampled using a uniform backward policy, resulting in an average of 12.72 transitions per trajectory. Although the function must be called for each transition, we report the total cost per trajectory for comparison. Compute times were measured separately for forward and backward actions, and these must be summed to allow a direct comparison with our method.\\n\\n---\\n| | `get_idempotent_actions` (\\u201cBaking Symmetry into GFlowNets\\u201d) | Our method using *bliss* |\\n| --- | --- | --- |\\n| Forward actions | 26.69 ms \\u00b1 19.93 | - |\\n| Backward actions | 4.56 ms \\u00b1 5.38 | - |\\n| Compute time per trajectory | **31.24 ms \\u00b1 21.16** | **0.010 ms \\u00b1 0.008** |\\n---\\n\\nThe analysis we provided in our paper for exact isomorphism tests assumes the use of a graph hashing algorithm. The large computational cost of `get_idempotent_actions` arises from its computationally expensive pairwise comparison of actions.\\n\\nWe also observed that the code selectively applies corrections, skipping those for backward equivalent actions when uniform backward policy is used (i.e., when `do_parameterize_p_b` is set to `False`). However, as demonstrated in our paper, equivalent actions should be accounted for regardless of the type of backward policy. We believe that misconceptions on this topic may stem from it being relatively underexplored.\\n\\nWe hope this provides sufficient context for comparisons with prior work. Our method offers an efficient solution to the automorphism problem.\"}", "{\"comment\": \"Dear authors, dear reviewers,\\n\\nI hope this comment is taken in the constructive spirit it is intended. This paper is a close analog to our NeurIPS 2023 AI for Science Workshop paper, \\\"[Baking Symmetry into GFlowNets](https://arxiv.org/abs/2406.05426)\\\", by George Ma, myself, Yoshua Bengio, & Dinghuai Zhang. 
In addition, the code provided in the supplementary is, by extrapolating from the appendix and simple inspection, a derivative of an open source `gflownet` repository; of which I am the main contributor and maintainer; and in which features to correct for so-called equivalent actions (idempotent actions in `gflownet`) were [merged into trunk](https://github.com/recursionpharma/gflownet/pull/42) on Jan 19, 2023. \\n\\nTo be clear, the core methodological contribution of this paper is to correct flows by counting automorphisms exactly. In contrast, our paper proposes exactly that, as well as the use of positional encoding matching as an efficient alternative to auto/isomorphism testing.\\n\\nWe understand that this could have happened unintentionally, and we appreciate the effort put into this research, which formalizes and tests this issue more thoroughly than our prior work. That being said, we believe our work provides valuable context and prior art for this submission, and we find this situation disappointing. A simple search using relevant keywords would have easily revealed our paper and the associated code.\\n\\nThank you for your understanding.\"}", "{\"comment\": \"# Computational Cost\\n\\nWhile computing the exact $|\\\\text{Aut}(s)| $ has inherent complexity, as discussed in the paper, this complexity is unavoidable for exact computation. However, irrespective of the computational cost, fixing the sampling bias due to the ignorance of equivalent actions is a fundamental issue that needs to be resolved. **This correction introduces an inherent computational cost, but it is necessary to maintain the consistency of sampling**. In practice, fast heuristic algorithms often perform well, particularly for relatively smaller graphs, and significantly reduce the computational overhead associated with calculating $ |\\\\text{Aut}(s)| $.\\n\\nWe present additional experimental results measuring the compute, as shown below. Note that the scale of the experiments in our paper corresponds to QM9 and ZINC250k.\\n\\n---\\n| Dataset | Sample size | Avg. number of atoms (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _bliss_ (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _nauty_ (mean \\u00b1 std) |\\n|-|-|-|-|-|\\n| QM9| 133885| 8.80 \\u00b1 0.51| 0.010 ms \\u00b1 0.008 | 0.019 ms \\u00b1 0.079|\\n| ZINC250k | 249455| 23.15 \\u00b1 4.51| 0.022 ms \\u00b1 0.010| 0.042 ms \\u00b1 0.032|\\n| CEP | 29978| 27.66 \\u00b1 3.41| 0.025 ms \\u00b1 0.014| 0.050 ms \\u00b1 0.076|\\n| *Large | 304414| 140.07 \\u00b1 49.38|-| 0.483 ms \\u00b1 12.600|\\n---\\n\\n*Large: the largest molecules in PubChem, data retrieved from https://github.com/danielflamshep/genmoltasks. This data is used in the paper \\u201cLanguage models can learn complex molecular distributions.\\u201d\\n\\n**Experiments were conducted on an Apple M1 processor.\\n\\n---\\n\\nWhen compared to the cost of sampling trajectories, which involves multiple forward passes through a neural network, the compute time for $|\\\\text{Aut}(s)|$ remains still negligible. Also it is important to note that our proposed method requires computing automorphisms only once per trajectory. For comparison, we report the speed of molecular parsing algorithms measured using ZINC250k: 0.06 ms \\u00b1 0.70 (SMILES \\u2192 molecule) and 0.04 ms \\u00b1 0.05 (molecule \\u2192 SMILES). The combination of two parsing steps is often used to check the validity of a given molecule in various prior works. 
In words, computing $ |\\\\text{Aut}(s)|$ is in an order of magnitude faster than validity checking algorithm.\\n\\nWe used the *bliss* algorithm in our paper. It is easy to use as it is included in the igraph package and is fast enough for our purposes. For large molecules, we can still count automorphisms in few milliseconds using the *nauty* package as can be seen in the table. We observed that the pynauty package does not natively support distinguishing between different edge types, requiring us to transform the input graphs by attaching virtual nodes to handle this limitation. The reported time in the table reflects these preprocessing steps.\\n\\nWhile we believe the compute time is already minimal considering current applications, we provide two more recipes to even further improve the run time. \\n\\n- Data processing tasks can be easily parallelized across multiple CPUs. Since GFlowNet is an off-policy algorithm, $ |\\\\text{Aut}(s)|$ can be computed concurrently with the policy's learning process.\\n- For large graphs, fragment-based generation is highly likely to be employed. In such cases, we can utilize an approximate correction formula, as outlined in the paper.\\n\\nIn conclusion, the computational overhead of computing automorphism in practice be minor relative to computation of the entire pipeline.\"}", "{\"comment\": \"Thanks for the clarification.\\n\\nCould you provide the running time comparison between w/ the proposed correction and w/o the proposed correction? It's hard to show that the time overhead is minor from these numbers in the table. What really matters is the relative time overhead.\"}", "{\"comment\": [\"Thank you for your valuable feedback and for recognizing the contributions of our work. We sincerely hope that we have addressed your feedback through the revisions to our paper. In particular, we have included:\", \"Comparisons to \\\"Baking Symmetry into GFlowNets\\\" in section 2 and Appendix B.\", \"\\u201cComputational Cost\\u201d section on Appendix G.\", \"New notations to remove notation overloads.\", \"Limitations in section 7.\", \"Regards,\", \"Authors\"]}", "{\"comment\": \"Dear Dr. Emmanuel Bengio,\\n\\nThank you for bringing this to our attention and for your constructive feedback. We sincerely appreciate the opportunity to clarify and address this oversight.\\n\\nFirst, we regret that we were unaware of NeurIPS 2023 AI for Science Workshop paper, *\\\"[Baking Symmetry into GFlowNets](https://arxiv.org/abs/2406.05426).*\\\" Your work on addressing isomorphic (or \\\"equivalent\\\") actions and integrating symmetry considerations into GFlowNets is directly relevant to our research. Had we been aware of your paper, we would have cited and compared it to our work to better position our research within the existing literature.\\n\\nAs you have noted, we utilized the open-source `gflownet` repository, which we referenced in our paper. While we were aware of the option to correct for \\u201cidempotent actions,\\u201d we found that the implementation enumerates isomorphic actions by performing several isomorphism tests to identify exact isomorphic actions at each transition (get_idempotent_actions function). As noted in the \\\"Baking Symmetry into GFlowNets\\\" paper and in the comments within the function implementation, while this is one of the straightforward methods for correction, it appears to be slow. We question its scalability for real world scenarios. 
This led us to the mistaken belief (before your comment) that no prior work addressing this issue existed.\\n\\nOnce again, we appreciate your thoughtful feedback and your acknowledgment of our contributions to the additional formalization and testing. In the remaining time, we will update our paper to include comparisons with your work. Thank you for your understanding and for creating GFlowNet.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for taking the time to engage with our rebuttal. We respect your comments and would like to address your remaining concerns in more detail.\\n\\n1. **Novelty and Contributions:**\\n \\n We believe our paper presents at least three key novelties, each with significant implications for future research:\\n \\n - **Rigorous theoretical analysis:** We have identified the exact bias present in GFlowNet training. While the problem was previously noted, our work is the first to formulate the problem, theoretically justifying the motivation for employing the correction.\\n - **Novel Correction Method**: We proposed a practical solution to address this bias, making unbiased GFlowNet training feasible in practice.\\n - **Unbiased Model Likelihood Estimator**: We introduced a novel model likelihood estimator, which serves as a fundamental measure for evaluating generative models.\\n\\n\\n2. **Correctness Measures:**\\n \\n In the revised version, we included FCS [1] as an evaluation metric to measure model correctness. We agree that FCS is an intuitive and appropriate alternative for measuring correctness, but it should be used in combination with our model likelihood estimator. As such, we plan to further explore the aspects of FCS when used with our proposed estimator, as it has not been previously tested for graph generation tasks.\\n\\n [1]\\u00a0https://openreview.net/forum?id=B8KXmXFiFj\\n \\n\\n\\n3. **Empirical Distributions and Model Evaluation:**\\n \\n For synthetic graphs, exact marginal state probabilities can be computed directly, which eliminates the need for either Eq. 3 or empirical distributions for evaluation. However, we understand your concerns about using Eq. 3 for model evaluation. To address this, we will include additional experimental results comparing different model likelihood estimators in conjunction with the FCS metric in a future version of the paper.\\n \\nWe hope this clarifies our contributions and provides further insights into our work. We would greatly appreciate any reconsideration of the assigned score.\\n\\nRegards,\\n\\nAuthors\"}", "{\"summary\": \"This work first studies the properties of equivalent actions when applying GFlowNets for graph generation. Equivalent actions denote the set of actions that lead to isomorphic graphs at each step of the autoregressive generation process. This work provides a theoretical analysis on the impact of ignoring equivalent actions and points out that it would introduce bias in the sampling distribution. With this insight, this work further proposes a simple correction on the GFlowNet objectives by using the order of the automorphism group to account for equivalent actions. 
This can correct the reward for highly symmetric graphs.\\n\\nExperiments on small graph generation and small molecule generation are conducted to show the performance of the proposed correction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) It is really interesting and valuable to the community to identify the impact of ignoring equivalent actions in GFlowNets for graph generation. The theoretical analysis is quite sound from my reading and I think it is valuable to other readers.\\n\\n(2) The theoretical results are quite elegant, thus leading to a simple correction to the original GFlowNets objectives. It is quite enjoyable to see that the correction term is the order of the automorphism group.\\n\\n(3) The experiments can show that with such a simple correlation, the sampling bias and resulting performance are notably improved, which can support the theoretical analysis and the proposed corrected objectives straightforwardly.\\n\\n(4) The paper is well written.\", \"weaknesses\": \"(1) It looks really computationally expensive to evaluate the order of the automorphism group and the complexity could increase exponentially with the size of the graph. I understand that the paper provides some analysis on the computation. However, the experimental study on the complexity is missing, while it is very important to assess the practical usefulness of the proposed idea.\\n\\n(2) I am a bit concerned about the practicality of the method. The experiments are mainly on small graph and small molecule generation. It is unclear if this method can be scalable to generate large molecules.\", \"questions\": \"See the weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# Computational Cost\\n\\nWhile computing the exact $|\\\\text{Aut}(s)| $ has inherent complexity, as discussed in the paper, this complexity is unavoidable for exact computation. However, irrespective of the computational cost, fixing the sampling bias due to the ignorance of equivalent actions is a fundamental issue that needs to be resolved. **This correction introduces an inherent computational cost, but it is necessary to maintain the consistency of sampling**. In practice, fast heuristic algorithms often perform well, particularly for relatively smaller graphs, and significantly reduce the computational overhead associated with calculating $ |\\\\text{Aut}(s)| $.\\n\\nFurthermore, when compared to the cost of sampling trajectories, which involves multiple forward passes through a neural network, the compute time for $|\\\\text{Aut}(s)|$ remains still negligible. Also it is important to note that our proposed method requires computing automorphisms only once per trajectory. To address your comment, we provide additional experimental results measuring the compute, as shown below. Note that the scale of the experiments in our paper corresponds to QM9 and ZINC250k.\\n\\n---\\n| Dataset | Sample size | Avg. 
number of atoms (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _bliss_ (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _nauty_ (mean \\u00b1 std) |\\n|-|-|-|-|-|\\n| QM9| 133885| 8.80 \\u00b1 0.51| 0.010 ms \\u00b1 0.008 | 0.019 ms \\u00b1 0.079|\\n| ZINC250k | 249455| 23.15 \\u00b1 4.51| 0.022 ms \\u00b1 0.010| 0.042 ms \\u00b1 0.032|\\n| CEP | 29978| 27.66 \\u00b1 3.41| 0.025 ms \\u00b1 0.014| 0.050 ms \\u00b1 0.076|\\n| *Large | 304414| 140.07 \\u00b1 49.38|-| 0.483 ms \\u00b1 12.600|\\n---\\n\\n*Large: the largest molecules in PubChem, data retrieved from https://github.com/danielflamshep/genmoltasks. This data is used in the paper \\u201cLanguage models can learn complex molecular distributions.\\u201d\\n\\n**Experiments were conducted on an Apple M1 processor.\\n\\n---\\n\\nFor comparison, we report the speed of molecular parsing algorithms measured using ZINC250k: 0.06 ms \\u00b1 0.70 (SMILES \\u2192 molecule) and 0.04 ms \\u00b1 0.05 (molecule \\u2192 SMILES). The combination of two parsing steps is often used to check the validity of a given molecule in various prior works. In words, computing $ |\\\\text{Aut}(s)|$ is in an order of magnitude faster than validity checking algorithm. Even if we compute automorphisms for all intermediate states, this amounts to less than 20x increase for small molecules, which is less than a millisecond.\\n\\nWe used the *bliss* algorithm in our paper. It is easy to use as it is included in the igraph package and is fast enough for our purposes. For large molecules, we can still count automorphisms in few milliseconds using the *nauty* package as can be seen in the table. We observed that the pynauty package does not natively support distinguishing between different edge types, requiring us to transform the input graphs by attaching virtual nodes to handle this limitation. The reported time in the table reflects these preprocessing steps.\\n\\nWhile we believe the compute time is already minimal considering current applications, we provide two more recipes to even further improve the run time. \\n\\n- Data processing tasks can be easily parallelized across multiple CPUs. Since GFlowNet is an off-policy algorithm, $ |\\\\text{Aut}(s)|$ can be computed concurrently with the policy's learning process.\\n- For large graphs, fragment-based generation is highly likely to be employed. In such cases, we can utilize an approximate correction formula, as outlined in the paper.\\n\\nIn conclusion, the computational overhead of computing automorphism in practice be minor relative to computation of the entire pipeline.\"}", "{\"comment\": \"I thank the authors for their responses. After reading them, the other reviews, Emmanuel's comment, and the additional comparison to \\\"Baking Symmetry into GFlowNets\\\", I am still leaning towards acceptance and would like to keep my initial score. Importantly, the authors should add the comparison to the revised paper.\"}", "{\"comment\": \"Thanks for appreciating our work and providing helpful feedback!\\n\\n# Experiments\\n\\n### **Implications of experiments: top-k diversity and reward**\\n\\nThe purpose of measuring top-k diversity and reward is not to demonstrate the correctness of the proposed method. Instead, the experiment aims to provide insights into how unbiased distribution benefits downstream tasks, as diverse and high-reward molecules are crucial for drug discovery. 
To assess the correctness of our method, we used Pearson correlation.\\n\\nThe effects of bias correction depend on the landscape of the given reward function. If many high-reward molecules are also highly symmetric, the proposed method is more likely to identify these molecules compared to methods without correction. Conversely, if only a few high-reward molecules are symmetric, the correction does not guarantee strong performance in terms of top-k diversity and reward. However, regardless of the task, it is essential to recognize the effects of the correction in advance. Without corrections, there is a risk of inadvertently missing candidate molecules.\\n\\n### **Datasets**\\n\\nTo clarify, GFlowNets are trained based on a given reward function and, in principle, do not require a dataset. We referenced the QM9 dataset because the reward function used in our experiments was trained on QM9, which provides important molecular properties, the HOMO-LUMO gap.\\n\\nThe ZINC dataset, on the other hand, was used to construct the fragment vocabulary. However, it was not directly utilized beyond this, as the purpose of GFlowNet is not to learn a distribution from the data but rather to generate graphs guided by the reward function.\\n\\nWe hope this explanation addresses your concern.\\n\\n### **Choice of reward exponent**\\n\\nWe found that prior work used wildly different reward exponents $\\\\beta$ for their experiments. For example, the very first GFlowNet paper used $\\\\beta=10$ for sEH experiment, while Multi-objective GFlowNet used 96, and LS-GFN used 6. Our reasoning was that if we use high reward exponent, the training depends more on exploration algorithm and requires longer training, so we chose modest values.\\n\\n# Technical Contributions\\n\\n### **The first paper to study the theoretical foundations on equivalent actions**\\n\\nTo the best of our knowledge, ours is the first paper to study the theoretical foundations of the equivalent action problem, both within the GFlowNet (to our best knowledge, and graph generation communities). While the problem might seem straightforward in hindsight, we would like to emphasize that this recognition often comes after the issue has been formally identified and analyzed. Such seemingly straightforward findings can have a profound impact, as they address fundamental challenges that, once resolved, open new avenues for research and application\\u2014something we strongly believe as a strength of our work, not a limitation.\\n\\nWe also highlight that our findings go beyond correcting GFlowNet\\u2019s sampling distribution. We introduce a novel method for estimating model likelihood, which has significant implications for a variety of graph-related tasks. This dual contribution demonstrates the broader relevance and potential influence of our work in the field.\\n\\n### **Impact of our findings on graph generation**\\n\\nAlthough we are not certain how previous works tackled the equivalent action problem, it is highly likely that they employed the approximation $p(s'|s) \\\\approx p(G'|G)$, either intentionally or unintentionally, as evidenced by some open-source GFlowNet implementations. While this approximation leads to an incorrect sampling distribution, we believe that the previous experimental results remain valid, provided the metrics used are consistent and the results are interpreted carefully with the problem in mind. 
However, we do believe that performance can be improved with the correction we propose, and we recommend that this correction be included in every future work. In addition, we believe our formulation that distinguishes states and graphs is also helpful for clarifying problems in other graph tasks as well; we briefly summarized how our findings can be applied to methods that introduce \\u2018node ordering\\u2019 as a variable in Appendix D.\\n\\n# Presentation\\n### **Notation**\\nWe will revise the manuscript and include the table of notations to make it clearer and more user-friendly in the updated version, specifically by distinguishing between states/graphs and transitions/terminal states. \\n\\n### **Limitations**\\nThank you for your feedback. We will address the discussion on limitations of our work in the revised version.\\n\\n### **Theorem 2 \\u2192 Corollary**\\nThank you for your suggestion. We are open to modifying Theorem 2 into a Corollary in the revised version, as it is an implication from Theorem 1.\"}", "{\"summary\": \"This work points out that automorphic actions may cause GFlowNets to artificially over/undersample terminal states compared to the target distribution. They also propose a fix by re-scaling the reward to account for the size of the automorphism group of terminal states. They illustrate the pathology and their fix in a toy example and a molecule generation task.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work is overall well-written and easy to follow;\", \"It shows that a common practice of treating graphs as if they were the GFlowNet states leads to incorrect sampling --- implying that, to some extent, there is a series of incorrect experiments in the GFlowNet literature;\", \"Authors provide a quick fix to the issue.\"], \"weaknesses\": [\"It appears Figure 3 uses Equation 3 to compute the final state probabilities. I am not sure this is a fair evaluation. I suggest the authors use the empirical approximations of the distributions over terminal states (based on GFlowNet samples) for comparison. For instance, measuring L1 between the empirical sampling distribution and the target.\", \"The metrics in Table 1 have no direct relationship to goodness-of-fit. I understand enumerating the terminal states is impossible for extensive supports, making computing the L1 distance to the target unfeasible. Nonetheless, authors could use the FCS [1] as a proxy. Otherwise, we cannot draw conclusions about sampling correctness in large environments.\", \"Authors said the additional cost of running BLISS in the final states is negligible. I reckon this should be task-specific. This shouldn't intuitively be negligible if all intermediate states are also final. Please elaborate on the discussion and provide numbers/experimental evaluations.\", \"The experimental campaign is relatively short compared to recent works on GFlowNets.\", \"While I value the authors' contribution, I believe their contributions and derivations are somewhat straightforward and the work's novelty is limited.\", \"[1] https://openreview.net/forum?id=B8KXmXFiFj\"], \"questions\": [\"It would be nice to see an illustration of the bias authors point to using a uniform target. 
Then, plotting the marginal over the size of automorphism relations for each sample should highlight this bias.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for appreciating our work and providing a great summary!\\n\\n# Computational Cost\\n\\nWhile computing the exact $ |\\\\text{Aut}(s)| $ has inherent complexity, as discussed in the paper, this complexity is unavoidable for exact computation. However, irrespective of the computational cost, fixing the sampling bias due to the ignorance of equivalent actions is a fundamental issue that needs to be resolved. **This correction introduces an inherent computational cost, but it is necessary to maintain the consistency of sampling**. In practice, fast heuristic algorithms often perform well, particularly for relatively smaller graphs, and significantly reduce the computational overhead associated with calculating $ |\\\\text{Aut}(s)| $.\\n\\nFurthermore, our proposed method requires computing automorphisms only once per trajectory. We present additional experimental results measuring compute time below. Note that the scale of the experiments in our paper corresponds to QM9 and ZINC250k.\\n\\n---\\n| Dataset | Sample size | Avg. number of atoms (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _bliss_ (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _nauty_ (mean \\u00b1 std) |\\n|-|-|-|-|-|\\n| QM9 | 133885 | 8.80 \\u00b1 0.51| 0.010 ms \\u00b1 0.008| 0.019 ms \\u00b1 0.079|\\n| ZINC250k | 249455 | 23.15 \\u00b1 4.51| 0.022 ms \\u00b1 0.010| 0.042 ms \\u00b1 0.032|\\n| CEP | 29978 | 27.66 \\u00b1 3.41| 0.025 ms \\u00b1 0.014| 0.050 ms \\u00b1 0.076|\\n| *Large | 304414 | 140.07 \\u00b1 49.38| -| 0.483 ms \\u00b1 12.600|\\n---\\n\\n*Large: the largest molecules in PubChem, data retrieved from https://github.com/danielflamshep/genmoltasks. This data is used in the paper \\u201cLanguage models can learn complex molecular distributions.\\u201d\\n\\n**Experiments were conducted on an Apple M1 processor.\\n\\n---\\n\\nCompared to sampling trajectories, which involves multiple forward passes through a neural network, the compute time for $ |\\\\text{Aut}(s)|$ is negligible. For comparison, we report the speed of molecular parsing algorithms measured using ZINC250k: 0.06 ms \\u00b1 0.70 (SMILES \\u2192 molecule) and 0.04 ms \\u00b1 0.05 (molecule \\u2192 SMILES). The combination of two parsing steps is often used to check the validity of a given molecule in various prior works. In words, computing $ |\\\\text{Aut}(s)|$ is in an order of magnitude faster than validity checking algorithm.\\n\\nOur primary focus in this work is on **small molecule generation for drug discovery**, where smaller molecular sizes are most relevant. These sizes align with the practical requirements of many real-world drug discovery tasks, making our experiments and methodology well-suited to this domain.\\n\\nThat said, we emphasize that our method is not inherently limited to small molecules and can extend to larger molecules. The scalability of the approach depends on the computational efficiency of the symmetry calculations, and modern graph-processing tools enable handling larger molecular structures effectively. 
While the specific experiments in our paper focus on small molecules, the underlying principles and methodology remain applicable to larger graphs, provided appropriate computational resources and preprocessing techniques are employed. Furthermore, the compute time for counting automorphisms for large molecules is as small as few milliseconds as reported in the table.\\n\\nWhile we believe the compute time is already minimal considering current applications, we provide two more recipes to even further improve the run time.\\n\\n- Data processing tasks can be easily parallelized across multiple CPUs. Since GFlowNet is an off-policy algorithm, $\\\\text{|Aut(s)|}$ can be computed concurrently with the policy's learning process.\\n- For large graphs, fragment-based generation is highly likely to be employed. In such cases, we can utilize an approximate correction formula, as outlined in the paper.\\n\\nIn conclusion, the computational overhead of computing automorphism in practice is minor relative to computation of the entire pipeline.\\n\\nWe hope this addresses your concerns.\"}", "{\"comment\": \"Thanks for giving us the opportunity to further clarify our method.\\n\\nThe table below summarizes the runtime for different configurations of atom-based and fragment-based generation methods. To ensure fairness, we re-run all experiments to disable parallel computation, using single processor and GPU. Training steps were limited to 1,000, with all other settings kept consistent with the original paper. We report (mean \\u00b1 std) with three runs.\\n\\n---\\n| | ***Atom** | ****Fragment** |\\n| --- | --- | --- |\\n| No corrections (Vanilla GFlowNet) | 44.60 min \\u00b1 4.69 | 23.84 min \\u00b1 0.45 |\\n| Exact reward scaling (**ours**) | 49.47 min \\u00b1 3.14 | 27.17 min \\u00b1 2.62 |\\n| Approximate reward scaling (**ours**) | - | 24.92 min \\u00b1 2.96 |\\n| Exact isomorphism tests (Ma et al. [1]) | 276.96 min \\u00b1 6.28 | 385.12 min \\u00b1 12.12 |\\n---\\n\\n***Atom:** atom-based generation, with rewards given by a proxy trained on QM9 dataset.\\n\\n****Fragment:** fragment-based generation, with rewards given by a proxy that predicts binding energy to the sEH target.\\n\\n[1] Ma et al., \\u201cBaking Symmetry into GFlowNets,\\u201d 2024.\\n\\n---\\n\\nWhen no corrections are applied, the fragment-based method is faster due to its shorter trajectories. However, when exact isomorphism tests are introduced, the computational cost increases significantly. Specifically, the fragment-based method with exact isomorphism tests incurs the highest computational cost (385\\u2009min), reflecting the impact of handling larger molecules.\\n\\nOn the other hand, our method introduces minimal additional overhead, making it a practical alternative for both atom-based and fragment-based generation tasks, as the differences in runtime are within the standard deviations. Additionally, we used open-source code for the experiments, making only minor changes to the original implementation. Consequently, there is some additional overhead due to the conversion of data types. We believe this overhead could be eliminated if our method were seamlessly integrated into the pipeline.\\n\\nWe hope this addresses your concerns.\"}", "{\"comment\": \"Thank you for the detailed response. 
However, I would like to defer the final assessment until the authors provide a detailed comparison with \\\"Baking Symmetry into GFlowNets.\\\"\"}", "{\"comment\": \"Dear reviewers,\\n\\nThank you for your thoughtful and valuable feedback. We have carefully revised our paper to address your comments, aiming to improve clarity and provide a more comprehensive presentation. Notably, we have included thorough comparisons to the workshop paper *\\u201cBaking Symmetry into GFlowNets\\u201d* to offer readers additional context and to appropriately credit prior work, highlighting how our contributions build upon and extend the existing literature.\\n\\nWe remain confident that our work provides significant contributions to the community by addressing the fundamental issue of equivalent actions in GFlowNets in a rigorous and comprehensive manner and proposing a much more efficient and practical solution.\\n\\nBelow, we summarize the key updates:\\n\\n- **Comparison with Ma et al. (2024)**: To further position our work in context, we made a detailed comparison to *\\u201cBaking Symmetry into GFlowNets\\u201d* paper (Section 2 and Appendix B).\\n- **Evaluation:** In response to **Reviewer 6mnv**, we included FCS metric (Silva et al., 2024) in our evaluation (Section 6.2).\\n- **Discussion:** In response to **Reviewer ynfw**, we updated \\u201cDiscussion and Conclusion\\u201d section, which includes limitations as well as implications our work has on prior work in the presence of bias (Section 7).\\n- **Notation:** We have clarified notations to distinguish between state-level and graph-level policies, eliminating overloaded terms, as suggested by **Reviwer ynfw** (Section 3.2, Section 4.1):\\n - $p_\\\\mathcal{S}$, $p_\\\\mathcal{G}$, and $p_\\\\mathcal{S}^{\\\\top}$ now denote the state-level policy, graph-level policy, and marginal state probability, respectively.\\n - For backward policy, $q_\\\\mathcal{S}$ and $q_\\\\mathcal{G}$ are used for state-level and graph-level policies, respectively.\\n - Forward and backward graph-level actions are now denoted by $\\\\overrightarrow e$ and $\\\\overleftarrow e$, respectively.\\n- **Experiments:** In response to **Reviewer 6mnv**, we have included additional results using a uniform target distribution to further validate and illustrate our approach (Section 6).\\n- **Computation:** To **Reviewer ynfw, Rf27**, a new section on computation has been added to the Appendix for completeness (Appendix G).\\n\\nOverall, we have made several revisions to enhance the paper's readability.\\n\\n### **Relation to \\u201cBaking Symmetry into GFlowNets\\u201d**\\n\\nWhile the issue of equivalent actions in GFlowNets was identified and partially addressed in the \\\"Baking Symmetry into GFlowNets\\\" paper, the discussion was limited to experimental validation, indicating that equivalent actions could lead to \\u201cpotentially\\u201d incorrect flow probabilities. In contrast, our work provides the first rigorous theoretical foundation for automorphism correction, demonstrating that this issue is not just experimental but a fundamental and systematic challenge tied to graph symmetries, both for atom-based and fragment-based generation. 
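To make the correction discussed throughout this thread concrete, the following is a minimal sketch that counts |Aut(x)| for a labeled graph and rescales a terminal reward once per trajectory. It assumes python-igraph's VF2-based `count_automorphisms_vf2` with node and edge colors (the authors report using the BLISS backend instead); the reward value and the label-to-color mapping are placeholders, and the direction of the scaling follows the thread's wording ("scale rewards by the order of automorphisms") and should be checked against the paper's exact formula.

```python
# Minimal sketch: count |Aut(x)| for a labeled graph and rescale its terminal reward.
# Placeholder reward; scaling direction taken from this thread's description.
import igraph as ig

def _as_int_colors(labels):
    # Map arbitrary hashable labels to consecutive integer colors.
    lut = {lab: i for i, lab in enumerate(sorted(set(labels)))}
    return [lut[lab] for lab in labels]

def automorphism_order(g: ig.Graph) -> int:
    """Order of the automorphism group, respecting node ('atom') and edge ('bond') labels."""
    return g.count_automorphisms_vf2(
        color=_as_int_colors(g.vs["atom"]),
        edge_color=_as_int_colors(g.es["bond"]),
    )

def corrected_reward(g: ig.Graph, raw_reward: float) -> float:
    """Apply the automorphism correction once per terminal state."""
    return raw_reward * automorphism_order(g)

# Benzene-like ring: six identical atoms with alternating bond labels (highly symmetric).
ring = ig.Graph.Ring(6)
ring.vs["atom"] = ["C"] * 6
ring.es["bond"] = ["single", "double"] * 3
print(automorphism_order(ring), corrected_reward(ring, raw_reward=1.0))
```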
\\n\\nNow, given the well-supported motivation for addressing this equivalent actions problem, the key question then becomes: **Is there an efficient solution to resolve this fundamental issue** **of equivalent actions in GFlowNets?**\\n\\n### **Novel Contributions**\\n\\nIn addition to establishing a theoretical foundation, we propose an efficient solution to this problem. Our method applies the correction only once at the end of trajectories, rather than at every transition within a trajectory. This correction involves computing the order of automorphisms, which we found to be computationally efficient even for the largest molecules in the PubChem dataset. The solution is both simple and easy to implement. Any alternative, more \\\"sophisticated\\\" approaches to this problem would, in essence, amount to approximating the order of automorphisms.\\n\\nAnother notable contribution is the introduction of a novel model likelihood estimator that accounts for equivalent actions. Since the estimation of model likelihood is a fundamental measure for all generative models\\u2014essential for evaluating model bias, performance, generalization, and more\\u2014this contribution has the potential to significantly influence future research.\\n\\n### **General Implications**\\n\\nOur findings carry significant implications, especially given that GFlowNets were initially popularized for their reward-matching capabilities. We emphasize that correcting for automorphisms is a fundamental requirement for unbiased sampling, which is critical for applications like molecule discovery. In this regard, our results highlight the need for future work to explicitly detail the methods used to address equivalent actions, ensuring reproducibility and rigorous evaluation. We believe the problem was identified previously, but its significance and seriousness have been emphasized solely in our paper.\\n\\nWe hope this provides additional clarity regarding the contributions of our work, and we would sincerely appreciate any reconsideration of the assigned score in light of these clarifications.\\n\\nRegards, \\n\\nAuthors\"}", "{\"summary\": \"This paper shows that the so-called problem of equivalent actions (appearing from graph symmetries) biases graph generation processes in GFlowNets. To tackle this, the paper proposes a simple correction procedure that scales the reward function by the number of symmetries of the associated graph (terminal state). Experiments on artificial data and molecule generation tasks aim to show the effectiveness of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Clarity**: Overall, the text is well-written and easy to follow (despite some overloaded notation);\", \"**Motivation and Relevance**: The motivation is clear, and the problem is relevant as graph (molecule) generation is one of the main applications of GFlowNets;\", \"**Flexibility**: The proposed correction procedure is flexible as it applies to different training schemes (balance conditions).\"], \"weaknesses\": [\"**Computational cost**: While the paper mentions the additional cost didn't lead to \\\"significant delays in computation\\\", it is not clear why. I believe the paper deserves a more comprehensive discussion about the computational complexity of the proposal. Also, I wonder if the proposed approach becomes prohibitive in some settings.\", \"**Experiments**: The theoretical analysis does not seem to support the claimed gains on real-world datasets. 
What are the implications of correctness to top-k diversity/reward? Also, although the paper cites ZINC250K in the Introduction, the experiments only include the QM9 dataset.\", \"**Technical novelty**: The theoretical contributions of the paper are straightforward. I wonder if the GFlowNet community already knows about the equivalent action problem.\", \"**Notation**: I found the notation overloaded, which may confuse readers unfamiliar with GFlowNets. For instance, the paper uses the same $P_F$ to refer to the graph-level, state-level policies, and the marginal distribution over terminal states (i.e., $P_F(x)$).\", \"**Limitations**: The paper does not discuss limitations.\"], \"questions\": \"1. Could the authors provide a detailed analysis of the computational complexity of the proposal? Are there environments where the proposed method becomes prohibitive?\\n\\n2. Could you provide time comparisons for the real-world experiments?\\n\\n3. The paper says \\\"A reward exponent of 1 is used for the atom-based task, and a value of 16 is used for the fragment-based task\\\". Was this choice based on prior works? If not, could you elaborate on this choice?\\n\\n4. Is this the first paper to bring attention to the \\\"action equivalent problem\\\"? Could you elaborate on the impact of your findings on previous works that use GFlowNets for graph generation? \\n\\n5. I suggest turning Theorem 2 into a Corollary of Theorem 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gentle Reminder: Discussion Period Closing Soon\", \"comment\": \"Dear Reviewer mRgy,\\n\\nWe hope this message finds you well. As the author-reviewer discussion period is set to close in two days, we wanted to kindly remind you of the opportunity to share any additional questions, comments, or suggestions regarding our work.\\n\\nWe would like to highlight our key contributions\\n\\n- **Rigorous theoretical analysis** of the \\\"equivalent action problem\\\" in GFlowNets, demonstrating that failing to account for equivalent actions introduces a systematic bias, especially in graph generation tasks involving high-symmetry objects.\\n- **Novel Correction Method**: Development of an automorphism-based reward-scaling technique to correct the bias, ensuring accurate modeling of the target distribution. This solution applies efficiently to atom- and fragment-based graph generation schemes.\\n- **Unbiased Model Likelihood Estimator**: Introduction of an unbiased estimator for model likelihood, allowing for rigorous evaluation of GFlowNet performance in generative tasks.\\n- **Efficient Implementation**: Proposed a computationally efficient method for automorphism correction, which requires only one computation per trajectory rather than at each transition, significantly reducing computational overhead.\\n\\nWe sincerely hope that the reviewer takes these contributions into account in their evaluation.\\n\\nWe would deeply appreciate your support in this process and look forward to hearing from you.\\n\\nBest, Authors\"}", "{\"metareview\": \"**Summary**: This work brings attention to the equivalent action problem with GFlowNets (an important class of generative models, inspired by reinforcement learning, for discrete structured data/graphs). 
Specifically, the authors analyse this problem theoretically, demonstrating that failing to account for equivalent actions may introduce systematic bias in the sampling distribution for atom-based as well as fragment-based graph generation. They further relate this bias to the number of symmetries associated with the graph and propose a corrective automorphism-based reward-scaling approach for unbiased sampling, providing empirical validation of its effectiveness.\\n\\n**Strengths**: The reviewers appreciated several aspects of this work, notably, the motivation and relevance of the problem, clarity of the presentation, sound theoretical analysis, flexibility of the corrective procedure to accommodate different training schemes (trajectory balance, detailed balance, and their extensions), and the empirical substantiation. \\n\\n**Weaknesses**: The reviewers also raised several concerns and questions pertaining to the additional computational overhead due to the corrective procedure, overloaded notation, lack of discussion on the limitations of the proposed approach, insufficiency of the experiments e.g. in terms of dataset size, fairness of evaluation, metrics not being aligned with goodness-of-fit hindering validation of sampling correctness, and technical novelty. They also provided several constructive suggestions. \\n\\n**Recommendation**\\nMost reviewers provided detailed, insightful reviews. However, one of the reviewers (mRgy) asked only rather generic questions about generative models and did not participate subsequently in the discussions. Therefore, I decided to not consider their evaluation in my recommendation. \\n\\nI commend the authors for their thoughtful response and additional experiments, which were generally appreciated by the reviewers, prompting many of them to upgrade their assessment of this work. In contrast to the point raised by one of the reviewers, I do not think the technical machinery needs to be sophisticated/straightforward so long as the theory provides meaningful/actionable/clear insights or interpretations, which this work does. \\n\\nDuring the response period, a public comment was posted pointing to a prior work ``Ma, Bengio, Bengio, and Zhang, Baking Symmetry into GFlowNets, NeurIPS 2023 AI for Science Workshop\\\" drawing everyone's attention. This workshop paper had clearly identified the issue of equivalent/idempotent actions in GFlowNet and proposed a closely related method for correcting the flows to address the problem, although without theoretical justification. A part of the code was also leveraged by the authors of the current work. \\n\\nTo their credit, the authors of this work acknowledged their oversight in failing to appropriately position the multiple contributions of that workshop paper (including it being the first to identify the equivalent action issue and providing valuable context for the current work). \\n\\nAt least one reviewer felt that despite the authors' response on this particular issue, the current work loses its novelty - as presented in the original submission - significantly. While I think that the theoretical formalism and the insights provided by the authors here form an important contribution in itself, I cannot disregard this concern about novelty. 
\\n\\nGiven all the facts and discussion, I believe it would only be fair to the authors of the workshop paper if this paper goes through another review cycle so that the new set of reviewers can make a more informed assessment of the merits of this paper, especially its novelty and repositioning of the contributions with respect to prior work. I'm therefore not able to recommend a positive decision for this paper at this time. However, I'd like to state that should the program chairs decide to overrule this recommendation, I won't have any (strong) objections. \\n\\nI hope the authors take this recommendation in the right spirit (though I understand it might be disappointing for them), and use all the feedback and discussion to make a stronger submission.\", \"additional_comments_on_reviewer_discussion\": \"Please see the Metareview for all the details.\"}", "{\"summary\": \"This paper focuses on Generative Flow Networks (GFlowNets), which are generative models used to produce graphs. While GFlowNet theory ensures that a fully trained model can sample from an unnormalized target distribution, the challenge lies in computing state transition probabilities, particularly due to equivalent actions that lead to the same state. The paper analyzes these equivalent actions in graph generation tasks and proposes efficient solutions to mitigate the associated challenges.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well organized and the theories are well formulated.\\n\\n2. The motivation is well introduced.\", \"weaknesses\": \"1. How does the proposed method compare with other graph generative models, such as flow-based and discrete diffusion-based models?\", \"questions\": \"N.A.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to address issues in graph generative GFlowNets that may fail to construct the target distribution due to equivalent actions. Specifically, it analyzes how discrepancies in the number of automorphism groups cause GFlowNets to incorrectly estimate the true reward. To address this, the paper incorporates the number of automorphism groups into the reward function and proves how this corrects reward underestimation. Notably, this paper also considers practical implementation for correction in fragment-based generation. Experimental results show that the proposed method better captures the target distribution.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written and easy to follow, and the proposed method is conceptually straightforward.\", \"This work is the first to address a significant pitfall in GFlowNets, i.e., errors due to equivalent actions, within primary setting of GFlowNets, i.e., graph generation.\", \"The authors provide a solid theorem for the corrected objectives showing how their global optima enable GFlowNets to construct the correct target distribution. Although the proofs consider TB and DB, these can also be easily extended to other objectives, such as subTB.\", \"The experiments are thorough and consider both important settings, namely atom-wise and fragment-wise graph generation.\"], \"weaknesses\": \"No weakness in the major flows. 
It seems that there are no errors in the proof.\", \"questions\": [\"To better highlight the pitfalls, I wonder if the authors provide or illustrate a toy-example or toy-experiments where the conventional approaches induce an incorrect generative distribution, e.g., a distribution significantly biased towards the graphs with a low number of automorphism groups.\", \"Can authors provide the experimental computational costs for computing $|\\\\text{Aut}(s)|$? I am curious about how much overhead the proposed method requires in practice, although the authors provide the time complexity in **Line 378**. Could this overhead be minor relative to time for reward computation or time for sampling trajectories?\", \"In DB-based implementation, I wonder if there might be improvement in convergence speed when we reparameterize the flow function $F(s)=\\\\tilde{F}|\\\\text{Aut}(s)|$ (like a prior flow reparameterization approach [1]), although this preserves the asymptotic optimality to induce the target distribution.\", \"---\", \"[1] Pan et al., Better Training of GFlowNets with Local Credit and Incomplete Trajectories, ICML 2023\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable feedback and suggestions!\\n\\n# Evaluation\\n### **On Evaluation**\\n\\nTo clarify, in Figure 3 (the results of the toy experiment), we computed state probabilities by enumerating all possible trajectories, rather than using Equation 3. This approach was feasible because we constrained the problem size, making the exact computation of state probabilities tractable. Additionally, we took advantage of the many overlapping sub-trajectories, which allowed us to eliminate redundant computations.\\n\\n### **On goodness-of-fit, diversity, and rewards**\\n\\nThe purpose of the metrics presented in Table 1 is to show the effects of the correction on downstream tasks, specifically in discovering diverse molecules and high-reward molecules, rather than to demonstrate the goodness-of-fit of the proposed method. To assess the goodness-of-fit, we measured Pearson correlation between $\\\\log R(x)$ and $\\\\log P_F(x)$ as shown in Figure 4. Estimating $\\\\log P_F(x)$ was possible without enumeration using empirical approximation we proposed in Equation 3. Although we understand that Pearson correlation is not a perfect measure as it is scale-invariant, it is considered as a good proxy in practice. While we appreciate your suggestion regarding the FCS metric, we are not entirely certain about its implementation, as the code is unavailable. Since the FCS metric requires estimating the marginal distribution over terminal states, we believe it could be used with our proposed estimation method in the future. We would be more than happy to use FCS if once the code becomes available.\\n\\n# Experiments\\n\\nUnlike other papers on GFlowNets, the purpose of our paper is not to simply improve the performance of GFlowNets. Rather, the work is focused on identifying the critical bias present in previous work of GFlowNets. That said, we are willing to include more experimental results in revised paper, including illustrative experiments using uniform target.\\n\\nWe excluded the result of uniform target from the paper because the Pearson correlation cannot be computed (as the uniform target has zero variance). However, we are happy to include it in the revised version. 
The plots are very similar to Figure 3, however, the result based on the number of rings.\\n\\n# Technical contribution\\n\\nWe emphasize that seemingly simple findings often have the potential to be highly impactful. We would like to highlight that our work is the first work to theoretically address the \\\"equivalent action problem,\\\" which is particularly critical when using GFlowNets to model target distributions. This problem, previously overlooked, fundamentally affects the correctness of the sampling distribution in GFlowNets.\\n\\nBeyond correcting this issue, our findings propose a new method to estimate model likelihood, which has significant implications for various graph-related tasks. Moreover, our formulation, which explicitly distinguishes between states and graphs, provides a fresh perspective that we believe can offer valuable insights into other graph-centric applications.\\n\\nAdditionally, as discussed in Appendix D, we briefly outline how our findings can be extended to improve methods that incorporate \\\"node ordering\\\" as a variable. This demonstrates that the implications of our work extend beyond GFlowNets and can influence a broader range of methodologies. We hope these contributions underscore the novelty and potential impact of our research.\"}", "{\"comment\": \"Thank you for your interest in our paper and for your insightful question!\\n\\n**Equation (1) is correct, and your reasoning aligns with our findings, assuming that the generation process permits sampling of any graph in $\\\\mathcal{G}$.**\\n\\nFor illustration, let us assume that constant rewards are assigned to terminal states, and we start with a fixed number of isolated nodes in the initial state. In this process, we are only allowed to add new edges. Your reasoning suggests that we should scale the reward by a factor of $1 / |[G]|$. For example, if the terminal graph is \\u201c\\u2460-\\u2461-\\u2462\\u201d, we should divide the final reward by 3, which corresponds to the number of configurations of different adjacency matrices (that corresponds to \\u201c\\u2460-\\u2461-\\u2462\\u201d, \\u201c\\u2461-\\u2460-\\u2462\\u201d and \\u201c\\u2460-\\u2462-\\u2461\\u201d).\\n\\nIn general, the number of different configurations equals $|[G]| = N!/|\\\\mathrm{Aut}(G)|$, so that $1 /|[G]| = |\\\\mathrm{Aut}(G)|/N!$. Since the initial state has $N$ disconnected nodes, we have $|\\\\mathrm{Aut}(G_0)| = N!$, resulting in $1 /|[G]| = |\\\\mathrm{Aut}(G)|/|\\\\mathrm{Aut}(G_0)|$. This is precisely the scaling term we proposed in the paper!\\n\\nIn practice, however, some graphs are not allowed to be sampled by design. For instance, if we sample graphs node-by-node, the next graph after \\u201c\\u2460-\\u2461\\u201d will be either \\u201c\\u2460-\\u2461-\\u2462\\u201d or \\u201c\\u2462-\\u2460-\\u2461\\u201d, but there is no way to sample \\u201c\\u2460-\\u2462-\\u2461\\u201d. To sample \\u201c\\u2460-\\u2462-\\u2461\\u201d, we would need to allow the policy to sample \\u201c\\u2462\\u201d even from the initial state. This would enlarge the action space and effectively treat node IDs as distinct node types.\\n\\nIn this context, even when using graph-level transitions to model flows, we do not \\\"allow individual graphs to represent states\\\" in such a way that each state is reachable from the initial state. However, when computing graph-level transitions, there is a risk of mistakenly treating graphs as states. 
The purpose of the statement was to explicitly distinguish between states and graphs, and it does not imply vanilla GFlowNets were in fact modeling graphs with node IDs.\"}", "{\"comment\": \"We provide additional comparisons, focusing specifically on the runtime.\\n\\nThe table below summarizes the runtime for different configurations of atom-based and fragment-based generation methods. To ensure fairness, we re-run all experiments to disable parallel computation, using single processor and GPU. Training steps were limited to 1,000, with all other settings kept consistent with the original paper. We report (mean \\u00b1 std) with three runs.\\n\\n---\\n| | ***Atom** | ****Fragment** |\\n| --- | --- | --- |\\n| No corrections (Vanilla GFlowNet) | 44.60 min \\u00b1 4.69 | 23.84 min \\u00b1 0.45 |\\n| Exact reward scaling (**ours**) | 49.47 min \\u00b1 3.14 | 27.17 min \\u00b1 2.62 |\\n| Approximate reward scaling (**ours**) | - | 24.92 min \\u00b1 2.96 |\\n| Isomorphism tests (Ma et al. [1]) | 276.96 min \\u00b1 6.28 | 385.12 min \\u00b1 12.12 |\\n---\\n\\n***Atom:** atom-based generation, with rewards given by a proxy trained on QM9 dataset.\\n\\n****Fragment:** fragment-based generation, with rewards given by a proxy that predicts binding energy to the sEH target.\\n\\n[1] Ma et al., \\u201cBaking Symmetry into GFlowNets,\\u201d 2024.\\n\\n---\\n\\nWhen no corrections are applied, the fragment-based method is faster due to its shorter trajectories. However, when exact isomorphism tests are introduced, the computational cost increases significantly. Specifically, the fragment-based method with exact isomorphism tests incurs the highest computational cost (385\\u2009min), reflecting the impact of handling larger molecules.\\n\\nOn the other hand, our method introduces minimal additional overhead, making it a practical alternative for both atom-based and fragment-based generation tasks, as the differences in runtime are within the standard deviations. Additionally, we used open-source code for the experiments, making only minor changes to the original implementation. Consequently, there is some additional overhead due to the conversion of data types. We believe this overhead could be eliminated if our method were seamlessly integrated into the pipeline.\\n\\nWe hope this provides additional context regarding prior work and our contributions.\"}", "{\"comment\": \"After reviewing the above clarification and existing work \\\"Baking Symmetry into GFlowNets\\\", I believe this work loses novelty in its motivation but still has a basic value to be accepted due to (1) technical improvements, (2) extensive evaluation, and (3) theoretical contributions. Therefore, my score is 6 at this time.\"}", "{\"comment\": \"Thanks for appreciating our work and providing valuable questions!\\n\\n# Illustrative Example\\n\\nThanks for the suggestion! We included the result of the toy experiment for illustrative purposes, but found that it may be difficult to understand without careful reading. In the revised version, we will include a similar experiment with uniform target distribution.\\n\\n# Computaional Cost\\n\\nWhile computing the exact $ |\\\\text{Aut}(s)| $ has inherent complexity, as discussed in the paper, this complexity is unavoidable for exact computation. However, irrespective of the computational cost, fixing the sampling bias due to the ignorance of equivalent actions is a fundamental issue that needs to be resolved. 
**This correction introduces an inherent computational cost, but it is necessary to maintain the consistency of sampling**. In practice, fast heuristic algorithms often perform well, particularly for relatively small graphs, and significantly reduce the computational overhead associated with calculating $ |\\text{Aut}(s)| $.\\n\\nFurthermore, our proposed method requires computing automorphisms only once per trajectory. We present additional experimental results measuring compute time below. Note that the scale of the experiments in our paper corresponds to QM9 and ZINC250k.\\n\\n---\\n| Dataset | Sample size | Avg. number of atoms (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _bliss_ (mean \\u00b1 std) | Compute time for \\\\|Aut(s)\\\\| using _nauty_ (mean \\u00b1 std) |\\n|-|-|-|-|-|\\n| QM9 | 133885 | 8.80 \\u00b1 0.51| 0.010 ms \\u00b1 0.008 | 0.019 ms \\u00b1 0.079|\\n| ZINC250k | 249455 | 23.15 \\u00b1 4.51| 0.022 ms \\u00b1 0.010| 0.042 ms \\u00b1 0.032|\\n| CEP | 29978 | 27.66 \\u00b1 3.41| 0.025 ms \\u00b1 0.014| 0.050 ms \\u00b1 0.076|\\n| *Large | 304414 | 140.07 \\u00b1 49.38|-| 0.483 ms \\u00b1 12.600|\\n---\\n\\n*Large: the largest molecules in PubChem, data retrieved from https://github.com/danielflamshep/genmoltasks. This data is used in the paper \\u201cLanguage models can learn complex molecular distributions.\\u201d\\n\\n**Experiments were conducted on an Apple M1 processor.\\n\\n---\\n\\nCompared to sampling trajectories, which involves multiple forward passes through a neural network, the compute time for $ |\\text{Aut}(s)|$ is negligible. For comparison, we report the speed of molecular parsing algorithms measured using ZINC250k: 0.06 ms \\u00b1 0.70 (SMILES \\u2192 molecule) and 0.04 ms \\u00b1 0.05 (molecule \\u2192 SMILES). The combination of the two parsing steps is often used to check the validity of a given molecule in various prior works. In other words, computing $ |\\text{Aut}(s)|$ is an order of magnitude faster than the validity-checking algorithm.\\n\\nWe used the *bliss* algorithm in our paper. It is easy to use, as it is included in the igraph package, and it is fast enough for our purposes. If molecular symmetries grow, such as when symmetric fragments are repeated in polymers, we can still count automorphisms in a few milliseconds using the *nauty* package, as can be seen in the table. We observed that the pynauty package does not natively support distinguishing between different edge types, requiring us to transform the input graphs by attaching virtual nodes to handle this limitation. The reported time in the table reflects these preprocessing steps.\\n\\nWhile we believe the compute time is already minimal for current applications, we provide two more recipes to further improve the run time. \\n\\n- Data processing tasks can be easily parallelized across multiple CPUs. Since GFlowNet is an off-policy algorithm, $ |\\text{Aut}(s)|$ can be computed concurrently with the policy's learning process.\\n- For large graphs, fragment-based generation is highly likely to be employed. In such cases, we can utilize an approximate correction formula, as outlined in the paper.\\n\\nIn conclusion, the computational overhead of computing automorphisms is, in practice, minor relative to the computation of the entire pipeline.\\n\\n# Effects of Reparameterization\\n\\nGreat question! In our preliminary experiments, we observed no effect of reparameterization on convergence speed.
While we are open to further investigation, we offer one possible explanation.\\n\\nWhen using a fixed backward policy, we have a unique state flow function, denoted as $F(s)$. This is the function we obtain if corrections are made at every step. On the other hand, if corrections are applied at the end of trajectories, the flow function itself must learn to correct automorphisms. In this case, the flow function becomes $\\\\tilde F(G) = F(s)|\\\\text{Aut}(s)|$ (see figure 6 in the paper). \\n\\nIf we decompose $\\\\tilde F(G)$ into two parts, namely $F(s)$ and $|\\\\text{Aut}(s)|$, it is may happen that the challenging part of the learning process lies in $F(s)$, which is related to reward function $R(s)$. This could explain why intermediate reward signals led to faster learning in previous experiments, whereas intermediate corrections may require further investigation.\\n\\nWe hope this explanation addresses your questions.\"}", "{\"comment\": \"I have read the authors' rebuttal and other reviewers' comments, as well as Emmanuel's public comment. I believe this work loses significant novelty in light of \\\"baking symmetry into GFlowNets.\\\" Some of my concerns were not sufficiently addressed, e.g., using a proper correctness measure for large supports and promoting a comparison using empirical distributions directly (instead of Eq. 3), among others. Therefore, I am keeping\\u00a0my\\u00a0score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"### **Comparison with other graph generative models, such as flow-based and discrete diffusion-based models**\\n\\nWe acknowledge the reviewer's interest in comparing our proposed method with other graph generative models, such as flow-based and discrete diffusion-based models. We provide a brief overview to highlight the distinctions:\\n\\n**Flow-Based Models:** These models, like normalizing flows, transform simple base distributions into complex ones through a series of invertible and differentiable mappings. This approach allows for exact likelihood computation and efficient sampling, making them effective in continuous data domains such as image generation.\\n\\n**Discrete Diffusion-Based Models:** These models generate data by learning to reverse a noising process, starting from noise and progressively refining it to match the target data distribution. They have been particularly successful in generating high-quality images and have been extended to other domains.\\n\\n**GFlowNets:** In contrast, GFlowNets conceptualize generation as a sequential decision-making process, constructing complex objects like graphs through sequences of actions. They aim to sample structures proportional to a given reward function, facilitating the discovery of diverse high-reward structures. This approach is particularly advantageous in scenarios where the target distribution is unnormalized or difficult to sample from directly.\\n\\nPlease note that our work focuses on addressing the unique challenges of equivalent actions in GFlowNets (rather than improving the performances of other graph generative models), improving the robustness and sampling efficiency of GFlowNets in tasks where such challenges are critical bottlenecks, such as graph generation. 
We believe this distinction is important and will clarify it further in the revision.\"}", "{\"title\": \"A question to the authors\", \"comment\": \"Dear authors,\\n\\nI recently stumbled upon your paper and, despite being nicely written, I am struggling to understand why Equation (1) is the right way of correcting the automorphism-induced bias in a GFlowNet. To clarify my point, I will assume that the (state-level) reward function is constant, i.e., the target distribution is uniform and that the graphs are unlabeled. \\n\\nIn this case, a naive implementation of a GFlowNet (on the graph-level state graph, $(\\\\mathcal{G}, \\\\mathcal{A})$, with unmodified reward function) would sample each state $s$ in proportion to the size of its equivalence class when balanced, namely, $p_{T}(s) \\\\propto |s|$, instead of uniformly at random. Under the flow network epistemology, the reason for this is that the terminal flow associated to the equivalence class $s$ equals the sum of the terminal flows associated to each of its members. Hence, as the authors noticed, the sampling distribution is inherently biased. \\n\\nWe both disagree, however, in how to eliminate this bias. On the one hand, I would _reduce_ the reward associated to a graph $G$ by a factor of $1/|[G]|$. On the other hand, the authors _increase_ this same reward by a factor of $|\\\\text{Aut}(G)|$ in Equation (1). Nonetheless, in doing so, each state $s$ would be sampled proportionally to $|\\\\text{Aut}(G)| \\\\cdot |s|$, as explained, and the bias would persist. In this regard, it is mostly unclear to me how plugging the equation in Theorem 1 into the TB loss solves the biased distribution problem. Also, it appears to me that the empirical analysis would yield the same conclusions if the scaling factor $|\\\\text{Aut}(G)|$ was replaced by any discrete-valued isomorphism-invariant function. \\n\\nAdditionally, the text contains seemingly conflicting statements, e.g., \\\"If we allow individual graphs to represent states, the equivalence class of a larger graph will be sampled exponentially more often.\\\" and \\\"If we do not scale the reward, we are effectively reducing the rewards for highly symmetric graphs\\\". Does the unmodified GFlowNet tends to sample highly symmetric graphs more or less frequently? \\n\\nImportantly, I may have failed to properly understand this work, and I hope my comments above do not disrupt the review process. Nevertheless, I would be happy to understand the issue with my reasoning. :)\"}" ] }
BkLLtZX7AZ
Spatially-aware Photo-realistic Face Relighting using Joint Embedding of Light Properties
[ "Hemanth Pidaparthy", "Tezuesh Varshney", "Pavan Sudheendra" ]
Single image face relighting is the challenging problem of estimating the illumination cast on images by a point light source varying in position, intensity and possibly colour. Learning the relationship between the light source properties and the face location is critical to the photo-realism of the estimated relit image. Prior works do not explicitly model this relationship, which adversely affects the accuracy and photo-realism of the estimated relit image. We present a novel framework that explicitly models this relationship by integrating a novel light feature embedding with self-attention and cross-attention layers in a custom image relighting network. Our proposed method estimates more photo-realistic relit images with accurate shadows and outperforms prior works despite being trained only on synthetic data. Our method is able to generalize to out-of-training light source positions and also achieves unsupervised adaptation from synthetic to real images.
[ "Face Relighting", "Joint Light Property Embedding", "Realistic Shadows" ]
https://openreview.net/pdf?id=BkLLtZX7AZ
https://openreview.net/forum?id=BkLLtZX7AZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wUsYSrsOtC", "hMDOwBaRWA", "I9G1FhuobM", "8ZxfDlB6gP", "1JXCanK20k" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731558301711, 1730003828575, 1730338333335, 1730590175682, 1730218932615 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3841/Authors" ], [ "ICLR.cc/2025/Conference/Submission3841/Reviewer_xt6i" ], [ "ICLR.cc/2025/Conference/Submission3841/Reviewer_ApxL" ], [ "ICLR.cc/2025/Conference/Submission3841/Reviewer_ST3n" ], [ "ICLR.cc/2025/Conference/Submission3841/Reviewer_gxNc" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their detailed and thoughtful feedback. We will focus on improving the idea and addressing all the comments.\"}", "{\"summary\": \"This paper addresses the problem of single-image face relighting by generating images illuminated by a point light source with variable position, intensity, and potentially color. The proposed framework explicitly models the relationship between light source properties and face orientation by integrating a light feature embedding within self-attention and cross-attention layers in an image relighting network. Trained solely on synthetic data, the method produces relit images with shadows that outperform previous approaches. It demonstrates generalization to out-of-training light source positions and achieves unsupervised adaptation from synthetic to real images.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper introduces a lighting embedding that enables joint modeling of light source properties, specifically position, color, and intensity. The image relighting network explicitly models the relationship between light source characteristics and the face\\u2019s spatial configuration. Experimental results indicate that the proposed method surpasses existing techniques on two benchmark datasets.\", \"weaknesses\": [\"Overall, I think the proposed method lacks novelty, and the experimental results are unconvincing.\", \"Utilizing a 7D vector (position, color, intensity) to represent point light sources is a basic form of lighting representation. Applying positional embedding (PE) to this 7D vector for lighting embedding is not novel, as PE for high-frequency encoding is common in transformers and neural radiance fields (NeRFs). The network architecture is a simple encoder-decoder, and the use of cross-attention and self-attention layers in the relighting network is also straightforward. The design does not contribute new insights to the field.\", \"The evaluation is insufficient. The relighting results are hard to assess fully, as no video sequences are provided to demonstrate images illuminated under rotated lighting. The visual results are limited and unsatisfactory, and the quantitative improvements over Pidaparthy et al. (2024) are limited.\", \"The synthetic-to-real adaptation could be validated more robustly with additional examples across diverse real images and lighting conditions, as current qualitative results do not sufficiently demonstrate robustness in varied real-world scenarios.\", \"Claims regarding edge device optimization remain unsupported by runtime benchmarks.\"], \"minor\": [\"Certain visual figures (e.g., Fig. 
5) could benefit from clearer labeling to improve the clarity of comparisons with prior methods.\"], \"questions\": [\"Clarification on the novelty of the proposed approach\", \"Lack of sufficient evaluation\", \"Discussion of runtime\", \"Refer to weaknesses for details.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This submission presents a novel method to model the relationship between lighting attributes (color, intensity, position) and the face itself (orientation, semantic location) in an attempt to improve the relighting performance. This is done by using a lighting network as well as a convolutional autoencoder combined with multi-head self and cross attention to model the relationships between face features and lighting features. Experiments demonstrate state-of-the-art performance on two datasets quantitatively and qualitatively: Multi-PIE (a controlled dataset) and Real Human (out of training distribution lighting conditions). The method can also handle different light colors, which is largely absent from relighting methods that do not leverage real captured light stage data and environment maps.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"State of the art performance quantitatively on Multi-PIE and Real Human compared with several existing baselines.\\n\\nThe ability to handle different light colors is absent from many methods trained without real light stage data and environment maps.\", \"weaknesses\": \"There is a complete lack of ablations in the paper. Minimally, a natural ablation that I would expect is whether the cross attention between image features and lighting features actually leads to a quantitative and qualitative improvement in performance. It is otherwise hard to gauge the significance of the claimed contributions.\\n\\nOn a related note, the second contribution about modeling the relationship between light color, intensity, and position is difficult to accept given the environment map exists as a lighting representation and models all three components. If the authors wish to highlight that they don't require environment maps or light stage data during training, this should be reflected in the contributions to avoid confusing readers. \\n\\nIn the experiments section, why is there no comparison with the DiFaReli: Diffusion Face Relighting method? It is one of the most recent in-the-wild face relighting methods. The authors claim that it adds additional inference time but there are no experiments in the paper or claims to novelty related to inference time. Thus, it should be compared against quantitatively and qualitatively. \\n\\nQualitatively, the relighting results of this work do not strike me as noticeably better than prior work, especially compared with Pidaparthy et al. (2024). To me, many of the images (even with ground truth) seem to have comparable quality or it's unclear from the provided information which is better. \\n\\nThere are some mistakes in the paper. For example, the methods of Hou et al. 2021 and 2022 do not use the same dataset: please examine this carefully. There are errors related to this both in Table 1 and the introduction. \\n\\nThere is also almost no detail about the Real Human dataset except that it contains out of distribution lighting conditions. 
It would be better to be more specific so that readers are more convinced of the comprehensiveness of evaluations.\", \"there_are_several_important_citations_missing_in_this_work_from_the_relighting_domain\": \"-COMPOSE: Comprehensive Portrait Shadow Editing (ECCV 2024)\\n\\n-SwitchLight: Co-design of Physics-driven Architecture and Pre-training Framework for Human Portrait Relighting (CVPR 2024)\\n\\n-NeRFFaceLighting (SIGGRAPH 2023)\", \"questions\": \"I would suggest rewording or rethinking what the contributions in this work are. As it stands, I am not convinced that the claimed contributions are valid (e.g. second contribution about modeling relationships between lighting attributes when environment maps exist). Please check carefully whether among face relighting methods that do not require environment maps, something similar has been done. If not, reword this contribution to explicitly mention that it is novel w.r.t methods that do not require env maps and real captured light stage data.\\n\\nI would also suggest creating an ablation study table to better convince readers of the paper's contributions. As I mentioned in Weaknesses, I'd minimally expect an ablation with and without the cross attention layers between image and lighting features. In addition, experiments against additional recent baselines such as DiFaReli would be appreciated. This includes quantitative and qualitative comparisons. For quantitative, we can use the metrics presented in Table 1. For qualitative, I'd like to see the results on the images presented in Figure 5. As of now, the only recent baseline is Pidaparthy et al. (2024). \\n\\nPlease include more information about the Real Human dataset since this is lacking in the submission. Ideally we should discuss more about the range of lighting conditions, types of poses/expressions included and any additional augmentations found in the dataset. Simply saying the conditions are outside of the training distribution is vague and unclear. \\n\\nPlease correct mistakes in the paper as I mentioned under \\\"Weaknesses\\\" and conduct a more comprehensive review of recent relighting work to avoid missing important references. This includes the datasets used in Hou et al. 2021 and 2022. 2021 used the DPR dataset and Yale dataset. 2022 used CelebA-HQ: please check this carefully.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes to more explicitly model the relationship between image and light features, to achieve better single-image face relighting. Overall, the model encodes RGB as well as luminance into image features and (positionally encoded) light properties including XYZ, RGB, and strength into light features, perform cross-attention between them, and finally decode the output relit image.\\n\\nThe authors made a synthetic dataset with Blender in an OLAT setup where they used 7 \\u201cmaximally separated light colors.\\u201d Training losses include RGB reconstruction loss, lighting loss, and a perceptual loss via VGG. 
The authors also discussed their thougts of not using a SH representation for lighting, as commonly used by other works.\", \"other_technical_details_that_stood_out_to_me_include\": \"(1) instead of an MLP, the authors reshaped the embeddings into maps and performed convolution, and (2) they derived KV from lighting features and Q from image features, which is surprising (more below).\\n\\nThe authors show some qualitative and quantitative comparisons against baseline methods plus some ablation studies in the supplemental material PDF.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"I like the general idea of guiding networks to reason more about the light-face relationship and think it\\u2019s essential to eventually enable physically-based relighting with accurate shadows, specular highlights, and even sub-surface scattering.\\n\\nSince lighting is represented with XYZ, this model is theoretically capable of modeling spatially-varying effects (though no such result was shown, unfortunately).\\n\\nWriting and presentation are clear, making this paper easy to follow.\", \"weaknesses\": \"For a relighting paper like this, it\\u2019s almost compulsory to provide result videos where a moving light illuminates a face as viewed from a fixed view point. This not only shows how stable/predictable the model is performing for nearby lights but also shows off its theoretical capability of modeling spatially-varying effects.\\n\\nFrom the image results presented in the paper, I don\\u2019t think the results produced by this method are high-quality. Specifically, the specular highlights are missing/unnatural, and the shading/shadows appear irregular, unlike those cast by natural objects. Admittedly, Table 1 shows this approach achieves the best performance, but Figure 5 shows the proposed method and the baselines are performing, IMO, equally non-pleasing results. \\n\\nFigure 6, though, shows reasonable results for the middle two columns where the subjects are in front of a clean black background, which resembles the background used by the authors in producing the synthetic dataset for training. This hints at the limitation of training on a synthetic dataset like this, which makes the model struggle with real-world images.\", \"questions\": \"I am not convinced of the reasons why SH is not desirable here. First, SH is unable to model spatially-varying effects, true, but this paper doesn\\u2019t show any of such effects either\\u2026 Also, the authors claim SH is unable to model RGB lighting, but you can still use SH for strength and RGB as a uniform \\u201cscale\\u201d applied to the whole SH map. Can the authors clarify if my understanding is correct?\\n\\nI was expecting KV to be derived from the image features and Q from the lighting features, because intuitively, you want to query with a new lighting condition (via Q) and come up with an answer by \\\"combining\\\" input image patches (via KV), but the authors did this the other way around. What are the intuitions and rationales behind this choice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a face-relighting method. The method takes in a reference image under another lighting with the image\\u2019s luminance and outputs the relit image under the target lighting. 
The authors claim that using colored OLAT images can enhance performance and prepare a dataset with seven fixed colors. A new lighting network that consists of CNN modules and MDHA modules is also proposed to convert the lighting condition (including the point lighting\\u2019s intensity, position, and color) into a high-dimension feature. This feature is then input with the other inputs into the residual convolutional autoencoder to output the final relit image. Evaluations show that the proposed method outperforms sota methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper adds lighting color into the lighting embedding with a new lighting encoder to [Pidaparthy et al., 2024] and achieves higher metrics on test datasets.\", \"weaknesses\": \"1. The proposed method is very incremental to [Pidaparthy et al. (2024)]. The overall pipeline (learned lighting encoding + residual conv ae for relit image generation) is unchanged. And the residual convolutional autoencoder component is the same. Thus, the novelty is low.\\n2. If efficiency is the goal (can run on edge devices), it should be explicitly marked in the title, abstract, and introduction. The inference speed should also be compared with other methods, e.g. inference time per image, model size and even power-consumption during inference.\\n3. The data quality is very low.\\n 1. The 3D models in the collected dataset do not have important PBR information for realistic portrait relighting, e.g. accurate specular modeling and subsurface scattering. The quality of the referenced paper for data source [Pidaparthy et al. (2024)] is below the bar of top conferences like ICLR.\\n 2. The color of the lighting in the training data is not continuous, why use separated colors instead of continuously sampled ones? Sampling only the maximally separated ones can cause problems in interpolation. Please justify this.\\n4. I don\\u2019t see the connection between adding lighting color and enhancing accurate shadow or help learning the relationship between the light source properties and the face location. Shadow/visibility has nothing to do with lighting color but only with lighting positions.\\n5. L213: \\u2018SH does not account for model light color\\u2019: you can concatenate the lighting color with SH coefficients.\", \"typo\": \"L147: artifacts?\", \"questions\": \"1. In section 6 it is mentioned that a segmentation mask is provided to the model, but this input is not mentioned in section 4. Do you input a segmentation mask or not?\\n2. Supplement ablation variant 1, do you provide lighting color here?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BkJrXT3e5T
CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation
[ "Dejia Xu", "Weili Nie", "Chao Liu", "Sifei Liu", "Jan Kautz", "Zhangyang Wang", "Arash Vahdat" ]
Recently, video diffusion models have emerged as expressive generative tools for high-quality video content creation readily available to general users. However, these models often do not offer precise control over camera poses for video generation, limiting the expression of cinematic language and user control. To address this issue, we introduce **CamCo**, which allows fine-grained Camera pose Control for image-to-video generation. We equip a pre-trained image-to-video generator with accurately parameterized camera pose input using Plücker coordinates. To enhance 3D consistency in the videos produced, we integrate an epipolar attention module in each attention block that enforces epipolar constraints on the feature maps. Additionally, we fine-tune CamCo on real-world videos with camera poses estimated through structure-from-motion algorithms to better synthesize object motion. Our experiments show that CamCo significantly improves 3D consistency and camera control capabilities compared to previous models while effectively generating plausible object motion.
[ "Video Generation", "3D Generation" ]
Reject
https://openreview.net/pdf?id=BkJrXT3e5T
https://openreview.net/forum?id=BkJrXT3e5T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vRAoDxryFd", "upMklf0NZF", "odzW8itae4", "TLl2CrGcK5", "KpvoqLVj30", "FREihxlIxz", "C5tktFZTX0" ], "note_type": [ "official_review", "meta_review", "official_review", "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1730639128908, 1734724434884, 1730161550256, 1730583139154, 1737523477461, 1730572929827, 1730674062911 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1962/Reviewer_uLsi" ], [ "ICLR.cc/2025/Conference/Submission1962/Area_Chair_HgHW" ], [ "ICLR.cc/2025/Conference/Submission1962/Reviewer_XEZi" ], [ "ICLR.cc/2025/Conference/Submission1962/Reviewer_wzan" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1962/Reviewer_NugB" ], [ "ICLR.cc/2025/Conference/Submission1962/Reviewer_idaw" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces CamCo, a novel framework for camera-controllable, 3D-consistent image-to-video generation, enabling fine-grained control over camera viewpoints while ensuring geometric consistency in the generated videos. This is achieved by: 1. camera pose parameterization with Pl\\u00fccker Coordinates; 2) epipolar attention to improve 3D consistency; 3) performing data curation and fine-tuning for dynamic Scenes using SFM. Results show a step forward comparing with prior camera controllable video generation in terms of accuracy of cameras and 3D consistency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors implement a data curation pipeline that annotates in-the-wild videos with estimated camera poses using structure-from-motion algorithms. This enhances the model's ability to generate plausible object motion in addition to camera movements, addressing the challenge of synthesizing dynamic scenes.\\n2. The paper provides thorough quantitative and qualitative evaluations, demonstrating that CamCo outperforms baseline methods in terms of visual quality, camera controllability, and geometric consistency. Metrics like FID, FVD, and COLMAP error rates support these claims.\\n3. The inclusion of ablation studies validates the effectiveness of the proposed components, such as the Pl\\u00fccker coordinate parameterization and the ECA module. This strengthens the paper's contributions by showing the impact of each component.\", \"weaknesses\": \"1. The biggest weakness of the paper is its technical contribution, its main designs are Pl\\u00fccker coordinates and epipolar attention, but none of these is exactly novel, even on the constrained domain of camera controllable video generation --- the former was used in CameraControl [1] and the later was used in Collaborative Video Diffusion [2]. The authors should discuss what is novel about their approach while using these techniques.\\n2. Without sufficient dynamic training data, the model tends to overfit to static scenes with minimal object motion. Although the authors address this by curating additional dynamic data, it indicates a reliance on data quality and diversity --- this is especially concerning given that the authors choose do develop their model on SVD, a model that is known to generate very limited motions.\\n3. The quantitative evaluation primarily uses FID, FVD, and COLMAP error rates. While these are standard metrics, they may not fully capture perceptual quality, temporal coherence, or user satisfaction. 
For instance, FVD is known to be biased towards good qualitative single frames without taking the overall motion coherence into account. Incorporating additional metrics such as de-biased FVD [3] or user studies could provide a more comprehensive assessment of the model's performance.\\n\\n\\n[1] He et al. CameraCtrl: Enabling Camera Control for Text-to-Video Generation, in arXiv, 2024.\\n\\n[2] Kuang et al. Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control, in NeurIPS, 2024.\\n\\n[3] Ge et al., On the Content Bias in Fr\\u00e9chet Video Distance, in CVPR 2024.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Summary:\\nThe paper tackles the important problem of camera-controlled image-to-video generation. It achieves this by using Pl\\u00fccker coordinates to parameterize camera pose input and an epipolar attention module to enforce epipolar constraints. With fine-tuning on real videos and estimated camera poses, the experiments show improved 3D consistency and camera trajectory control compared to prior methods.\", \"strength\": [\"The exposition is good. The paper is easy to read and the figures are well-prepared.\", \"Improved results over existing controllable camera image-to-video generation results\"], \"weakness\": [\"Limited technical contributions.\", \"Foreground dynamics are limited (due to the use of epipolar constraints)\"], \"justification\": [\"Four reviewers are leaning negative about the paper, primarily due to the limited contributions from the paper. Unfortunately, the authors did not engage with the reviewers in the rebuttal period. As the concerns from the reviewers are not resolved, the AC finds no ground to accept.\"], \"additional_comments_on_reviewer_discussion\": \"The authors did not provide rebuttal and answer questions from the reviewrs.\"}", "{\"summary\": \"The paper introduces a mechanism that adds camera control to a text-to-video diffusion model, i.e., SVD. For this purpose, the authors propose to finetune the model on the WebVid dataset using a conditioning mechanism that uses Plucker coordinates and epipolar attention to improve consistency. The proposed method is shown to outperform the MotionCtrl baseline.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": \"The problem this method tackles is an important and impactful one. The manuscript is very clear and well written. The evaluation is adequate and the proposed method outperforms the shown baselines quantitatively. The qualitative results are certainly not as impressive as recent video generation models, like sora, but adequate given the limited capabilities of the SVD base model.\", \"weaknesses\": \"In my assessment, the main weakness of this submission is a lack of novel contributions. This paper is not the first to propose camera conditioning for text-to-video generation (e.g., line 93), it does not introduce epipolar attention, and it does not introduce Pucker coordinates. Similar to other works in this general area this is a \\\"systems paper, which combines several known components into an adapter network. Systems papers are important and valuable, but only if they enable a new capability or solve a problem in some novel and better way. 
This is not the case here, because very similar systems have been described before, like MotionCtrl, and the quantitative improvements of this work are marginal (see e.g. Table 1). Without a clear technical contribution on the methods side and only marginal improvements for the targeted application, it seems difficult to get excited about this submission.\\n\\nWhat I also find concerning is that the CameraCtrl paper is only tangentially mentioned. This paper has been on arXiv for more than 7 months and basically claims the same systems-level contributions, including Plucker conditioning. CameraCtrl is not included in the baseline comparisons. The authors claim that it's concurrent work, but in an area that is so fast-paced, I find it difficult to convince myself that a paper that has been online for such a long time should be treated as being concurrent. Regardless, this is not my primary concern.\", \"questions\": \"How does the proposed method compare with the CameraCtrl approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper extends the I2V model SVD by adding camera control. The proposed method, CamCo, parameterizes input camera poses as Plu\\u0308cker coordinates and feeds the condition into temporal attention and the newly added Epipolar attention layers. These two layers are tuned to teach the network how to react to the provided 3D camera condition, while the remaining layers, e.g., self-attn. layers, are frozen to retain the quality of the generated videos.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method is explained clearly. The authors compare with a few baselines and show the best adherence to the input camera according to Table 1 (although the most important one is missing). The curated dynamic video dataset can be a good contribution if released.\", \"weaknesses\": \"1. Despite the effort of annotating the dynamic dataset WebVid, from the results in the suppl. page, the foreground dynamics are still largely lost. Even the eagle example doesn't show prominent motion; in another example where a bird flies above a lake, the proposed method does produce more object translation than baselines, but the object size is fairly small. Arguably, the proposed method still suffers from the common problem shared with the state-of-the-art camera-conditioned methods, i.e., loss of foreground dynamics. Any idea how to improve here?\\n2. Limited novelty: Epipolar attention and Plu\\u0308cker coordinates are very standard in the 3D generation field, e.g., [1]. Existing video generation methods have applied one of them, if not both, to facilitate camera conditioning, e.g., CameraCtrl also adopts Plu\\u0308cker coordinates to parameterize cameras. If the ideas are similar, I expect to see in-depth discussion/analysis of why one is superior to the other. The submission nonetheless compares with CameraCtrl only \\\"qualitatively\\\", not quantitatively. CameraCtrl released its code a while back and this submission follows its evaluation metrics, so a quantitative comparison should not be too difficult. \\n\\n\\n[1] Kant et al., SPAD: Spatially Aware Multiview Diffusers, CVPR24. https://yashkant.github.io/spad/\", \"questions\": \"1. Why is epipolar attn. inserted before temporal attn., not after? Any intuition or empirical evidence?\\n2. Why doesn't training with WebVid result in a ShutterStock watermark?
As far as I know, a big portion of WebVid videos contain a \\\"ShutterStock\\\" watermark. In fact, WebVid is also not publicly available anymore. Any method trained with the pre-downloaded copy (like the authors clarify in L546) is hard to reproduce. If any preprocessing is performed to prevent the watermark from emerging in the generated videos, the authors should disclose and explain the details. \\u00a0\\n3. Table 3 doesn't have a corresponding discussion. L.505-506 says Table 4, but I don't see Table 4 anywhere, so I assume it's a typo?\\n4. Will the curated Particle-SfM annotations for dynamic videos be released?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a method to condition a pre-trained image to video model with explicit 3D camera control. The camera information is represented as plucker embeddings and features extracted from these are provided as input to the temporal layers. In addition, an epipolar attention layer is introduced to better preserve the geometry/rigidity of the reconstructed scenes. The method is trained on SVD using camera parameters obtained from both static and dynamic scenes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The presentation is clear.\", \"The ablation studies are good. The metrics used to evaluate the method are reasonable. A good set of visual results are provided in the supplementary.\", \"The design choices of using plucker embeddings and epipolar attention are reasonable.\"], \"weaknesses\": [\"Many pieces of the work have been also discussed in other previous/concurrent work. Use of plucker embeddings for cameras is becoming a standard. The use of epipolar attention has been introduced in previous work that tackle multi-view generation. Hence, although reasonable, the paper does not introduce very specific novel contributions.\", \"While the use of epipolar attention improves scene rigidity, it is not discussed at all how this affects the dynamic videos. The epipolar constraints would not hold for objects that are moving.\", \"The dynamic video examples shown in the supplementary -seem to have relatively low motion.\"], \"questions\": [\"As mentioned above, how do the epipolar constraints affect the dynamic video cases? (also related to the last comment above)\", \"When comparing to CameraCtrl (which seems closely related), it seems the text 2 video version of CameraCtrl is used, this paper also shows results on SVD. Are the comparisons done for both models using SVD?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to address the image-to-video generation with precise camera control. The author parameterizes the camera pose using Pl\\u00fccker coordinates and adopts a epipolar attention module to improve 3D consistency. The author further augments the training set with a curated dataset to better capture object movements. Experiments on different source datas demonstrate the effectiveness and generalization of the model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Good presentation**\\n\\nThe paper is easy to read and have good figures to show the method. 
Overall the pipeline is reasonable and each module is clear introduced.\\n\\n**Good experimental results**\\n\\nThe model shows superior results over existing controllable camera image-to-video generation results, with much lower COLMAP error and FVD. \\n\\n**Good generalization performance.**\\n\\nI appreciate the author conduct rich experiments on multiple source unseen data to show the generalization ability. The author also shows the good 3D consistency in the generated videos.\", \"weaknesses\": \"**Very Limited Contribution**\", \"the_2_fundamental_modules_introduced_in_this_paper\": \"Pl\\u00fccker embedding and epipolar attention are all commonly used techniques. For Pl\\u00fccker embedding, previous work in CameraCtrl (He et al., 2024) also used the same technique. Even their used for text to video generation other than image-to-video, but the key techniques are the same, that is how to better incorporate the camera pose into the video generation model other than using R and t. Also works in 3D generation[1, 2] also uses the same one. Similar for epipolar attention used in (Tseng et al., 2023). Compare to Tseng, one improvement is the efficiency of the attention. However, I could not find experimental support to show the improvement of the speed or training time. Besides, the author doesn't compare to Tseng's attention, and I could not tell is there more significant difference with other implementations. Also for the dataset augmentation, it seems the pipeline directly follows MotionCtrl.\\n\\n\\n**Training set**\\n\\nGiven the author augments the training set, but in order to demonstrate the effectiveness of each proposed module, the comparison to MotionCtrl should be conducted on the same training set. The author should further mention this in the paper to ensure the improvement is coming from the better camera encoding and epipolar attention other than high-quality data sources. \\n\\n\\n\\n**Object and camera movement decomposition**\\n\\nMotionCtrl offers the flexibility to decompose the camera and object motions in generation. However, it seems the proposed method could not achieve this. Also the object motion shows in the paper and video seems very limited. I am expect to see more objects with higher dynamic motion for comparison.\\n\\n\\n\\n\\n[1] Chen et al. Ray Conditioning: Trading Photo-consistency for Photo-realism in Multi-view Image Generation. ICCV 2023\\n[2] kant et al. SPAD : Spatially Aware Multiview Diffusers. CVPR 204\", \"questions\": \"**MotionCtrl**\\n\\nMotionCtrl offers user-specified object motions, and they show good object motions with custom trajectory. Why in the comparison section of the paper, the MotionCtrl always output very limited object motions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
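**Illustrative sketch: Plücker ray coordinates.** The reviews and metareview above repeatedly refer to parameterizing camera pose with Plücker coordinates. A minimal NumPy sketch of the standard per-pixel construction is given below for reference; it is not code from CamCo, CameraCtrl, or any other system discussed here, and the (direction, moment) channel order is only one common convention.

```python
import numpy as np

def plucker_rays(K, R_w2c, t_w2c, H, W):
    """Per-pixel Plücker coordinates (direction, moment) for an H x W view.

    K     : (3, 3) camera intrinsics.
    R_w2c : (3, 3) world-to-camera rotation.
    t_w2c : (3,)   world-to-camera translation.
    Returns an (H, W, 6) array holding the unit ray direction d and the
    moment m = o x d, where o is the camera center in world coordinates.
    """
    o = -R_w2c.T @ t_w2c                                  # camera center in the world frame

    # Pixel grid sampled at pixel centers, in homogeneous image coordinates.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)      # (H, W, 3)

    # Back-project to rays, rotate into the world frame, and normalize.
    d = pix @ np.linalg.inv(K).T                          # camera-frame directions
    d = d @ R_w2c                                         # world-frame directions (R^T d)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    m = np.cross(np.broadcast_to(o, d.shape), d)          # moment encodes the ray's position
    return np.concatenate([d, m], axis=-1)
```

Stacked per frame, maps like these give every pixel a pose-dependent 6-channel signal, which is why they are a popular drop-in form of camera conditioning for video diffusion adapters of the kind debated above.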
Bk13Qfu8Ru
Severing Spurious Correlations with Data Pruning
[ "Varun Mulchandani", "Jung-Eun Kim" ]
Deep neural networks have been shown to learn and rely on spurious correlations present in the data that they are trained on. Reliance on such correlations can cause these networks to malfunction when deployed in the real world, where these correlations may no longer hold. To overcome the learning of and reliance on such correlations, recent studies propose approaches that yield promising results. These works, however, study settings where the strength of the spurious signal is significantly greater than that of the core, invariant signal, making it easier to detect the presence of spurious features in individual training samples and allow for further processing. In this paper, we identify new settings where the strength of the spurious signal is relatively weaker, making it difficult to detect any spurious information while continuing to have catastrophic consequences. We also discover that spurious correlations are learned primarily due to only a handful of all the samples containing the spurious feature and develop a novel data pruning technique that identifies and prunes small subsets of the training data that contain these samples. Our proposed technique does not require inferred domain knowledge, information regarding the sample-wise presence or nature of spurious information, or human intervention. Finally, we show that such data pruning attains state-of-the-art performance on previously studied settings where spurious information is identifiable.
[ "Spurious Correlations", "Data Pruning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=Bk13Qfu8Ru
https://openreview.net/forum?id=Bk13Qfu8Ru
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yrAJkjRkU9", "xzV2rkl9jA", "wZ5E3xHzxY", "uXTn5BqYXS", "tGgSnxyIPu", "rqxows17vx", "oNgvQRtGog", "oDZjKvPWc1", "n3hHBbw7oX", "mke7DbQibw", "lgiKd6rgKe", "l1xOjThJ3i", "gADRAH23Ty", "f0tolVOtls", "eRYo7LgPYD", "eEIGIJZNJY", "duxMTJ5HvT", "dicrI60ggL", "amSaaYE8wM", "aaSdC6eK4z", "ZyZha3WlwH", "ZpIJzxt9yS", "WrgkkyleOQ", "WNEtA50sdL", "VI8JOsIloX", "Unq30wBKv1", "UQYzAf4Xcq", "T3wFVeAOp7", "RMNYpnOx3Q", "R7Xk2bQqpi", "PgVNo6ImZe", "KznjjNoi4W", "Jmxq7JfaOI", "JLYw21L8PO", "IUr75VoiNW", "IREgDZjKHh", "Fys0qfItwX", "FOcIU04MVg", "ER6ZhvjvLQ", "E37LoGnnlF", "BnVasKpaz0", "B1QW8fCZS7", "7qHqMQDuIa", "3kTt6hlY7C", "3WaMh3TFRn", "2DuodJI7eY", "0jMSn9OGmt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732904705675, 1732903856928, 1731968472091, 1730776741507, 1732793819815, 1732574848252, 1730131066690, 1732229553402, 1731970253625, 1732793616318, 1731971273228, 1732384680762, 1731969116411, 1731970933304, 1731968934803, 1729973779497, 1732506047791, 1732394106392, 1737523440627, 1731970530390, 1732380295720, 1732554115448, 1732598755177, 1731969373496, 1732590843320, 1730673041572, 1732299144598, 1733114116856, 1732433094591, 1732205657938, 1732904254878, 1731968291588, 1734640467261, 1733246041246, 1732505253600, 1731968693482, 1732220234294, 1732904602762, 1732299034124, 1733114261637, 1732849048211, 1732514758228, 1732433048477, 1731977644816, 1732506098633, 1732299277296, 1732432825835 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_7BQb" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_7BQb" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_7BQb" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_n9k8" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Area_Chair_N8q2" ], [ 
"ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_eCbW" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_n9k8" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Area_Chair_N8q2" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ], [ "ICLR.cc/2025/Conference/Submission1207/Area_Chair_N8q2" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Authors" ], [ "ICLR.cc/2025/Conference/Submission1207/Reviewer_X6Fn" ] ], "structured_content_str": [ "{\"comment\": \"**Insights 1 and 2:**\\n\\n**Samples with simple core features do not contribute to spurious correlations. Samples with hard core features are primary contributors to spurious correlations.**\\n\\nTo show this, we perform the same experiment in Section 4, but instead of CIFAR-10S, we use the MultiNLI dataset. First, we remove all samples with negation words from the **training** data and then we compute the sample-wise difficulty scores as we do for CIFAR-10S in Section 4. We then create two settings: one where we introduce the spurious negation word \\u201cnever\\u201d at the end of the 100 hardest input samples belonging to class 1 (**contradicts**) and another where we introduce the spurious negation word \\u201cnever\\u201d at the end of the 100 easiest input samples belonging to class 1 (**contradicts**). We do the same to a set of test samples belonging to class 2 (**neutral with**) and class 3 (**entailed by**).\\n\\nConsistent with the standard MultiNLI setting, we measure the degree of spurious feature reliance through Worst Group Accuracy (accuracy of the set of test samples of class 2 or class 3 with the spurious feature).\\n\\nWe observe that WGA is significantly worse when the word \\u201cnever\\u201d occurs in the hardest samples vs. the easiest samples during training.\", \"introducing_spurious_feature_in_easiest_100_samples\": \"WGA = **55.22%**\", \"introducing_spurious_feature_in_hardest_100_samples\": \"WGA = **1.04%**\\n\\nA higher WGA indicates low reliance on spurious features.\\n\\nThe gap in worst group accuracy is 54.18%. Note that the number of samples containing the spurious feature is the same in both settings (= 100).\\n\\nAdditionally, we note that there are 191,504 training samples in this setting. There are 57,498 samples belonging to the **contradicts** class. We introduce the spurious feature in only 100 samples of the **contradicts** class (0.17% of samples within the class, 0.0522% of all samples in the training set.) 
We also observe that in a setting with no spurious features during training, Worst Group Accuracy is 67.42%.\\n\\nSimply varying which 100 samples contain the spurious negation word \\u201cnever\\u201d has such a **huge** impact on Worst Group Accuracy. This finding is extremely insightful, novel and is consistent with the results observed in Section 4 (Figure 2) of our paper.\\n\\nThis experiment reinforces the claim that samples with hard core features are primary contributors to spurious correlations and that samples with simple core features do not contribute to spurious correlations.\\n\\n**Insight 3:**\\n\\n**Excluding a few key samples during training severs spurious correlations.**\\n\\nFor this, we simply show the results in Figure 4 but for the MultiNLI dataset. Note: Due to computational constraints, we only show three pruning sparsities.\", \"worst_group_accuracy\": \"| Prune % | 20%| 25%| 33.33%|\\n| -------- | ------- | ------- | ------- |\\n|Pruning Easiest| 66.81| 65.59| 65.33|\\n|Pruning Hardest| 72.21| 73.17| 76.05| \\n\\nKindly note that the model attains 65.9% Worst Group Accuracy on the original, unpruned dataset.\\n\\nWe refer the Reviewer to Fig. 10 in the appendix for a better understanding.\\n\\n**Insight 4:**\\n\\n**Spurious feature strength created a specific distribution of the training data.**\\n\\nTo show this, we use a smaller subset of the MultiNLI dataset and vary the strength of the spurious signal by varying the proportion of samples containing the spurious feature.\", \"distribution_of_samples_with_spurious_features_in_identifiable_setting\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|57.4% | 24.3% | 11.5% | 6.8% |\", \"distribution_of_samples_with_spurious_features_in_unidentifiable_setting\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|28% | 21% | 24% | 27% |\\n\\nTo confirm that the unidentifiable setting still causes the network to rely on spurious correlations, we create another setting where we remove all samples with spurious features and compare the Worst Group Accuracies below:\", \"unidentifiable_setting\": \"64.89%\", \"no_samples_with_spurious_features_setting\": \"70.73%\\n\\nWe observe that in the setting with no samples containing the spurious features, Worst Group Accuracy is higher, indicating that the unidentifiable setting still causes the network to rely on spurious correlations but the samples containing spurious features are uniformly distributed.\\n\\n---\\n\\\\\\nKindly let us know if there are any remaining concerns and we will be happy to address them.\"}", "{\"comment\": \"We thank the reviewer for their tireless effort towards our work! We sincerely thank them for taking the time to understand our work in depth and present detailed comments and suggestions that have definitely helped improve our paper. We are grateful that the reviewer recognizes the key insights in the paper that we are excited about. Thank you again for your time and effort.\"}", "{\"title\": \"Response to Reviewer n9k8 (Part 2)\", \"comment\": \"**\\\"Scaling up the difficulty of the core features of the 100 samples.\\\"**\\n\\nApologies for the confusion. In the CIFAR-10S experiments in Section 5, we take 100 samples to which we add synthetic spurious features and we keep changing the 100 samples like a 100-sample-sized sliding window to probe into. 
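**Illustrative sketch: difficulty-ranked injection windows.** The 100-sample windows described here can be made concrete as follows: samples of the target class are sorted by a precomputed difficulty score and the spurious token is appended to one chosen window. The snippet below is a schematic illustration of that protocol, with hypothetical function and argument names, not the authors' implementation.

```python
def inject_spurious_token(texts, class_ids, difficulty, target_class,
                          window_start, window_size=100, token=" never"):
    """Append a spurious token to one difficulty-ranked window of a class.

    texts      : list of training sentences (a modified copy is returned).
    class_ids  : per-sample class labels.
    difficulty : per-sample difficulty scores (higher = harder core feature).
    window_start = 0 targets the easiest `window_size` samples of the class;
    window_start = n_class - window_size targets the hardest ones.
    """
    idx = [i for i, c in enumerate(class_ids) if c == target_class]
    idx.sort(key=lambda i: difficulty[i])                 # easiest -> hardest
    window = idx[window_start:window_start + window_size]

    out = list(texts)
    for i in window:
        out[i] = out[i] + token                           # e.g. the negation word "never"
    return out
```

Sliding `window_start` across the sorted list yields the family of settings that the sliding-window experiments compare.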
By scaling up the difficulty of the core features of the 100 samples, we simply increase the difficulty of the 100 samples into which we introduce the spurious features. We first sort the samples by difficulty. For results in Figure 2, we simply introduce spurious features in the easiest 100 and the hardest 100 samples. For results in Figure 3, we consider all unique 100 sample subsets by sliding a window of sample size 100 across the sorted samples list. Since there are 5000 total samples in the class, we have 50 different subsets in which spurious features are introduced. We will be more than happy to provide any further explanation.\\n\\nAs we increase the difficulty of the 100 samples into which we introduce spurious features, we see that spurious feature reliance exhibits polynomial growth instead of linear growth with increasing difficulty. \\n\\n**What do the authors mean by \\u201cgroup labels\\u201d in line 457?**\\n\\nWe sincerely apologize for not including a definition of group labels. We follow the literature concerning spurious features/correlations, where group labels commonly refer to labels that indicate the presence or absence of spurious features in each training sample within a class.\\n\\nThus, within a class, samples with the spurious feature would have a group label = 1 and samples without the spurious feature would have a group label = 0. Note that this is different and independent from class labels in classification tasks. We make this point clearer in the paper. In identifiable settings (as in the existing literature) where the strength of the spurious signal is relatively stronger, it is trivial to identify group labels, as is shown in Section 3 (Figure 1 (b) (Right)).\\n\\n\\n\\n**Can the authors provide a brief and self contained description of the point that wants to be raised in this section (while I believe this is not the central part of the work)?**\\n\\nOur work is primarily interested in novel settings where spurious signals are weak, in that the signals are not easily identifiable, as you have described in sentence 2 of your review.\\n\\nHowever, in Section 6.3, we show that even in previously studied settings where the spurious signals are \\u201cstrong\\u201d (identifiable settings), simply pruning a few samples can yield state-of-the-art results. While not the primary focus of our paper, we believe this finding is very important. Current techniques that attain good performance in these settings are extremely complex and computationally expensive. Our method is very simple to understand, takes only a few additional lines of code, is easy to reproduce and yields state-of-the-art performances, even on benchmarks that do not fit into the primary objectives of this paper.\\n\\nPlease note that Waterbirds and MultiNLI are two of the most commonly studied benchmarks in literature concerning spurious correlations for Vision and Language tasks (Sagawa et. al. 2020 ICML, Kirichenko et. al. 2023 ICLR, Liu et. al. (ICML, 2021), Zhang et. al. (ICML, 2022), Ye et. al. 2023 AISTATS). Additionally, we highlight the robustness of our pruning technique by showing that pruning sparsities within a wide range can attain state-of-the-art or competitive performance on these benchmarks (Figure 8).\\n\\nWe have improved the writing of this section thanks to your suggestions and questions. Please let us know if you have any concerns and we will be happy to address them.\\n\\n(Liu et. al. 
ICML, 2021) \\u201cJust Train Twice: Improving Group Robustness without Training Group Information,\\u201d ICML, 2021.\\n\\n(Zhang et. al. ICML, 2022) \\u201cCorrect-n-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations,\\u201d ICML, 2022.\"}", "{\"summary\": \"This paper aims to mitigate the problem of spurious correlations in deep learning models. Through a sequence of simulation experiments, they authors discover that a small subset of training data containing \\u201chard\\u201d core features is primarily responsible for the model learning spurious correlations, even when the spurious signal is weak. The authors then demonstrate through subsequent experiments that pruning this subset of samples effectively severs the link between spurious and core features.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This work represents an example of my favorite kind of work. A very important and easy-to-understand problem (spurious correlations learned by models that affect their generalizability) and a very intuitive solution, once the authors explain and demonstrate it. I also felt the work was well written, guided the reader through the gaps in the literature, and step by step demonstrated the veracity of their arguments with simple and compelling experiments.\", \"weaknesses\": \"Inductive Solution\\nThere I assume many types of spurious correlations can be learned by a model. For example, it is will documted that outliers can overinfluence functional estimation, and create spurious corrleations. The proposed solution addresses only a specific type of spurious correlation (i.e., roughly speaking, where a certain class has an over-representation of a given feature that its not truly indicative of the class label), which arises under the particular conditions the authors generate (i.e., \\\"the spurious feature takes the form of a line running through the center of the images\\\", \\\"images of men with glasses\\\", etc. ), given the distributional properties of the datasets considered, and given deep learning architectures (e.g., mostly ResNet) considered. Moreover, their solution relies heavily on specific empirical observations about how this particular type of spurious correlation manifests and is distributed, given the particular generation process followed. For instance, the authors observe that in their simulations \\\"in settings where the strength of the spurious signal is not significantly greater than the strength of the invariant signal...samples containing spurious features are uniformly distributed across the training distribution...[and] the presence of spurious features does not have a significant impact on the training distribution...[therefore] samples containing hard core features that also contain the spurious feature are primary contributors to the formation of spurious correlations\\\". They therefore, conclude that \\\"to mitigate spurious correlations, one would only have to prune the hardest samples in the training set, as this subset of the data would contain samples with spurious features that have hard core features.\\u201d This reasoning appears to be purely inductive and raises questions about the generalizability of the observations and subsequent solution. 
Specifically, I'd assume that the observations about the distribution of spurious signals and sample difficulty will hold across other types of spurious correlations, which may behave differently under varying dataset characteristics or model architectures. \\n\\nTheoretical Justification/Generality\\nWithout a theoretical foundation supporting the general applicability of the inducitve observations, it remains unclear whether the observations that lead to solution are universal and/or if the prunning method can serve as a general approach to mitigating spurious correlations in models. Therefore, the paper could be significantly improve with a theoretical justifications for the generality of the inductive observation. For example, 1) is the particular type of spurious correlation the authors consider representative of all spurious correlations, 2) does the distributional uniformality of samples containing features with spurious correlation across the training samples generalize across types of spurious correlations, 3) is the pruning of this particular type of subset as a solution to (any type of) spurious correlations, and 4) are these justifications architecture/data dependent. \\n\\nRelated, but unique, I think structuring the pruning solution as a formal model, statistical hypothesis test, etc. would strengthen the theoretical foundation of the proposed pruning technique, and again perhaps shed light on it's generality. A theoretical or statistical clarity on why the samples containing spurious correlation are distributed uniformaly (e.g., I wonder if this is a consequence of the inverse probability transform). A claim that spurious feature reliance experienceing polynomial growth with increasing sample difficulty, would be more trustworthy if supported by some theoretical formulation or model. \\n\\nPruning Consequences\\nWhile the authors demonstrate that prunning a particular type of subset of data points will reduce spurious correlations, there is no discussion of what are the consequences. I am inclined to beleive that throwing out data is going to have some sort of negative consequence, and therefore it important to know what trade-off is being made. This will allow a better comparison to other methods that do not simply prune data as well as enable practitioners to understand/determine if the cost of pruning data is worth the benefit in removing spurious correlations.\", \"questions\": \"My question(s) would simply be can the authors provide theory, models, statistical justification etc. 
(or fairly robust/general empirics) to address the listed weaknesses?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer X6Fn (Part 6)\", \"comment\": \"**Insight 4:**\\n\\n**Spurious feature strength created a specific distribution of the training data.**\\n\\nTo show this, we use a smaller subset of the MultiNLI dataset and vary the strength of the spurious signal by varying the proportion of samples containing the spurious feature.\", \"distribution_of_samples_with_spurious_features_in_identifiable_setting\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|57.4% | 24.3% | 11.5% | 6.8% |\", \"distribution_of_samples_with_spurious_features_in_unidentifiable_setting\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|28% | 21% | 24% | 27% |\\n\\nTo confirm that the unidentifiable setting still causes the network to rely on spurious correlations, we create another setting where we remove all samples with spurious features and compare the Worst Group Accuracies below:\", \"unidentifiable_setting\": \"64.89%\", \"no_samples_with_spurious_features_setting\": \"70.73%\\n\\nWe observe that in the setting with no samples containing the spurious features, Worst Group Accuracy is higher, indicating that the unidentifiable setting still causes the network to rely on spurious correlations but the samples containing spurious features are uniformly distributed.\\n\\n---\\n\\\\\\n**Additional Results:**\\n\\n**Distribution of the MultiNLI setting covered in the paper (Identifiable) (Figure 5(b) extension).**\", \"distribution_of_samples_with_spurious_features\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|44.39% | 23.83% | 16.68% | 15.08% |\\n\\nQ1 in this setting contains almost half of all samples with spurious features while Q4 only contains 15%. This shows that samples with spurious features are not uniformly distributed when viewed through the lens of sample difficulty. This is in contrast to the CelebA setting (Unidentifiable), in which samples with spurious features are uniformly distributed (Figure 5(b) right in text).\\n\\nWe refer the Reviewer to Fig. 11 in the appendix for a better understanding.\\n\\n---\\n\\\\\\nKindly let us know if there are any additional concerns that we can address which can help increase our score. We greatly appreciate your detailed comments that have helped improve our paper. Thank you again for taking the time to review our work.\"}", "{\"comment\": \"I appreciate the response from the authors. My questions 2, 3 and 4 are addressed in the response. Nevertheless, the question 1 remains.\\n\\nThe authors state that the strength of a feature includes both its frequency and area. Those concepts are actually not well-defined. For example, the eye-glasses is considered the spurious feature in the CelebA dataset. While eyeglasses can have different shapes, colors, and sizes, red eyeglasses shall be conceivably easier to identify than a transparent-frame. I would like to express two points: the \\\"magnitude\\\" is clearly an important aspect of the feature strength, and \\\"feature\\\" itself is not well-defined, thus making it vague in what scenarios the conclusions of this work will hold. 
The latter concern agrees with the comment of Reviewer X6Fn on the generalizability.\"}", "{\"summary\": \"This paper study the spurious feature associated with the trained model, focusing on how this correlation is formed during the training process and proposing a data pruning approach to mitigate spurious correlation. Specifically, the authors find that spurious correlation is caused by a few samples that are hard to learn and contains spurious features. As a result, this paper proposes to remove those data points from the training dataset to improve model performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper organization is clear and easy to follow.\\n2. The problem of identifying and mitigating spurious correlation is critial.\\n3. This paper considers a challenging scenario where spurious features have weak signals and therefore are difficult to be detected.\", \"weaknesses\": \"**Major Concerns:**\\n\\n1. Lack of rigorous formulation and solution. Throughout this paper, there is no single equation or definition that clearly states the problem and the proposed solution. Key concepts that heavily mentioned in the paper, such as spurious correlation, core/invariant features, simple/hard features, are not well-defined. For a more readable paper, the authors are encouraged to (1) state a self-contained problem with properly defined concepts (2) write a pesudocode for the proposed data pruning algorithm (3) disclose complete experimental details including but not limited to training/test dataset processing procedure, models, calculation of evaluation metric (which is not well-defined as well), and baseline methods.\\n\\n2. Lack of clarity in findings. Most figures and tables are not self-explained and neither explained by their titles. As a result, it is confusing how do they support the claims in the main text. This problem is aggravated due to the previous point. For example, I did not understand the message in Figure 2, as concepts like \\\"easy/hard\\\" are undefined, and authors do not explain how they perform the experiments, e.g., how to add spurious features into the training data and how to calculate the misclassification rates. The authors should review all their findings and claims and rewrite their evidence to be more supportive and convincing.\", \"questions\": \"Please find in the weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed responses. The revised manuscript greatly enhances its readability and I have a better understanding on the main contribution of this paper.\\n\\nI would like to raise my score to 5, and am willing to increase if the following questions are also addressed: \\n\\n(1) The \\\"strength\\\" of a feature can be understood as its magnitude like the strength of signal/noise. It seems the strength in this paper actually represent its frequency. Is it correct? If so, I would suggest to replace strength as frequency. \\n\\n(2) While I agree with most observations and the proposed method for unidentifiable case, could the authors explain how to prune in the identifiable setting? Given that the authors claim \\\"Yang et al. 
(2024) show that in settings where the strength of the spurious signal is significantly greater than the strength of the invariant signal, it is possible to identify which samples contain spurious features in them and which ones do not\\\", how to identify spurious samples as stated in \\\"simply pruning those spurious samples containing the hardest core features\\\". Also, what does that mean by \\\"we work with group labels as is done in ...\\\"? \\n\\n(3) Can authors clarify the rationale behind \\\"the presence of strong spurious information enables the network to understand samples with hard core features better\\\"?\\n\\n(4) Suppose a sample has a hard invariant feature and an easy spurious feature, should it be easy or hard to learn (small or large training error)? My understanding is that the sample diffculty is estimated by the training error, in this case, it is unclear how to identify \\\"spurious samples containing the hardest core features\\\".\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 1)\", \"comment\": \"We thank the reviewer for their comments. We have addressed all your concerns by providing all relevant definitions, detailed figure captions, technical details for the proposed approach and **additional** training details. Note that in addition to providing additional training details, we have also provided our code through an anonymous github repository to ensure easy reproducibility (Link: https://github.com/Anon-ICLRDP/ICLR_DP). We have included all these components in our paper but we provide them below for your convenience:\\n\\n**Key Concept Definitions:**\\n\\n**Lines 82-88 in revised text:**\\nConsistent with past literature, we study the supervised classification setting where $S = {\\\\{(x_i, y_i)}\\\\}^N_{i=1}$ denotes the training dataset of size $N$ and network is trained to learn a mapping between $x_i$ (input) and $y_i$ (class label) using empirical risk minimization (Vapnik, 1998). Every training sample $s \\\\in S$ contains a core feature ($c_i$) that represents its class ($y_i$). A fraction of all samples within a class contain the spurious feature $(a_i)$ associated with that class. **Core (or invariant) features** are causal to the class label $y_i$ and are fully predictive of the task, as they are present in all samples. **Spurious features** are not causal to the class labels and are partially predictive of the task, as they are present in only a fraction of all samples of a class.\\n\\nPast literature has found that during training, deep networks choose to rely on spurious features over core/invariant features if the spurious features are easier to learn than the core features. They form a correlation between these spurious features and ground truth labels. Such correlations are called spurious correlations.\\n\\n**Lines 90-92 in revised text:**\\n\\n**Spurious Correlations:** The correlation a network forms between spurious features and class labels. Such correlations are undesirable as they are not causal to class labels and can disappear during testing or become associated with a different task, causing these networks to malfunction.\\n\\n\\n**Lines 128-136 in revised text:**\\n\\n**Feature Difficulty:** Consistent with deep learning literature (specifically, those works concerned with spurious correlations), difficulty of learning a feature is determined by the following three factors: (1) Proportion (or Frequency) of training samples containing the spurious feature (Sagawa et. al. 2020 ICML, Shah et. al. NeurIPS 2020, Kirichenko et. al. 
2023 ICLR), (2) Area Occupied and Position (if it is centered or not) in the training sample (Moayeri et. al. 2022 NeurIPS) and (3) The amount of noise in the signal (Sagawa et. al. 2020 ICML, Ye et. al. 2023 AISTATS). A feature which is present in a large portion of all training samples, occupies a lot of area, is centered, and has little to no variance, is easy to learn. On the other hand, a feature which is present in a small portion of all training samples, occupies little area, is not centered, and has a lot of noise/variance, is hard to learn.\\n\\n(Shah et. al. 2020 NeurIPS) \\u201cThe Pitfalls of Simplicity Bias in Neural Networks,\\u201d NeurIPS, 2020.\\n\\n(Sagawa et. al. 2020 ICML) \\u201cAn Investigation of Why Overparameterization Exacerbates Spurious Correlations,\\u201d ICML, 2020.\\n\\n(Kirichenko et. al. 2023 ICLR) \\u201cLast Layer Re-training is Sufficient for Robustness to Spurious Correlations,\\u201d ICLR, 2023.\\n\\n(Moayeri et. al. 2022 NeurIPS) \\u201cHard imagenet: Segmentations for objects with strong spurious cues\\u201d, NeurIPS, 2022.\\n\\n(Ye et. al. 2023 AISTATS) \\u201cFreeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise\\u201d, AISTATS, 2023.\\n\\n**Problem Statement and Pseudocode:**\\n\\n**Lines 201-202 in revised text:**\\n\\n**Problem Statement:** How does one sever spurious correlations in settings where attaining spurious information is difficult or impossible?\\n\\n\\n**Lines 290-294** in revised text:\\n\\n**Pseudocode:**\\n\\n1) Train the network on the task for n epochs, where n << t and t is the total number of epochs in the training schedule.\\n2) Compute sample-wise difficulty scores as $|| p(w,x) - y||_2$ , where $p(w,x)$ is the probability distribution given by the network for sample $x$, $w$ denotes the network parameters after the nth epoch and $y$ is the one hot encoding of the ground truth value.\\n3) Prune samples with high difficulty scores.\\n4) Train a new network on the pruned dataset for t epochs.\"}", "{\"title\": \"Response to Reviewer X6Fn (Part 5)\", \"comment\": \"We thank the reviewer for their comments.\\n\\n**Clarifying Figures 4 and 6:**\\n\\nIn Fig. 4, we only prune those samples that contain spurious features and hard core features. This is to make the point that samples with hard core features + spurious features are the primary contributors to spurious feature reliance. Through Fig 6, we propose the final data pruning algorithm which prunes all the samples with hard core features (which contains samples with hard core features + spurious features and also sample with hard core features + no spurious features, Lines 435 - 438), as spurious information is unidentifiable. We have added additional text to make this clearer on Line 436.\\n\\n---\\n\\\\\\n**Presenting the same insights with textual data:**\\n\\n**Insights 1 and 2:**\\n\\n**Samples with simple core features do not contribute to spurious correlations. Samples with hard core features are primary contributors to spurious correlations.**\\n\\nTo show this, we perform the same experiment in Section 4, but instead of CIFAR-10S, we use the MultiNLI dataset. First, we remove all samples with negation words from the **training** data and then we compute the sample-wise difficulty scores as we do for CIFAR-10S in Section 4. 
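**Illustrative sketch: difficulty scoring and pruning.** The four-step pseudocode and the difficulty score $|| p(w,x) - y||_2$ quoted above translate almost directly into code. The PyTorch-style sketch below assumes a generic classifier, a loader that iterates the training set in a fixed order, and an external training routine; it is an illustration of the recipe, not the contents of the linked anonymous repository.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import Subset

@torch.no_grad()
def difficulty_scores(model, loader, device="cuda"):
    """Per-sample difficulty ||p(w, x) - y||_2 computed after a short warm-up."""
    model.eval()
    scores = []
    for x, y in loader:                       # loader must not shuffle
        p = F.softmax(model(x.to(device)), dim=-1)
        y_onehot = F.one_hot(y.to(device), num_classes=p.shape[-1]).float()
        scores.append((p - y_onehot).norm(dim=-1).cpu())
    return torch.cat(scores)

def prune_hardest(dataset, scores, prune_frac=0.10):
    """Drop the prune_frac highest-difficulty samples and keep the rest."""
    n_keep = int(len(dataset) * (1.0 - prune_frac))
    keep = torch.argsort(scores)[:n_keep]     # ascending: easiest samples are kept
    return Subset(dataset, keep.tolist())

# Step 1: warm-up train `model` for n << t epochs.
# Step 2: scores = difficulty_scores(model, ordered_train_loader)
# Step 3: pruned = prune_hardest(train_dataset, scores, prune_frac=0.10)
# Step 4: retrain a freshly initialized network on `pruned` for the full t epochs.
```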
We then create two settings: one where we introduce the spurious negation word \\u201cnever\\u201d at the end of the 100 hardest input samples belonging to class 1 (**contradicts**) and another where we introduce the spurious negation word \\u201cnever\\u201d at the end of the 100 easiest input samples belonging to class 1 (**contradicts**). We do the same to a set of test samples belonging to class 2 (**neutral with**) and class 3 (**entailed by**).\\n\\nConsistent with the standard MultiNLI setting, we measure the degree of spurious feature reliance through Worst Group Accuracy (accuracy of the set of test samples of class 2 or class 3 with the spurious feature).\\n\\nWe observe that WGA is **significantly** worse when the word \\u201cnever\\u201d occurs in the hardest samples vs. the easiest samples during training.\", \"introducing_spurious_feature_in_easiest_100_samples\": \"WGA = **55.22%**\", \"introducing_spurious_feature_in_hardest_100_samples\": \"WGA = **1.04%**\\n\\nA higher WGA indicates low reliance on spurious features.\\n\\nThe gap in worst group accuracy is 54.18%. Note that the number of samples containing the spurious feature is the same in both settings (= 100).\\n\\nAdditionally, we note that there are 191,504 training samples in this setting. There are 57,498 samples belonging to the **contradicts** class. We introduce the spurious feature in only 100 samples of the **contradicts** class (0.17% of samples within the class, 0.0522% of all samples in the training set.) We also observe that in a setting with no spurious features during training, Worst Group Accuracy is 67.42%.\\n\\nSimply varying which 100 samples contain the spurious negation word \\u201cnever\\u201d has such a **huge** impact on Worst Group Accuracy. This finding is extremely insightful, novel and is consistent with the results observed in Section 4 (Figure 2) of our paper.\\n\\nThis experiment reinforces the claim that samples with hard core features are primary contributors to spurious correlations and that samples with simple core features do not contribute to spurious correlations.\\n\\n---\\n\\\\\\n**Insight 3:**\\n\\n**Excluding a few key samples during training severs spurious correlations.**\\n\\nFor this, we simply show the results in Figure 4 but for the MultiNLI dataset. Note: Due to computational constraints, we only show three pruning sparsities.\", \"worst_group_accuracy\": \"| Prune % | 20%| 25%| 33.33%|\\n| -------- | ------- | ------- | ------- |\\n|Pruning Easiest| 66.81| 65.59| 65.33|\\n|Pruning Hardest| 72.21| 73.17| 76.05| \\n\\nKindly note that the model attains 65.9% Worst Group Accuracy on the original, unpruned dataset.\\n\\nWe refer the Reviewer to Fig. 10 in the appendix for a better understanding.\"}", "{\"title\": \"Global Response\", \"comment\": \"We thank the reviewers for their comments. 
We are glad that the reviewers recognize the key contributions of the paper:\\n\\n**Contributions:**\\n1) Identifying and targeting novel settings where obtaining spurious information is difficult/impossible and showing the failure of past techniques on these settings.\\n2) Discovering that spurious correlations are primarily formed from a handful of all samples containing spurious features through extensive empirical investigation.\\n3) Proposing a novel data pruning solution that severs spurious correlations in the proposed novel settings while attaining state-of-the-art performances on previously studied settings.\\n\\nWe believe we have addressed the concerns of all reviewers by responding to them individually. We summarize the resolution of the primary concerns that were shared by the reviewers:\\n\\n**Weaknesses Resolved:**\\n\\n1) Included more results across more architectures and hyperparameters to further reinforce our claims and observations. (Reviewers eCbW, X6Fn)\\n2) Improved all relevant formal definitions/more detailed figure captions/technical details. (Reviewers 7BQb, n9k8)\\n3) Released our code through an anonymized github link and will open-source it upon acceptance. (Reviewer eCbW)\\n\\n\\nHowever, we strongly believe that the score received from reviewer 7BQb is not a fair assessment of our work. Their only concern was the clarity or presentation of the paper but they have given our contribution and soundness a low score as well. We would like to emphasize that the clarity and writing quality of the paper were well received by all other reviewers. Nevertheless, we have addressed the clarity concerns of reviewer 7BQb and hope that our work is now well received. If not, we are happy to address any further concerns.\"}", "{\"title\": \"Apologies on Late Discussion Engagment\", \"comment\": \"I apologize to the authors and the review team for jumping into this discussion period a little later than others, I had an unexpectedly sick infant at home this week.\"}", "{\"title\": \"Response to Reviewer X6Fn (Part 1)\", \"comment\": \"We thank the reviewer for their comments and detailed review. We are happy to hear that our work represents your favorite kind of work and that our paper is written well with compelling experiments.\\n\\nBefore we address your questions, we would like to emphasize that the scope of our paper is to provide empirical evidence regarding the gaps in literature, novel insights regarding the behavior of deep neural networks in the presence of spurious correlations and the effectiveness of our proposed solution. To do so, we make sure to cover most benchmarks generally studied in spurious correlations literature (identifiable settings) while introducing new ones. To the best of our knowledge, there do not exist other benchmarks that may offer new insights but we will be more than happy to verify our claims on any other benchmarks based on your recommendation.\\n\\n**Question 1: Is the particular type of spurious correlation the authors consider representative of all spurious correlations?**\\n\\nThank you for this question. The spurious correlations we study are consistent with all previous literature that studied spurious correlations, where spurious features are easier to learn than core features and are not fully predictive of the task. In settings where this does not hold (so for instance, settings where the spurious features are harder to learn than the core features), the network will choose to ignore these features (Shah et. al. 
2020 NeurIPS) and so the spurious correlations present in the dataset are not learned. In other words, we are primarily concerned with settings where the network relies on spurious correlations present in the dataset and exclude those settings where spurious correlations are present in the dataset but are not learned. The latter has never been studied in deep learning literature. We are unaware of any other spurious correlations previously studied.\\n\\nWithin the setting studied, we define two types of spurious correlations. One where the strength of the spurious signal is significantly greater than the core signal (identifiable) and another where the strength of the spurious signal is relatively weaker (unidentifiable, novel).\\n\\nAdditionally, we would like to point out that not all benchmarks studied in this paper have spurious features that are over-represented in a certain class. In Section 2, we show that spurious correlations are formed even when 10% (or 50%) of all samples within a class contain the spurious feature.\\n\\nIf the reviewer's concern was with respect to the kinds of spurious features studied, we would like to emphasize that our experiments consider benchmarks where spurious features are backgrounds (snow, water and land backgrounds), objects or words (eyeglasses, trees, negation words, etc.) or synthetic lines through which we believe we covered almost all commonly used benchmarks regarding spurious features.\\n\\n(Shah et. al. 2020 NeurIPS) \\u201cThe Pitfalls of Simplicity Bias in Neural Networks,\\u201d NeurIPS, 2020.\\n\\n**Question 2: Does the distributional uniformality of samples containing features with spurious correlation across the training samples generalize across types of spurious correlations?**\\n\\nThe answer is no. In Section 6.1, we show that in settings where the strength of the spurious signal is significantly greater than the strength of the core features (Identifiable Settings), samples containing spurious features are no longer uniformly distributed (Figure 5). The key attribute which determines the distribution is the strength of the spurious signal in the training set. In settings where the strength of the signal is significantly greater, however, it is trivial to identify the presence of spurious features in training samples.\\n\\n**Question 3: Is the pruning of this particular type of subset as a solution to (any type of) spurious correlations?**\\n\\nYes. We are unaware of any other type of spurious correlation learned by deep neural networks and we have covered almost all existing benchmarks studied in literature. However, we are happy to verify our claims on any other benchmarks based on your recommendation.\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 3)\", \"comment\": \"**Additional Experimental Details:**\\n\\n**CIFAR-10S.** We follow a similar approach to Nagarajan et. al. (ICLR, 2021) for adding a spurious line where pixel values for a vertical row of pixels in the middle of the first input channel are set to the maximum possible value (255) before normalization and before any augmentations. We use the same augmentations used for training on the original CIFAR-10.\\n\\n**CelebA.** In this setting, we maintain 5000 Female Samples without Eyeglasses and 2500 Male samples with Eyeglasses and 2500 Male samples without Eyeglasses. Consistent with the implementation in Sagawa et. al. (ICLR, 2019), Liu et. al. 
(ICML, 2021), we do not use any augmentations.\\n\\n**Hard ImageNet.** In this setting, we maintain 58 Dog Sled samples with minimal spurious features and 100 Ski samples randomly drawn from the dataset. All remaining classes are maintained the same. We use the same augmentations used for training on ImageNet.\\n\\n**Waterbirds.** We use the original Waterbirds setting commonly used in practice (Sagawa et. al. (ICLR, 2019), Liu et. al. (ICML, 2021), Zhang et. al. (ICML, 2022), Kirichenko et. al. (ICLR, 2023)). We use the augmentations used in Kirichenko et. al. (ICLR, 2023) when training, which are similar to the augmentations used for training on ImageNet.\\n\\n**MultiNLI.** We use the original MultiNLI setting used in practice (Sagawa et. al. (ICLR, 2019), Liu et. al. (ICML, 2021), Kirichenko et. al. (ICLR, 2023)). Consistent with the implementation in Sagawa et. al. (ICLR, 2019), Liu et. al. (ICML, 2021), Kirichenko et. al. (ICLR, 2023)), we do not use any augmentations.\\n\\n(Nagarajan et. al. ICLR, 2021) \\u201cUnderstanding the failure modes of out-of-distribution generalization,\\u201d ICLR, 2021.\\n\\n(Shah et. al. 2020 NeurIPS) \\u201cThe Pitfalls of Simplicity Bias in Neural Networks,\\u201d NeurIPS, 2020.\\n\\n(Sagawa et. al. 2020 ICML) \\u201cAn Investigation of Why Overparameterization Exacerbates Spurious Correlations,\\u201d ICML, 2020.\\n\\n(Sagawa et. al. 2019 ICLR) \\u201cDistributionally Robust Neural Networks For Group Shifts: On the Importance of Regularization for Worst-Case Generalization,\\u201d ICLR, 2019\\n\\n(Kirichenko et. al. 2023 ICLR) \\u201cLast Layer Re-training is Sufficient for Robustness to Spurious Correlations,\\u201d ICLR, 2023.\\n\\nLiu et. al. (ICML, 2021) \\u201cJust Train Twice: Improving Group Robustness without Training Group Information,\\u201d ICML, 2021.\\n\\nZhang et. al. (ICML, 2022) \\u201cCorrect-n-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations,\\u201d ICML, 2022.\"}", "{\"title\": \"Response to Reviewer eCbW (Part 2)\", \"comment\": \"**Question 1: How sensitive is the pruning method to changes in hyperparameters?\", \"question_2\": \"The robustness of sample difficulty estimation across different model architectures is not clear at the current version of the paper. Would it be possible to add some results or explanations for that?**\\n\\nCurrent experiments are run on ResNet-50 and BERT, which is a transformer-based model. We also include experimental results on VGGNets, smaller ResNets and different hyperparameters in response to Weakness 2.\\n\\n**Question 3: Table 1 shows that some SOTa methods achieve better results. Could you please give some explanation on that? Also, could you please open source the code to enhance the reproducibility?**\\n\\nOnly one existing SOTA method achieves better results (Worst-Group Accuracy) on only one dataset (MultiNLI). While we do not have an explanation for this, it is important to note that results on identifiable benchmarks are not the primary focus of the paper. The primary focus of our paper is on novel settings where spurious information is unidentifiable. Additionally, current techniques that attain good performance in these settings are extremely complex and computationally expensive. 
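**Illustrative sketch: the CIFAR-10S spurious line.** The CIFAR-10S construction described above (a vertical line through the middle of the first input channel set to the maximum value 255, applied before normalization and augmentation) is easy to reproduce. The helper below is an illustrative version for a uint8 H x W x 3 image; the function name is hypothetical rather than taken from the paper's code.

```python
import numpy as np

def add_spurious_line(img):
    """Return a copy of a (H, W, 3) uint8 image with the synthetic spurious line.

    The middle column of the first (red) channel is set to the maximum pixel
    value, mimicking the easy-to-learn spurious feature added to selected
    training samples.
    """
    out = img.copy()
    mid = out.shape[1] // 2
    out[:, mid, 0] = 255
    return out
```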
Our method is very simple to understand and implement - requiring only a few additional lines of code, is easy to reproduce, and yields state-of-the-art performance, even on benchmarks that do not fit into the primary objectives of this paper.\\n\\nSure, to enhance reproducibility, we have added an anonymized repository for the experimental results presented in this paper, and we will open-source it upon this paper\\u2019s acceptance. Kindly let us know if you have any difficulties executing this code and reproducing the results. Link: https://github.com/Anon-ICLRDP/ICLR_DP\\n\\n\\n**Question 4: Suggestions for Improvement:**\\n\\n**1) It would be helpful to include a section that discusses the potential negative impacts or limitations of the method, such as the risk of pruning samples that are informative but rare.**\\n \\nWe thank the reviewer for this suggestion. We believe that it is important to include such limitations. However, in our extensive empirical evaluation across multiple different datasets, we see little to no reduction in testing accuracy when pruning these key samples because they generally do not contribute much to generalizability (Lines 366 - 368, Figure 4 (Right), Figure 6 (Right), Figure 7 (Right), Table 1 (Mean Accuracy)). If one were to continue to prune more of the harder samples, test accuracy would drop (Sorscher et. al., 2022 NeurIPS). However, based on our current observations, it is clear that the amount of data needed to be pruned to severe spurious correlations is less than the amount of data needed to observe noticeable drops in test accuracy. We will be happy to perform any further analysis to assess the potential impacts or limitations of the method based on the reviewer\\u2019s recommendation.\\n\\n(Sorscher et. al., 2022 NeurIPS) Sorscher, Ben, et al. \\\"Beyond neural scaling laws: beating power law scaling via data pruning.\\u201d, 2022 NeurIPS.\\n\\n**2) Extending the empirical analysis to more complex and real-world datasets with non-synthetic spurious correlations could further validate the applicability of the method.**\\n\\nWe would like to emphasize that across the 5 datasets that we study (HardImageNet, CelebA, Waterbirds, MultiNLI and CIFAR-10S), 4 of them are real-world datasets containing real-world spurious features and correlations. We intentionally included one dataset containing synthetic spurious features (CIFAR-10S) as it allows us to vary the strength of the spurious feature relative to the core, invariant feature. All of the findings from the one synthetic dataset translate perfectly to the four real-world datasets.\\n\\nTo the best of our knowledge, there do not exist any datasets utilized in literature for spurious correlations that could offer additional or better insights. However, we will be more than happy to include results from any datasets that the reviewer would suggest that we include in our analysis.\"}", "{\"summary\": \"This paper empirically investigates the phenomenon of spurious correlations being learned by deep learning models.\\n\\nThe authors are mainly interested in the setting where the spurious signals are \\\"weak\\\", in the sense that are not easy to be detected and removed.\\n\\nIn this setting, the authors propose a data pruning approach to limit the effects of the spurious correlations.\\n\\nThe method consists in looking at the (few) training points that are more difficult to classify during training. 
These points are most likely making the model overfit patterns that are not necessarily causal to the right label, and are therefore a major cause to the learning of spurious patterns. This is confirmed experimentally.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is very well written, it is clear, the approach followed by the authors is logically sound, and I personally enjoyed reading it. While I am not certain that it is true that all previous work adresses cases where the spurious signal is stronger than the core features, the contribution seems indeed novel, and the results are reasonable and well explained.\\n\\nThe paper is on point. It proposes mainly one single method, and the writing is structured in a way that the reader can digest the phenomenology before being given the final algorithm / approach.\", \"weaknesses\": \"It is sometimes not obvious what the authors mean by \\\"strong\\\" or \\\"weak\\\". While providing precise definitions is beyond the purposes of this work, some examples during the introduction could facilitate the reading. For example, I was confused in the paragraph at line 201, where the strength of the signal is defined both in terms of the geometry of the pattern and it's frequency in the data. These two aspects are fundamentally different, and putting them altogether might not result in the best model to investigate this problem...\\n\\nThe reason behing I do not give a higher schore is that some of the results are not entirely surprising. The phenomenlogy described by Figure 2 is interesting, but at the same time - to my understanding - in line with what we expect from the influence of individual samples to the final parameters of the model (see, e.g., [1, 2]). Nevertheless, this might be a new perspective in the community of spurious features / correlations, and I therefore recommend acceptance.\\n\\n[1] https://arxiv.org/pdf/1906.05271\\n[2] https://proceedings.neurips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Paper.pdf\", \"questions\": \"There should be a typo in line 162 \\\"as shown in...\\\"\\n\\nI am still confused about the setting used to obtain Figure 3. And what do the authors mean by \\\"polynomial growth\\\" in this case? What does it mean to \\\"scale up the difficulty of the core features of the 100 saples\\\"?\\n\\nI am also confused by Section 6.3 - What do the authors mean by \\\"group labels\\\" in line 457? Can the authors provide a brief and self contained description of the point that wants to be raised in this section (while I believe this is not the central part of the work)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer eCbW,\\n\\nKindly let us know if we have resolved your concerns. We have provided results across different architectures and hyperparameters, have released our code through an anonymous github link, and have provided explanations for the remaining questions. \\n\\nWe will be happy to address any remaining concerns. Thank you.\"}", "{\"title\": \"Summary of our response.\", \"comment\": \"Dear Reviewer X6Fn,\\n\\nThank you for your acknowledgment. 
We provide a condensed version of our response for your convenience:\\n\\n---\\n\\n\\n**Point 1**\\n\\nThe scope of our work is to provide empirical evidence regarding/addressing (1) gaps in literature, (2) novel insights about the behavior of deep neural networks in the presence of spurious correlations, and (3) the effectiveness of our proposed solution. \\n\\nTo do so, we make sure to cover most benchmarks commonly studied in spurious correlations literature while introducing new ones.\\n\\n**Point 2**\\n\\n**Question 1: Is the particular type of spurious correlation the authors consider representative of all spurious correlations?**\\n\\nWe cover all types of spurious correlations present in most existing benchmarks in literature concerning spurious correlations and propose new ones as well. We are unaware of any other benchmarks that may offer additional insights.\\n\\n**Point 3**\\n\\n**Question 2: Does the distributional uniformality of samples containing features with spurious correlation across the training samples generalize across types of spurious correlations?**\\n\\nNo, in Section 6.1, we show that if the strength of the spurious signal is significantly greater (identifiable settings), the distributional uniformality no longer holds.\\n\\n**Point 4**\\n\\n**Question 3: Is the pruning of this particular type of subset as a solution to (any type of) spurious correlations?**\\n\\nYes, we have shown that pruning a few key players severs spurious correlations in all types of spurious correlations studied in our paper, which, to the best of our knowledge, is representative of all spurious correlations that are currently known and studied.\\n\\n**Point 5**\\n\\n**Question 4: Are these justifications architecture/data dependent?**\\n\\nNo. Our claims and observations hold for 5 datasets and different architectures (ResNets and Transformer based models, already in the paper) and we have also shown that such pruning is robust to changes in hyperparameters and additional architectures.\\n\\n**Point 6**\\n\\n**Question 5: Concerns regarding trade-offs**\\n\\nIn our evaluation across 5 different datasets with very diverse characteristics, we see little to no reduction in testing accuracy. Based on our observations, it is evident that the amount of data needed to be pruned to sever spurious correlations is less than the amount needed to observe significant drops in test accuracy.\\n\\n---\\n\\nWe hope this helps. Thank you.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 2)\", \"comment\": \"**Clarifying Figure 2**: Please note that we have expanded on the figure caption and mentioned figure titles in the main text for better readability: (Lines 174-177 in text, with additional references in Section 5). For the reviewer, we provide a simple explanation below:\\n\\nThe figures \\u201cEasiest\\u201d refers to injecting synthetic spurious features in the easiest 100 samples while the figures \\u201cHardest\\u201d refers to injecting synthetic spurious features in the hardest 100 samples.\\n\\n**Explanation of evaluation metrics:** Reflecting on your comment, we added a more detailed explanation of the evaluation metrics used in our paper. Additional sentences: Lines 261-264 in text.\\n\\nCurrent practice in deep learning utilizes Worst-Group Accuracy (WGA) to assess the degree of spurious feature reliance in binary classification tasks. 
WGA computes the accuracy of test samples that contain the spurious feature associated with a different class during training. While suitable for simple binary classification tasks, WGA becomes insufficient to assess the reliance on spurious features in settings with multiple classes. This is because WGA cannot differentiate between loss in test accuracy due to spurious correlations, or due to lack of learnability of invariant correlations stemming from limited capacity or insufficient training data. In such settings, we measure the degree of spurious feature reliance through Spurious Misclassifications, i.e. the percentage of samples of one class (c1) containing the spurious feature of another class (c2) that are misclassified as (c2) during testing. Lower Worst Group Accuracy indicates heavy reliance on spurious correlations while high worst group accuracy indicates little to no reliance on spurious correlations. A high number of Spurious Misclassifications indicates heavy reliance on spurious correlations while a low number Spurious Misclassifications indicate little to no reliance on spurious correlations.\\n\\n**Experimental Details:**\\n\\nAll experimental details are provided in the Appendix (Sections A.1 and A.2). We have already provided the models used and included all baseline methods in our evaluation. We provide additional experimental details within the same sections.\\n\\n\\n\\n**CIFAR-10S**. We use the ResNet20 implementation from Liu et. al. (ICLR, 2019) that we train for 160 epochs. The network is optimized using SGD with an initial learning rate 1e-1 and weight decay 1e-4. The learning rate drops to 1e-2 and 1e-3 at epochs 80 and 120 respectively. We maintain a batch size of 64. Sample difficulty is computed after the 10th epoch.\\n\\n**CelebA**. We use an ImageNet pre-trained ResNet-50 from PyTorch Paszke et. al. (Neurips, 2019) that we train for 25 epochs. The network is optimized using SGD with a static learning rate 1e-3 and weight decay 1e-4. We maintain a batch size of 64. Sample difficulty is computed after the 10th epoch.\\n\\n**Hard Image-Net**. We use an ImageNet pre-trained ResNet-50 from PyTorch Paszke et. al. (Neurips, 2019) that we train for 50 epochs. The network is optimized using SGD with a static learning rate 1e-3 and weight decay 1e-4. We maintain a batch size of 128. Sample difficulty is computed after the 1st epoch.\\n\\n**Waterbirds**. We use an ImageNet pre-trained ResNet-50 from PyTorch Paszke et. al. (Neurips, 2019) that we train for 100 epochs. The network is optimized using SGD with a static learning rate 1e-3 and weight decay 1e-3. We maintain a batch size of 128. Sample difficulty is computed after the 1st epoch.\\n\\n**MultiNLI**. We use a pre-trained BERT model that we train for 20 epochs. The network is optimized using AdamW using a linearly decaying starting learning rate 2e-5. We maintain a batch size of 32. Sample difficulty is computed after the 5th epoch.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nAs we near the end of the discussion period, this is your last chance to engage.\\n\\nPlease review the authors' replies and feedback from other reviewers. If any concerns remain, request clarifications. 
The authors have one more day to respond before the reviewer-only discussion week begins.\\n\\nThank you for your efforts.\\n\\nBest regards,\\nArea Chair\"}", "{\"title\": \"Response to Reviewer X6Fn (Part 4)\", \"comment\": \"We thank the reviewer for their acknowledgement and continued efforts to improve our paper.\\n\\nOf the 4 non-synthetic tasks (non-testbed), one of them is a Language task, MultiNLI (Lines 244 - 249). This task further reinforces the generalizability of our claims and observations, as we use Transformer-based models (BERT) in this setting, which are significantly different from standard feed forward networks used in certain experimental settings in our paper. We make this clearer in the introduction of Section 4, where we emphasize that our study contains both Vision and Language Tasks.\\n\\nKindly let us know if there are any remaining concerns and we will be happy to address them. Thank you.\"}", "{\"comment\": \"I thank the authors for their continued willingness to clarify their work. Given their use of text data to demonstrate generalizability, my final request to the authors for this paper in that pursuit would be to produce examples of Figures 4 and 5b using the MultiNLI data to show that the observations/insights in them are consistent within textual data. Separately, I'd also ask the authors to add text to the paper to make clear the difference between Figures 4 and 6, they appear to be very similar, so if they are attempting to convey different information, the nuance isn't clear to me.\\n\\nRelatedly, the authors say \\\"In Fig. 4, we show that by simply excluding a few samples containing the spurious feature with hard invariant features in the CelebA setting studied in Sec. 3 (5% of all samples with spurious features in that class, 1% of the total train set), we observe significant improvements in Worst Group Accuracy\\\" but later also say \\\"we note that on pruning these samples, we do not observe significant drops in overall testing accuracies, implying that these samples do not contribute significantly to generalizability either (Fig. 4).\\\" So they appear to be claiming Fig 4, which best I can tell can only show empirical evidence of pruning's joint impact on train and test accuracy from one dataset, is used as support for the insights about two datasets, one of which is \\\"unidentifiable\\\" while the other is \\\"identifiable\\\".\\n\\n**Future Suggestions**\\n\\nIn the spirit of offering suggestions in general, below I lay out why I think the authors actually could go further than my requests above, and empirically demonstrate that **all** their major insights also exist in the textual data. Doing so, as I said prior, would go a very long way in demonstrating the generalizability of these insights. So if not for this work, perhaps something for them to consider in the future.\\n\\nAs I understand it the authors make a set of inductive observations based on analyzing various datasets. To be clear, I do not believe the authors show every insight on every data set they directly could, but since 4 out of 5 are images perhaps they feel the results are somewhat transferable. Given they only have one text data set, it could be valuable to put some additional energy into finding/generating/augmenting text data that would allow them to show each of these insights.\\n\\n\\n\\n1. Samples with simple core features do not contribute to spurious correlations.\\n\\n2. 
Samples with hard core features are primary contributors to spurious correlations.\\n\\nTo recreate 1 or 2 (Figure 2) with text data, the authors would need a \\\"test bed\\\" dataset that \\\"does not contain significant spurious cues that can impact the difficulty of learning.\\\" I assume such a text dataset would exist that they could recreate their analysis by simulating easy to difficult spurious correlation by adding text appropriately (analogous to the addition of a line to the images in CIFAR-10). Perhaps they could even use the MultiNLI directly, and simply remove (or add) negation words in all its text samples to remove the spurious correlation, since \\\"in the MultiNLI setting, the spurious feature comprises the same set of\\nnegation words across the training data.\\\" If the decision was to remove negation words in the original MultiNLI, maybe simulating increasingly difficult spurious features could be reintroducing negation words at different frequencies (i.e., the number of negation words) in each sentence, or perhaps some other better way.\\n\\n\\n 3. Excluding a few key samples during training severs spurious correlations.\\n \\n\\nTo demonstrate point 3 (Figure 4) one should be able to simply use the MultiNLI data directly. In fact, the authors claim that the results of Figure 4 are evidence for both the Celeb Data and also the MultiNLI. But I wonder if this was a mistake as Figure 6 seems to be a very similar graph also described as representing the Celeb Data?\\n \\n\\n 4. Spurious feature strength creates a specific distribution of the training data\\n\\n\\nTo demonstrate point 4, Figure 5a could again be done with whatever text data is used in points 1 and 2. Figure 5b could directly use the MultiNLI, as it is already an example of an identifiable case. I think the Multi can be adjusted to represent an unidentifiable case using the negation removal suggestions on MultiNLI mentioned for points 1 and 2. Specifically, I think one can make the MultiNLI have the same properties of the Celeb Data: one class can have all the examples of the spurious feature (men-glasses or contradicts-negation) and that proportion of class examples with that feature can be controlled (by how many contradicts examples the negation is moved from).\\n\\n\\n 5. Spurious Information is often unattainable \\n\\n\\nTo demonstrate point 5, one can again make the MultiNLI have the same properties of the Celeb Data as described in point 4, and the directly produce the graph from Figure 2.\"}", "{\"title\": \"Response to Reviewer X6Fn (Part 2)\", \"comment\": \"**Question 4: Are these justifications architecture/data dependent?**\\n\\nNo, the justifications are not architecture/data dependent. Our original experiments are conducted on ResNet-50 and Transformer based architectures like BERT. We test on 5 different datasets with different numbers of classes, input dimensions, difficulties, and spurious features. To further reinforce our empirical findings, we show that we obtain similar results with VGG16 and smaller ResNets like ResNet18. We also show that our results on the CelebA setting are consistent across different hyperparameters such as different learning rates and weight decays. 
We will include these results in the appendix.\\n\\nWorst Group Accuracy of Figure 4 for different architectures/hyperparameters for the same Prune Percentage (%):\", \"resnet18\": \"| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 26.33| 24.56| 23.63| 22.85| 22.43| 17.1| 26.33| 35.84|\\n|Pruning Hardest| 50.18| 73.67| 68.49| 74.31| 84.17| 89.35| 80.2| 83.32|\", \"vgg16\": \"| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n| Pruning Easiest| 33.45| 28.62| 28.34| 29.88| 36.39| 60.53| 71.73| 77.96|\\n|Pruning Hardest| 37.58| 54.09| 68.93| 71.59| 82.16| 85.86| 86.49| 85.23|\\n\\nLearning Rate = 0.01:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 18.98| 15.76| 12.11| 22.76| 18.63| 30.11| 14.36| 51.96|\\n|Pruning Hardest| 49.72| 48.67| 66.88| 74.44| 82.21| 78.29| 87.32| 85.99|\\n\\nLearning Rate = 0.0001:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 19.49| 16.68| 17.8| 15.55| 14.5| 18.23| 35.05| 51.37|\\n|Pruning Hardest| 45.74| 59.32| 68.9| 73.47| 79.38| 85.22| 87.61| 86.49|\\n\\n\\n\\nWeight Decay = 0.001:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 18.06| 15.64| 19.86| 16.26| 13.98| 25.05| 18.13| 22.98|\\n|Pruning Hardest| 42.63| 62.63| 70.31| 75.22| 82.7| 87.82| 88.3| 86.23|\\n\\nWeight Decay = 0.01:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 12.74| 15.56| 13.75| 17.44| 14.54| 36.03| 21.64| 27.06|\\n|Pruning Hardest| 38.57| 53.11| 57.31| 73.81| 83.43| 90.3| 87.26| 87.7|\\n\\n\\n**Question 5: While the authors demonstrate that prunning a particular type of subset of data points will reduce spurious correlations, there is no discussion of what are the consequences. I am inclined to beleive that throwing out data is going to have some sort of negative consequence, and therefore it important to know what trade-off is being made. This will allow a better comparison to other methods that do not simply prune data as well as enable practitioners to understand/determine if the cost of pruning data is worth the benefit in removing spurious correlations.**\\n\\nWe thank the reviewer for this suggestion. We also believe that it is important to include such limitations. However, in our extensive empirical evaluation across five different datasets with different numbers of classes, input dimensions, difficulties, and spurious features, we see little to no reduction in testing accuracy when pruning these key samples because they generally do not contribute much to generalizability (Lines 366 - 368, Figure 4 (Right), Figure 6 (Right), Figure 7 (Right), Table 1 (Mean Accuracy)). If one were to continue to prune more of the harder samples, test accuracy will drop (Sorscher et. al., 2022 NeurIPS). 
However, based on our current observations, it is evident that the amount of data needed to be pruned to severe spurious correlations is less than the amount of data needed to observe noticeable drops in test accuracy. We will be happy to perform any further analysis to assess the potential impacts or limitations of the method based on the reviewer\\u2019s recommendation.\\n\\n(Sorscher et. al., 2022 NeurIPS) \\\"Beyond neural scaling laws: beating power law scaling via data pruning.\\u201d, 2022 NeurIPS.\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 7)\", \"comment\": \"We thank the reviewer for their thoughts and comments.\\n\\nWe would like to clarify that in our responses, we only state that we vary frequency and area alone. If the reviewer has concerns regarding these factors, we refer them to papers (already cited in our text, Lines 127 - 135) that study their role in the strength of (spurious/core) features - Sagawa et. al. 2020 ICML, Shah et. al. 2020 NeurIPS, Moayeri et. al. 2022 NeurIPS, etc. We simply follow them. There are additional factors that can influence the strength of a feature that we do not vary/modify in our experiments. In the definition provided in the text, we also mention noise in the signal as a factor that influences the strength of a feature. To the best of our knowledge, current literature studies only these three factors extensively and this is why we only include these in our definitions.\\n\\nMost works in spurious correlations consider the simple setting where samples belonging to a class contain one core feature and may contain one spurious feature associated with that class (Sagawa et. al. 2020 ICML, Liu et. al. 2021 ICML, Zhang et. al. 2022 ICML, to list a few). In Vision settings, features are described as objects. In Language settings, features are described as words or sentences.\\n\\nThe definitions we provide are consistent with what is currently known and accepted in the literature. We do not claim to encompass all possible factors and it is for this reason that we do not provide numeric values regarding the strengths of features in the 5 settings studied. We clarify that this is beyond the scope of this work.\\n\\nAs most works in spurious correlations literature, we start with settings where spurious correlations are formed (identifiable or unidentifiable/novel) and propose solutions to tackle them, while providing novel insights and attaining SOTA on previously studied settings.\\n\\n\\\\\\n(Sagawa et. al. 2020 ICML) \\u201cAn Investigation of Why Overparameterization Exacerbates Spurious Correlations,\\u201d ICML, 2020.\\n\\n(Shah et. al. 2020 NeurIPS) \\u201cThe Pitfalls of Simplicity Bias in Neural Networks,\\u201d NeurIPS, 2020.\\n\\n(Moayeri et. al. 2022 NeurIPS) \\u201cHard imagenet: Segmentations for objects with strong spurious cues\\u201d, NeurIPS, 2022.\\n\\n(Liu et. al. ICML, 2021) \\u201cJust Train Twice: Improving Group Robustness without Training Group Information,\\u201d ICML, 2021.\\n\\n(Zhang et. al. ICML, 2022) \\u201cCorrect-n-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations,\\u201d ICML, 2022.\\n\\n\\\\\\nKindly let us know if there are any remaining concerns and we will be happy to address them.\"}", "{\"summary\": \"The paper explores how deep neural networks often rely on spurious correlations present in training data, which can lead to performance drop under distributional shifts. 
The authors highlight a novel setting where spurious signals are weaker, making their identification challenging. They propose a new data pruning technique that selectively removes some training samples that contribute significantly to the formation of spurious correlations. This technique operates without requiring detailed information on the nature or presence of spurious features. The authors demonstrate that their approach achieves state-of-the-art performance on both standard and challenging benchmarks, including scenarios where spurious features are identifiable and unidentifiable.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper addresses a scenario where the strength of the spurious signal is relatively weaker and thus it is difficult to detect any spurious informationhat spurious signals compared to other works where the strength of the spurious signal is significantly greater\\nthan that of the core. \\n2) The paper provides a thorough experimental design that spans both identifiable and unidentifiable spurious feature scenarios. The use of multiple datasets (e.g., CIFAR-10S, CelebA, Hard ImageNet, Waterbirds, MultiNLI) showcases the method's robustness.\\n3) The paper is well-organized, detailing the rationale behind the proposed method, experimental setup, and results.\", \"weaknesses\": \"1) The paper could explore more thoroughly the practicality of applying the proposed pruning method to large datasets. Specifically,even though the proposed method shows promise for datasets of moderate size, an assessment of its computational cost and efficiency on large and real world data would be beneficial.\\n2) The approach relies on assessing sample difficulty as a proxy for contribution to spurious correlations. Clarifying the robustness of this metric under different training regimes (e.g., varied architectures or optimization strategies) could strengthen the generalizability of the findings.\\n3) Although the paper discusses state-of-the-art methods, further comparative analysis with recently emerging pruning and robust training techniques that do not rely on explicit spurious feature identification would be helpful.\", \"questions\": \"1) How sensitive is the pruning method to changes in hyperparameters?\\n2) The robustness of sample difficulty estimation across different model architectures is not clear at the current version of the paper. Would it be possible to add some results or explanations for that?\\n3) Table 1 shows that some SOTa methods achieve better results. Could you please give some explanation on that? Also, \\n4) could you please open source the code to enhance the reproducibility?\", \"suggestions_for_improvement\": \"1) It would be helpful to include a section that discusses the potential negative impacts or limitations of the method, such as the risk of pruning samples that are informative but rare.\\n2)Extending the empirical analysis to more complex and real-world datasets with non-synthetic spurious correlations could further validate the applicability of the method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 5)\", \"comment\": \"**(2) While I agree with most observations and the proposed method for unidentifiable case, could the authors explain how to prune in the identifiable setting? 
Given that the authors claim \\\"Yang et al. (2024) show that in settings where the strength of the spurious signal is significantly greater than the strength of the invariant signal, it is possible to identify which samples contain spurious features in them and which ones do not\\\", how to identify spurious samples as stated in \\\"simply pruning those spurious samples containing the hardest core features\\\". Also, what does that mean by \\\"we work with group labels as is done in ...\\\"?**\\n\\nIdentifiable settings have been extensively studied in literature (Sagawa et. al. 2020 ICML, 2019 ICLR, Kirichenko 2023 ICLR, to list a few). (Sohoni et. al. 2020 NeurIPS, Liu et. al. 2021 ICML, Zhang et. al. 2022 ICML, Ahmed et. al. 2021 ICLR, Creager et. al. ICML 2021, Yang et. al. 2024 AISTATS) found that in these identifiable settings, deep neural networks form a very strong reliance on spurious features. Thus, through a network's learned (biased) representations, they were able to identify which samples within a class contain spurious features and which ones do not. One way is to cluster the network\\u2019s representations of its training samples. Since the network relies very strongly on spurious features and ignores invariant features, clusters are formed based on the presence/absence of spurious features instead of class labels. Identifying which cluster contains the spurious feature associated with that class can be done by clustering at specific points in the training schedule or by observing the margin with which samples within a cluster are classified. Another popular way to identify samples containing spurious features in identifiable settings is to simply train with high regularization, as we have already shown in Section 3 (Figure 1(b) Right). Samples with spurious features are correctly classified whereas samples without spurious features are incorrectly classified.\\n\\nSince it has been established that it is possible to identify samples containing spurious features in identifiable/popular settings, most seminal works (Sagawa et. al. 2019, ICLR, Kirichenko 2023 ICLR, Deng et. al. 2023 NeurIPS) simply make use of information regarding the sample-wise presence of spurious features. For simplicity and to reduce the number of components in our paper, we do the same. Additionally, in Table 1 of our text, we primarily compare our results to those that directly make use of information regarding the sample-wise presence of spurious features.\\n\\nBuilding on the previous paragraph, we explain group labels:\\n\\nIn most literature concerning spurious features/correlations, group labels commonly refer to labels that indicate the presence or absence of spurious features in each training sample within a class. Thus, within a class, samples with the spurious feature would have a group label = 1 and samples without the spurious feature would have a group label = 0. Note that this is different and independent from class labels in classification tasks. We have made this point clearer in the revised text (Lines 463-464). 
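For illustration, the clustering heuristic described above can be sketched as follows. This is a simplified, hypothetical implementation, not code from the cited works; it assumes that penultimate-layer features and classification margins for the samples of one class have already been extracted from the biased model:

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_group_labels(features, margins):
    """Split one class's samples into two clusters of a biased model's
    representations; the cluster classified with the larger mean margin is
    flagged as the one carrying the spurious feature (group label = 1)."""
    features = np.asarray(features)
    margins = np.asarray(margins)
    cluster_ids = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    mean_margins = [margins[cluster_ids == c].mean() for c in (0, 1)]
    spurious_cluster = int(np.argmax(mean_margins))
    return (cluster_ids == spurious_cluster).astype(int)
```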
In identifiable settings (as in the existing literature) where the strength of the spurious signal is relatively stronger, it is trivial to identify group labels, as we have explained above.\\n\\nFinally and more importantly, we would like to point out that the primary purpose of our work is to study and tackle novel settings where spurious features/information is unidentifiable and propose novel insights (Spurious correlations are formed from a handful of all samples containing spurious features). The purpose of Section 6.3 is to show that in settings where spurious features/signals are strong and identifiable, simply pruning a few samples can yield state-of-the-art results. While not the main focus of our paper, we believe this finding is very important. Current techniques that attain good performance in these settings are extremely complex and computationally expensive. Our method is very simple to understand, takes only a few additional lines of code, is easy to reproduce and yields state-of-the-art performances, even on benchmarks that do not fit into the primary objectives of this paper. The robustness of our findings, and by extension, their importance, is further highlighted by observing that pruning sparsities of a wide range can attain state-of-the-art or competitive performance on these benchmarks (Figure 8).\"}", "{\"comment\": \"Dear Reviewer eCbW,\\n\\nIt would be great if we could hear back from you so that we can know if we have addressed all of your concerns.\\n\\nThank you.\"}", "{\"title\": \"Relevant Example\", \"comment\": \"In closing, I'll highlight that the authors cited Shah et. al., 2020 in their response to my original review; this work provides a formal theory on when the network will (not) learn core (complex) vs spurious (simple) features, which then they expand on with simulations. Therefore when Shah et. al., 2020 make claims like \\\"Neural networks exhibit simplicity bias\\\" and \\\"Extreme simplicity bias leads to non-robustness\\\" it allows us to more deeply interrogate facets of their claims and understand their generalizability to settings they do not explicitly consider. If the authors of this present work were able to provide some theoretical results that even conjecture about the presence of (easy/hard) training samples with spurious features to the forming of spurious correlations and/or their removal on the accuracy, this would give way more credence to the claimed contributions and certainly would merit a higher score from me. If not, I would still like to congratulate the authors on a really interesting paper and strongly encourage them to reconsider what they feel they must claim as proven.\"}", "{\"comment\": \"I thank the authors for their careful rebuttal.\\n\\nI am satisfied with it, it clarified my previous questions, and I therefore confirm my score. I would maybe recommend to not use the term \\\"polynomial\\\" in Figure 3, used to indicate something that simply grows \\\"super-linearly\\\". 
This behaviour is not proven to be anything fundamental accross datasets, so I wouldn't stress it that much during the narrative.\\n\\nOn a separate matter, I also invite Reviewer 7BQb to reconsider their score, which I believe to not be in line with the value of this work.\"}", "{\"comment\": \"Dear Reviewer eCbW,\\n\\nAs per your request, we have:\\n\\n1) Open sourced the code.\\n2) Addressed your questions and concerns.\\n3) Provided additional experimental results across architectures and hyperparameters.\\n\\n---\\n\\\\\\nTo further reinforce the generalizability of our insights and observations, we have re-created certain critical Vision experiments in the paper for Language tasks as well (in addition to the original MultiNLI experiments already present in the paper.)\\n\\n**Insights 1 and 2:**\\n\\n**Samples with simple core features do not contribute to spurious correlations. Samples with hard core features are primary contributors to spurious correlations.**\\n\\nTo show this, we perform the same experiment in Section 4, but instead of CIFAR-10S, we use the MultiNLI dataset. First, we remove all samples with negation words from the **training** data and then we compute the sample-wise difficulty scores as we do for CIFAR-10S in Section 4. We then create two settings: one where we introduce the spurious negation word \\u201cnever\\u201d at the end of the 100 hardest input samples belonging to class 1 (**contradicts**) and another where we introduce the spurious negation word \\u201cnever\\u201d at the end of the 100 easiest input samples belonging to class 1 (**contradicts**). We do the same to a set of test samples belonging to class 2 (**neutral with**) and class 3 (**entailed by**).\\n\\nConsistent with the standard MultiNLI setting, we measure the degree of spurious feature reliance through Worst Group Accuracy (accuracy of the set of test samples of class 2 or class 3 with the spurious feature).\\n\\nWe observe that WGA is significantly worse when the word \\u201cnever\\u201d occurs in the hardest samples vs. the easiest samples during training.\", \"introducing_spurious_feature_in_easiest_100_samples\": \"WGA = **55.22%**\", \"introducing_spurious_feature_in_hardest_100_samples\": \"WGA = **1.04%**\\n\\nA higher WGA indicates low reliance on spurious features.\\n\\nThe gap in worst group accuracy is 54.18%. Note that the number of samples containing the spurious feature is the same in both settings (= 100).\\n\\nAdditionally, we note that there are 191,504 training samples in this setting. There are 57,498 samples belonging to the **contradicts** class. We introduce the spurious feature in only 100 samples of the **contradicts** class (0.17% of samples within the class, 0.0522% of all samples in the training set.) We also observe that in a setting with no spurious features during training, Worst Group Accuracy is 67.42%.\\n\\nSimply varying which 100 samples contain the spurious negation word \\u201cnever\\u201d has such a **huge** impact on Worst Group Accuracy. 
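For reference, the injection procedure is simple to script; the sketch below is illustrative only (function and variable names are hypothetical, and this is not the exact code behind the numbers reported above):

```python
import numpy as np

def inject_negation(sentences, difficulty_scores, n=100, hardest=True, word="never"):
    """Append a spurious negation word to the n easiest or n hardest samples
    of a class, ranked by their per-sample difficulty scores."""
    order = np.argsort(difficulty_scores)       # easiest -> hardest
    chosen = order[-n:] if hardest else order[:n]
    edited = list(sentences)
    for i in chosen:
        edited[i] = edited[i].rstrip(". ") + ", " + word + "."
    return edited
```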
This finding is extremely insightful, novel and is consistent with the results observed in Section 4 (Figure 2) of our paper.\\n\\nThis experiment reinforces the claim that samples with hard core features are primary contributors to spurious correlations and that samples with simple core features do not contribute to spurious correlations.\\n\\n**Insight 3:**\\n\\n**Excluding a few key samples during training severs spurious correlations.**\\n\\nFor this, we simply show the results in Figure 4 but for the MultiNLI dataset. Note: Due to computational constraints, we only show three pruning sparsities.\", \"worst_group_accuracy\": \"| Prune % | 20%| 25%| 33.33%|\\n| -------- | ------- | ------- | ------- |\\n|Pruning Easiest| 66.81| 65.59| 65.33|\\n|Pruning Hardest| 72.21| 73.17| 76.05| \\n\\nKindly note that the model attains 65.9% Worst Group Accuracy on the original, unpruned dataset.\\n\\nWe refer the Reviewer to Fig. 10 in the appendix for a better understanding.\\n\\n**Insight 4:**\\n\\n**Spurious feature strength created a specific distribution of the training data.**\\n\\nTo show this, we use a smaller subset of the MultiNLI dataset and vary the strength of the spurious signal by varying the proportion of samples containing the spurious feature.\", \"distribution_of_samples_with_spurious_features_in_identifiable_setting\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|57.4% | 24.3% | 11.5% | 6.8% |\", \"distribution_of_samples_with_spurious_features_in_unidentifiable_setting\": \"| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|28% | 21% | 24% | 27% |\\n\\nTo confirm that the unidentifiable setting still causes the network to rely on spurious correlations, we create another setting where we remove all samples with spurious features and compare the Worst Group Accuracies below:\", \"unidentifiable_setting\": \"64.89%\", \"no_samples_with_spurious_features_setting\": \"70.73%\\n\\nWe observe that in the setting with no samples containing the spurious features, Worst Group Accuracy is higher, indicating that the unidentifiable setting still causes the network to rely on spurious correlations but the samples containing spurious features are uniformly distributed.\\n\\n---\\n\\\\\\nKindly let us know if there are any remaining concerns and we will be happy to address them.\"}", "{\"title\": \"Response to Reviewer n9k8 (Part 1)\", \"comment\": \"We thank the reviewer for their comments. We are happy to hear that they found our work to be very well written and glad that they enjoyed reading our work. We have addressed their comments below:\\n\\n**It is sometimes not obvious what the authors mean by \\\"strong\\\" or \\\"weak\\\". While providing precise definitions is beyond the purposes of this work, some examples during the introduction could facilitate the reading. For example, I was confused in the paragraph at line 201, where the strength of the signal is defined both in terms of the geometry of the pattern and its frequency in the data. These two aspects are fundamentally different, and putting them altogether might not result in the best model to investigate this problem...**\\n\\nThank you for this comment. The strength of the spurious signal indicates the ease of learning the spurious feature. When the strength of the spurious signal is strong, spurious features are learned easily and spurious correlations are formed easily. 
In all literature concerning spurious correlations, the strength of the spurious signal is primarily determined by the following three factors: (1) Proportion (or Frequency) of training samples containing the spurious feature (Sagawa et. al. 2020 ICML, Shah et. al. NeurIPS 2020, Kirichenko et. al. 2023 ICLR), (2) Area Occupied and Position (if it is centered or not) in the training sample (Moayeri et. al. 2022 NeurIPS) and (3) The amount of noise in the signal (Sagawa et. al. 2020 ICML, Ye et. al. 2023 AISTATS). A feature which is present in a large portion of all training samples, occupies a lot of area, is centered, and has little to no variance, has a very strong signal. On the other hand, a feature which is present in a small portion of all training samples, occupies little area, and has a lot of noise/variance, has a very weak signal. For instance, in Section 2, to reduce the strength of the spurious signal, we reduce the proportion of training samples that contain the spurious feature. We have added this to lines 128-136 of the revised text.\\n\\nWe ensure that both aspects (geometry and frequency) are not modified in the same experiment. So for instance, in Section 5, the strength of the spurious signal for CIFAR-10S is varied by only changing the geometry of the spurious feature. In Section 6, the strength of the spurious signal for CIFAR-10S is varied by modifying the proportion of samples containing the presence of the spurious feature. We show that these experiments can be replicated by performing the other modification. Below, we present the results in Section 6 (Figure 5) for CIFAR-10S across three seeds by doing both: varying the geometry (or increasing the area occupied compared to the unidentifiable setting) and varying the proportion (increasing the proportion/frequency of samples containing the spurious feature compared to the unidentifiable setting): \\n\\nIdentifiable Setting distribution by varying geometry (Spurious Samples only):\\n\\nQ1(Easiest): 53.2%, Q2: 24.26%, Q3: 16.13%, Q4 (Hardest): 6.39%\\n\\nIdentifiable Setting distribution by varying proportion (Spurious Samples only, already in the paper):\\n\\nQ1(Easiest): 49.93%, Q2: 38.68%, Q3: 8.78%, Q4 (Hardest): 2.6%\\n\\nUnidentifiable Setting distribution (Spurious Samples only, already in the paper):\\n\\nQ1(Easiest): 30.93%, Q2: 22.26%, Q3: 23.73%, Q4 (Hardest): 23.06%\\n\\nWe observe that in both Identifiable settings, Q1 contains most of the samples with spurious features while Q4 contains few samples with spurious samples.\\n\\nIn other words, these modifications are interchangeable as they ultimately influence the same property: The strength of the spurious signal.\\n\\n\\\\\\n\\\\\\n(Shah et. al. 2020 NeurIPS) \\u201cThe Pitfalls of Simplicity Bias in Neural Networks,\\u201d NeurIPS, 2020.\\n\\n(Sagawa et. al. 2020 ICML) \\u201cAn Investigation of Why Overparameterization Exacerbates Spurious Correlations,\\u201d ICML, 2020.\\n\\n(Kirichenko et. al. 2023 ICLR) \\u201cLast Layer Re-training is Sufficient for Robustness to Spurious Correlations,\\u201d ICLR, 2023.\\n\\n(Moayeri et. al. 2022 NeurIPS) \\u201cHard imagenet: Segmentations for objects with strong spurious cues\\u201d, NeurIPS, 2022.\\n\\n(Ye et. al. 2023 AISTATS) \\u201cFreeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise\\u201d, AISTATS, 2023.\\n\\n\\\\\\n**There should be a typo in line 162 \\\"as shown in...\\\"**\\n\\nThank you for pointing this out. 
We agree that there is a typo in line 162 and we fix the reference.\"}", "{\"metareview\": \"### Summary\\n\\nThe authors investigate spurious correlations in a controlled environment by introducing them into existing datasets. Through a series of calibrated experiments, they identify how individual samples contribute to spurious correlations. This leads to a pruning strategy that significantly improves worst-group performance without drastically reducing average performance. The proposed method achieves state-of-the-art results compared to other strategies.\\n\\n### Strenghts\\n\\nThe paper is well-written and easy to follow. \\nThe authors guide the reader through their experiments, elucidating their findings and the reasoning behind the method.\\nThis paper gives both a contribution to the understanding of the phenomenon and to the development of a mitigation strategy. \\n\\n### Weaknesses \\n\\nNo major concerns.\\nMarginal improvements over other mitigation strategies.\\n\\n### Reasons for acceptance\\n\\nThis paper presents a simple yet impactful idea, supported by experiments, that leads to a practical algorithm. All major critiques have been addressed, and I see no reason for rejection.\", \"additional_comments_on_reviewer_discussion\": [\"**Clarity:** Reviewers raised several questions about the core ideas and interpretation of the results. The authors provided convincing responses, conducted additional experiments, and updated their submission accordingly.\", \"**Limited experiments:** Though limited in benchmarks and architectures, the experiments align with related works. The authors also expanded their results.\", \"**Reproducibility:** Initially criticized for not sharing code, the authors have since released it.\", \"The discussion was extensive, and some reviewers stopped engaging, but I believe their concerns were addressed in the final replies.\"]}", "{\"title\": \"Global Response for the Area Chair and the Reviewers\", \"comment\": \"We thank the reviewers for their comments, which have helped improve the quality of this paper. We strongly believe that we have addressed all concerns raised by all reviewers. Below, we summarize the discussion at a high level:\\n\\n---\\n\\n>**Summary**\\n\\nThe reviewers recognized that this work tackles an important and critical problem (X6Fn, 7BQb) through a very intuitive solution (X6Fn), contains novel contributions (n9k8, X6Fn), delivers interesting and exciting insights and observations (X6Fn), discovers some valuable generalizable truths (X6Fn), contains simple and compelling experiments (X6Fn), provides a thorough experimental design with multiple datasets (eCbW), and that it was well written (n9k8, X6Fn) and well organized (eCbW, n9k8, 7BQb). While some reviewers had some initial comments regarding clarity of the paper (7BQb, n9k8), these have now been addressed and acknowledged by the reviewers.\\n\\n---\", \"the_following_are_comments_that_we_have_addressed_but_have_not_heard_back_regarding\": \"1) Reviewer **eCbW**: Reviewer eCbW asked us to **open source our code, which we have**. They point out that the settings studied in this paper are synthetic and that we should include settings with real-world spurious features. **This is incorrect** as only 1 out of the 5 datasets studied has synthetic spurious features which is **necessary and intentional** as it enables us to alter the synthetic spurious feature to draw insights. The other four datasets contain **real-world** spurious features in real-world settings. 
Furthermore, apart from the novel settings proposed, the standard settings studied are consistent with those in the literature. We have also received **no references or settings** that they want us to reproduce our results in. They also asked us to provide **additional results across different architectures and hyperparameters, which we have**. However, we have not heard back from them at all during the rebuttal phase.\\n \\n2) Reviewer **7BQb**: As we have stated in our last response, we believe Reviewer 7BQb may have misunderstood our explanation on what impacts the strength of a feature. There are many factors that can be varied to alter the strength of a feature. In our paper, the strength is **varied** by two factors: frequency and area. They additionally claim that these concepts are not well defined. We emphasize, however, that these concepts are heavily studied in all relevant literature that we have cited.\\nWe also note that discovering and considering all factors that can impact the strength of a feature is beyond the scope of this paper and that this does not impact the generalizability of our findings, as they ultimately alter the same thing: the strength of a feature. We have already shown that frequency and area can be modified interchangeably to alter the strength. One could also modify the strength by making red color in eyeglasses a lighter/darker shade, as the reviewer has pointed out. For instance, one could vary the shade to create the same plots in Figure 5 (a) but this will **not offer any new insights**. It is also unclear why the reviewer believes that their example of eyeglasses impacts generalizability and relates to Reviewer X6Fn\\u2019s comments. However, we have not heard back from them.\"}", "{\"title\": \"Response to Reviewer X6Fn (Part 3)\", \"comment\": \"We thank the reviewer for their comments. We are glad that they like our work very much, believe that the observations are exciting and that our paper is really interesting. It is encouraging to see these comments. We try our best to address their follow-up concerns below:\\n\\n\\\\\\n**I would instead recommend following the definition used in Singhla and Feizi, 2022, as it seems to often be cited by others in the authors' literature, as they seem to work to precisely codify the concept, even for their MTurkers.**\\n\\nWe thank the reviewer for this comment. We would like to clearly note that our settings cover **both Vision and Language**, so we cannot directly use the definition from Singhla and Feizi, 2022, because their definition is valid only for Vision settings (directly defined on visual features and objects). Also, please note that Singhla and Feizi, 2022 state that core attributes are the set of visual features that are **always** a part of the object definition. Additionally, in Liu et. al. 2021, there are no core features that are backgrounds. The only background features they consider are present in the Waterbirds task (Land/Water background) and these are spurious features, not core features. The core feature associated with each class is present in all samples of that class (e.g., all samples belonging to the Landbirds class contain landbirds (core feature) in them. Importantly, not all of the Landbird samples contain land backgrounds (spurious feature associated with that class). Some samples contain water backgrounds. (Section B (B.1) in the appendix). 
However, we are happy to alter the definitions to incorporate your comments as shown below:\\n\\n>Core (or invariant) features represent the class label $y_i$ and are semantically relevant to the task. They are also fully predictive of the task, as they are present in all samples. Spurious features do not represent the class labels and are semantically irrelevant to the task.\\n\\nWe simply use the definitions provided by Izmailov et. al. 2022 NeurIPS and Kirichenko et. al. 2023 ICLR. We make these changes to the revised text.\\n\\n\\\\\\nIzmailov et. al. 2022 NeurIPS, On Feature Learning in the Presence of Spurious Correlations\\n\\nKirichenko et. al. 2023 ICLR, Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations\\n\\n\\\\\\n\\\\\\nWhile we soften our claims below, we would like to point out that our empirical analysis covers 5 diverse datasets and different architectures. Additionally, we emphasize that the papers the reviewer references (Liu et al. (2021), Kirichenko et. al. (2023), Moayeri et al. (2023)) propose solutions and insights with empirical evidence alone. However, subsequent works then used their insights to create theories and techniques (e.g. Ye et. al. 2023 AISTATS). Not only does our work make use of the benchmarks mentioned in these works, but we also propose new settings where these techniques are not suitable. We believe pushing the boundary on what is currently known is a good step toward covering all spurious correlations in the future or creating a better understanding of the behavior of deep networks. For instance, the papers cited above do not cover all spurious correlations, as we show that their methods are not suitable in the proposed settings. However, the insights and contributions of these papers were critical in moving the field forward. While we do not prove our claims theoretically, we strongly believe that the insights and contributions of our paper have their own novel value and are important for future research.\\n\\nThat being said, we are happy to soften the claims by making changes to the contributions in the following manner.\", \"changes_in_contributions_listed_in_the_paper\": \">Contribution 2: We discover that spurious correlations are formed primarily due to a handful of all the samples containing spurious features **through extensive empirical investigation**. Based on this insight, we propose a simple and novel data pruning technique that identifies and prunes a small subset of the data that contains these samples.\", \"changes_in_contributions_in_global_response_to_the_review_team\": \">Contribution 2: Discovering that spurious correlations are primarily formed from a handful of all samples containing the spurious features **through extensive empirical investigation**.\\n\\n>Contribution 3: Proposing a novel data pruning solution that severs spurious correlations in the **proposed** novel settings while attaining state-of-the-art performances on previously studied settings.\\n\\n\\\\\\n\\\\\\nYe et. al. 2023 AISTATS \\u201cFreeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise\\u201d, AISTATS, 2023.\\n\\n\\\\\\nWe are happy to address any further concerns the reviewer may have. Thank you for your time and effort in improving our paper and thank you for your enthusiasm towards our paper.\"}", "{\"title\": \"Response to Reviewer eCbW (Part 1)\", \"comment\": \"We thank the reviewer for their comments. 
We address their concerns below:\\n\\n**Weakness 1: The paper could explore more thoroughly the practicality of applying the proposed pruning method to large datasets. Specifically, even though the proposed method shows promise for datasets of moderate size, an assessment of its computational cost and efficiency on large and real-world data would be beneficial.**\\n\\n\\nWe thank the reviewer for their comment. We would like to emphasize that while efficiency was not the primary objective of our work, data pruning is an efficient solution compared to other techniques as one only does standard training on a pruned (thus, smaller) dataset. This is in contrast to other techniques that do sample upweighting or representational alignment, which significantly increases the training time over just standard training. Additionally, we emphasize that Hard ImageNet, Waterbirds, MultiNLI, and CelebA studied in our paper are real-world datasets with realistic/real-world spurious features, and we intentionally utilized CIFAR-10S to better understand how varying the strength of (synthetic) spurious features impacts generalizability and training distribution (Lines 219-222, Lines 301-317, Figure 5(a).) Regarding the matter of scale, ImageNet is the only dataset that contains samples on a scale that is different from CIFAR-10 but it is not studied in the context of spurious correlations. To the best of our knowledge, there is no dataset at that scale that is studied in the context of spurious correlations. \\n\\n**Weakness 2: The approach relies on assessing sample difficulty as a proxy for contribution to spurious correlations. Clarifying the robustness of this metric under different training regimes (e.g., varied architectures or optimization strategies) could strengthen the generalizability of the findings.**\\n\\n\\nWe show that the computation of these ranks and subsequent pruning are robust even across architectures and different optimization strategies/hyperparameters. Below, we present the worst Group Accuracy for the CelebA Setting (Figure 4) for different architectures and hyperparameters for the same Prune Percentage (%). 
We will include these results in the revised text:\", \"resnet18\": \"| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 26.33| 24.56| 23.63| 22.85| 22.43| 17.1| 26.33| 35.84|\\n|Pruning Hardest| 50.18| 73.67| 68.49| 74.31| 84.17| 89.35| 80.2| 83.32|\", \"vgg16\": \"| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n| Pruning Easiest| 33.45| 28.62| 28.34| 29.88| 36.39| 60.53| 71.73| 77.96|\\n|Pruning Hardest| 37.58| 54.09| 68.93| 71.59| 82.16| 85.86| 86.49| 85.23|\\n\\nLearning Rate = 0.01:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 18.98| 15.76| 12.11| 22.76| 18.63| 30.11| 14.36| 51.96|\\n|Pruning Hardest| 49.72| 48.67| 66.88| 74.44| 82.21| 78.29| 87.32| 85.99|\\n\\nLearning Rate = 0.0001:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 19.49| 16.68| 17.8| 15.55| 14.5| 18.23| 35.05| 51.37|\\n|Pruning Hardest| 45.74| 59.32| 68.9| 73.47| 79.38| 85.22| 87.61| 86.49|\\n\\n\\n\\nWeight Decay = 0.001:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 18.06| 15.64| 19.86| 16.26| 13.98| 25.05| 18.13| 22.98|\\n|Pruning Hardest| 42.63| 62.63| 70.31| 75.22| 82.7| 87.82| 88.3| 86.23|\\n\\nWeight Decay = 0.01:\\n| Prune % | 10%| 25%| 40%| 50%| 75%| 90%| 95%| 97%|\\n| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |\\n|Pruning Easiest| 12.74| 15.56| 13.75| 17.44| 14.54| 36.03| 21.64| 27.06|\\n|Pruning Hardest| 38.57| 53.11| 57.31| 73.81| 83.43| 90.3| 87.26| 87.7|\\n\\n**Weakness 3: Although the paper discusses state-of-the-art methods, further comparative analysis with recently emerging pruning and robust training techniques that do not rely on explicit spurious feature identification would be helpful.**\\n\\nTo the best of our knowledge, there do not exist any promising pruning or robust training techniques that do not rely on explicit spurious feature identifications. We will be happy to compare our approach with techniques based on the reviewer\\u2019s recommendation.\"}", "{\"title\": \"Response to Reviewer n9k8 (Part 3)\", \"comment\": \"We thank the reviewer for their thoughtful comment. We agree with their statement and have made the required changes to the revised text. We believe their comments and feedback have helped improve the quality of our work.\\n\\nWe would also like to thank them for encouraging another peer reviewer, Reviewer 7BQb, to reconsider their score considering our work\\u2019s value. We hope that this discussion will lead to fair assessment of our work. Thank you again.\"}", "{\"comment\": \"Dear Reviewer 7BQb,\\n\\nKindly let us know if you are satisfied with our answer to your last question.\\n\\nTo re-iterate, we only claim to **vary** two factors in our experiments: frequency and area. There are many factors that can impact the strength of a feature, as discussed in literature (Sagawa et. al. 2020 ICML, Shah et. al. 2020 NeurIPS, Moayeri et. al. 2022 NeurIPS) and mentioned in our paper (Lines 127-135). 
Note that we only present those factors that are extensively discussed in literature. Discovering and considering all factors that may impact the strength of a feature/signal is beyond the scope of this work.\\n\\nTo address generalizability concerns (as you have cited Reviewer X6Fn\\u2019s comment), we would like to emphasize that we consider most existing benchmarks in literature, propose new ones and present experiments across different architectures and domains (Vision and Language). We would also like to point out that we have satisfied Reviewer X6Fn\\u2019s generalizability concerns.\\n\\nIf the reviewer still has any concerns, we present below many key vision experiments in the paper in the language setting to further reinforce the robustness of our insights and observations (in addition to the MultiNLI results already present in the paper.)\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 4)\", \"comment\": \"We sincerely thank the reviewer for taking the time to go through the revised version of our paper. We address their follow-up concerns below:\\n\\n**(1) The \\\"strength\\\" of a feature can be understood as its magnitude like the strength of signal/noise. It seems the strength in this paper actually represent its frequency. Is it correct? If so, I would suggest to replace strength as frequency.**\\n\\nIn our paper, the strength of spurious signals is **varied** by two main factors: **frequency** (Sec. 3 & 6) and **area** (Sec. 4.) This is consistent with existing literature (Lines 128 - 136 in our paper). \\n\\nBelow, we categorize sections of our paper where strength is varied by frequency or area:\\n\\n**By Frequency:**\", \"section_3\": \"Observing increased drops in Worst Group Accuracy (Female Samples with the Spurious Feature) by increasing the frequency of Male Samples with the Spurious Feature.\", \"section_6\": \"In the synthetic CIFAR-10S setting, we vary the strength of the spurious features (between identifiable and unidentifiable settings) by varying the frequency.\\n\\n**By Area:**\", \"section_4\": \"In the synthetic CIFAR-10S setting, we vary the strength of the three spurious features (S1, S2, and S3) by varying the amount of area that they take up in the image. S1 takes up the least amount of area and causes the least amount of spurious misclassifications (less spurious feature reliance). S3 takes up the most amount of area and causes the most amount of spurious misclassifications (more spurious feature reliance). S2 is in between S1 and S3. We observe that introduction of S3 (largest area) causes the most number of spurious misclassifications while introduction of S1 (smallest area) causes the least number of spurious misclassifications (Figure 2). **Please note that we do not vary the frequency of samples containing the spurious feature in the three settings. In other words, the same number (= 100) of samples contain spurious features occupying different areas.**\\n\\nIt is important to note the two attributes (frequency and area) can be used interchangeably to alter the strength of the spurious signal. Below, we present the distribution results in Section 6 (Figure 5 (a)) for CIFAR-10S across three seeds by modifying both attributes: varying the proportion (or increasing the proportion/frequency of samples containing the spurious feature compared to the unidentifiable setting) and varying the area (or increasing the area occupied compared to the unidentifiable setting). 
\\n\\nIdentifiable Setting distribution by varying area **only** (Spurious Samples only):\\n\\n\\n| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n|53.2% | 24.26% | 16.13% | 6.39% |\\n\\nIdentifiable Setting distribution by varying proportion **only** (Spurious Samples only, already in the paper):\\n\\n\\n| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n| 49.93% | 38.68% | 8.78% | 2.6%|\\n\\nUnidentifiable Setting distribution (Spurious Samples only, already in the paper):\\n\\n\\n| Q1 (Easiest) | Q2 | Q3 | Q4 (Hardest) |\\n| ------- | ------- | ------- | ------- | \\n| 30.93% | 22.26% | 23.73% | 23.06% |\\n\\nIn the two identifiable settings (varying the area vs. varying the frequency), the distributions of spurious samples are similar, where Q1 contains most of the samples with spurious features while Q4 contains very few samples with spurious samples.\\n\\n\\nWhile in our experiments, these are the only two factors that we vary, most benchmarks studied in spurious correlations literature exhibit a combination of the factors mentioned in our paper (Lines 128 - 136 in our paper). Consider the popular Waterbirds dataset, where spurious features occupy a lot of area: Water and Land backgrounds in the image. It is unlikely that the network will form spurious correlations if the background only spans a few pixels and contains a lot of noise, even if one were to maintain the frequency of samples containing the spurious features. This claim is supported by our experiments in Section 4, where if the spurious feature covers less area (S1), the network has minimal reliance on spurious features even though the same number of samples contain the spurious features across the three settings: S1, S2, and S3.\"}", "{\"comment\": \"Dear Reviewer 7BQb,\\n\\nWe hope our latest responses have helped address your concern. Kindly let us know if your concern still remains.\\n\\nThank you.\"}", "{\"title\": \"Thank You\", \"comment\": \"I thank the authors for their continued willingness and thoroughness to address the questions I've raised. I hope they feel like it has improved their work. While the most important discoveries do seem to replicate, the fact that not all do, I think gives the literature (or the authors themselves) additional fodder for digging deeper and understanding what is truly generally true, or at least under what conditions. I myself have curiosities (I feel deeply invested in understanding this question now) but I think the results from this additional analysis with the text data convince me (as much as is possible from purely empirical experiments) that the authors have truly discovered some valuable generalizable truths. I again thank the authors, congratulate them on a truly awesome paper, and will raise my score accordingly!\"}", "{\"comment\": \"**Definintions**\\nI appreciate the authors updated definitions. The points I attempted to express were mixed together, and I incorrectly cited Liu et. al. 2021 as support. I was trying to explain that it seems to me that *all* core features may not be required to be present in *all* samples of a given class---e.g., there could be particular traits that birds of class1 can have that no birds of class2 can have---and *some* background features could be present in *all* samples all samples---e.g., for a given sample, perhaps all the birds of class 1 just so happen to have the same background. 
This, though, is a philosophical argument about what it means truly be a core and background feature, so I think it makes sense for the authors to adopt their particular definition, and given it's reasonable (including the fact it is consistent with others in the literature) we operate from there.\\n\\n**Contexts Covered and Claims** \\nI appreciate the authors softening their claims, and I want to be clear that I completely agree with the author's response about the novelty and value of their work despite not providing theory, and that empirical work can/does spur new theory. To make the point salient I feel confident there exist theoretical and methodological tools that can be extended to formalize the solution authors have discovered, if it is in fact generally true, given the relationship of their result on data pruning to those in other settings I am very familiar with. The non-trivial work I still see is the proof of the generality of their observations, and under what conditions.\\n\\nI know the authors look at 5 datasets, and there are many things these datasets have in common that reasonably may mediate (or perhaps even define) the presence of their insights; I gave two as examples in my previous response: these are all image datasets (note this also separately constrains the architectures) and they seem to be easy prediction challenges. However, I was unaware the work covered text as well, I did a quick review of the paper again and unless I am missing the reference, they do not seem to make this clear. If I did not miss the reference, I would then recommend explicitly making this clear. I imagine other readers may not recognize this given the motivation and examples are all image-related. I am unsure if the authors already have resulted in textual data sets or if doing so would be simple. But if the authors can demonstrate the same set of discoveries with textual datasets as well, that would go a **long** way in making a case for the generality of the result. As a result, I would be very happy to raise my score (considerably), especially in light of how negative some of the other reviews appear to be, for reasons I don't fully understand.\"}", "{\"title\": \"Implied Claims of Generalizability\", \"comment\": \"**General Critique**\\nAt the core of all my comments is a concern of generalizability, i.e., to what other contexts the observations and subsequent conclusions the authors make can be applied. As discussed above, the idea of a spurious correlation has a broader meaning beyond the context the authors are considering, I feel this is simply a matter of intention and terminology. Therefore, other than the above recommendations, I'm happy to limit ourselves to the class of spurious correlations from the authors' literature, and their intended scope:\\n\\n> \\\"the scope of our paper is to provide empirical evidence regarding the gaps in the literature, novel insights regarding the behavior of deep neural networks in the presence of spurious correlations, and the effectiveness of our proposed solution.\\\"\\n\\nI completely understand this scope, but I do not understand how this scope and the empirical work the authors have done, support the stated contributions\\n\\n> **Contributions:**\\n> 1. Identifying and targeting novel settings where obtaining spurious information is difficult/impossible and showing the failure of past techniques in these settings.\\n> 2. 
Discovering that spurious correlations are primarily formed from a handful of all samples containing spurious features.\\n> 3. Proposing a novel data pruning solution that severs spurious correlations in all novel settings while attaining state-of-the-art performances on previously studied settings.\\n\\nThis scope can certainly (and seems to successfully) support Contribution 1 and consider a particular context of spurious correlation where other techniques have failed in the past. This is possible because to demonstrate failure, one simply needs to show examples of said failure. However, Contribution 2 and the first half of Contribution 3 are general claims about how spurious correlations manifest and subsequently the general efficacy of their proposed pruning solution. The space of ways deep learning models are being used in practice is massive, while the authors consider 5 image datasets where they either generate a particular type of feature to serve as a spurious correlation (e.g., line across the image) or rely on examples of previously studied, identified, or labeled spurious correlations. I completely understand the authors' choice to do so, and is in fact what I would likely do as well. I also understand that their literature has only provided this small set of labeled/benchmark data, so I agree they can claim to have obtained SOTA performance on previously studied settings, as this is something they can again demonstrate empirically. However, my challenge is that the paper (and the authors' responses to the review team) are written as if their observations about the spurious features they studied, the behavior of the spurious correlations induced by the features, and the response to the spurious correlation to their data pruning in their specific experiments are generally true of all spurious correlation learned from a \\\"spurious\\\" feature a deep learning model would use instead of the \\\"core\\\" feature. \\n\\nTo make this point salient, it would need to be true that all these empirical behaviors present in their simulations are indicative of the set of all such possible spurious correlations learnable by DNNs. This is why I asked about theoretical justifications, the formality of the pruning procedure as a statistical test, etc. because these could then allow us to analyze the (probabilistic) conditions of the data and the spurious feature such that authors reported contributions 2 and 3, enabling (at least to some degree) is to understand the generalizability of the observations. Again, the authors' response, that their scope is to provide empirical evidence, is well taken, but then we don't know what behaviors are generalizable or to what degree. Will everything hold if I'm considering DNN with the potential to learn spurious correlation in other contexts: tabular data, textual data, or speech data; I could see arguments for and against the result porting over. Or given that the image datasets the authors use generally have high test accuracy, does it all hold when the underlying prediction task is more difficult? To be clear, I do not want to encourage (or even imply) the authors should engage in a \\\"wac-a-mole\\\" exercise, given the limited space they have to begin with, because these are just a few quick examples. Instead, I hope that this helps elucidate the importance of the point: while I like the result and I think it's likely not limited to *just* the precise simulations the authors consider, I don't know how general it is. 
Moreover, I feel the authors are making strong positive assertions about the behaviors of DNN models with spurious correlation and the power of their data pruning solution, well beyond what they can prove. And frankly, I don't think they need to because the observations are exciting on their own!\"}", "{\"title\": \"Discussion Phase\", \"comment\": \"Dear Reviewers,\\n\\nPlease review the authors' replies and consider the feedback from other reviewers. If your concerns remain unresolved, feel free to ask for further clarifications. We have until November 26th for discussion.\\n\\nThank you for your efforts.\\n\\nBest regards,\\nArea Chair\"}", "{\"comment\": \"Dear Reviewer 7BQb,\\n\\nKindly let us know if we have resolved the 4 follow-up concerns. If not, we will be happy to address any remaining concerns. Thank you.\"}", "{\"title\": \"Response to Reviewer 7BQb (Part 6)\", \"comment\": \"**(3) Can authors clarify the rationale behind \\\"the presence of strong spurious information enables the network to understand samples with hard core features better\\\"?**\\n\\n\\nBy the statement highlighted by the reviewer, we mean that samples with hard core features incur a low training error as the network relies on strong spurious features to reduce training error for that sample. Thanks to your comment, we think this sentence might create future confusion and thus, rephrase it in the revised text.\\n\\u201cThe presence of strong spurious information enables the network to have lower training error for samples with hard core features + spurious features.\\u201d\\n\\nTo support the statement, we kindly refer the reviewer to Figure 5, where in the presence of strong spurious information (identifiable settings), most samples with spurious features have low training error. Additionally, to further reinforce the validity of this claim, we ran the following experiment:\\n\\nWe perform the same experiment in Section 4, where we first compute the training error of all samples when trained on clean CIFAR-10. We then introduce the spurious feature S3 into 100 samples with the highest training error, which gives us the results in Figure 2 (c) and (d). We additionally compare the average training error of the **same** 100 samples with and without the spurious features across three seeds:\", \"without_spurious_feature_s3\": \"1.30\", \"with_spurious_feature_s3\": \"0.68\\n\\nWe observe that by simply introducing strong spurious features, samples with hard core features (those that incur a high training error initially) no longer have a high training error.\\n\\n\\\\\\n\\\\\\n(Sagawa et. al. 2020) \\u201cAn Investigation of Why Overparameterization Exacerbates Spurious Correlations,\\u201d ICML, 2020.\\n\\n(Sagawa et. al. 2019 ICLR) \\u201cDistributionally Robust Neural Networks For Group Shifts: On the Importance of Regularization for Worst-Case Generalization,\\u201d ICLR, 2019.\\n\\n(Kirichenko et. al. 2023) \\u201cLast Layer Re-training is Sufficient for Robustness to Spurious Correlations,\\u201d ICLR, 2023.\\n\\n(Sohoni et. al. 2020) \\u201cNo subclass left behind: Fine-grained robustness in coarse-grained classification problems,\\u201d NeurIPS 2020.\\n\\n(Liu et. al. ICML, 2021) \\u201cJust Train Twice: Improving Group Robustness without Training Group Information,\\u201d ICML, 2021.\\n\\n(Zhang et. al. ICML, 2022) \\u201cCorrect-n-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations,\\u201d ICML, 2022.\\n\\n(Ahmed et. al. 
ICLR 2021) \\u201cSystematic generalisation with group invariant predictions,\\u201d ICLR 2021.\\n\\n(Creager et. al. 2021) \\u201cEnvironment inference for invariant learning,\\u201d ICML 2021.\\n\\n(Yang 2024) \\u201cIdentifying spurious biases early in training through the lens of simplicity bias,\\u201d AISTATS 2024.\\n\\n(Deng et. al. 2023) \\u201cRobust learning with progressive data expansion against spurious correlation,\\u201d NeurIPS 2023.\\n\\n\\\\\\n\\\\\\n**(4) Suppose a sample has a hard invariant feature and an easy spurious feature, should it be easy or hard to learn (small or large training error)? My understanding is that the sample diffculty is estimated by the training error, in this case, it is unclear how to identify \\\"spurious samples containing the hardest core features\\\".**\\n\\n\\nWhen a sample has a hard invariant feature and an easy spurious feature, it becomes easier to learn (low training error) than when it only has the invariant feature. In identifiable settings (which we understand is the setting you are referring to), samples with hard core features + spurious features become easier to learn than samples without spurious features. This is why **we cannot simply prune the hardest samples in the identifiable settings** (Lines 456 - 457).\\n\\nIn identifiable settings, however, it is very easy to identify which samples contain spurious features. In other words, it is very easy to identify group labels, by clustering for instance, as explained in question 2. In such settings, since we are aware of which samples contain spurious features, we prune those samples with spurious features that have a higher training error. These would include samples containing easy spurious features + hard core features versus those that contain samples containing easy spurious features + easy core features.\\n\\nWe would like to re-iterate that finding group labels in identifiable settings is trivial and we follow other seminal works that directly use group labels to mitigate spurious correlations. Note that we also compare our results with techniques that directly use group labels to mitigate spurious correlations.\\n\\n\\n\\\\\\n\\\\\\nWe will be more than happy to address any further concerns that the reviewer might have. Thank you for taking the time and effort to go through the revised version of our paper and suggesting thoughtful comments to improve the quality of the paper.\"}", "{\"title\": \"Meaning of spurious correlations and Causality\", \"comment\": \"While I still very much like this work, I would like first to clarify my questions about the \\\"representations of all spurious correlations\\\" and then respectfully push back on what the authors argue they have proven with the work.\\n\\n**Question 1: Is the particular type of spurious correlation the authors consider representative of all spurious correlations?** \\nI thank the authors for the clarification, though I do want to be clear about the nature of my question and make a request, given their updated paper. For context, I have carefully read a handful of papers in the literature that the authors cite--i.e., Shah et al. 2020, Sagawa et al. (2020a), Kirichenko et al. (2022), Moayeri et al. (2023), Liu et al. (2021), Zhang et al. (2022), and accept they are being consistent with this literature, focused specifically on deep learning (image) models. 
The source of my original question is that generally speaking spurious correlations do not simply exist in this literature, and can occur by different mechanisms, as a simple example is in the context of linear regression, a single outlier can cause the estimated regression slope to be significantly different from zero. Moreover, [spurious correlations](https://en.wikipedia.org/wiki/Spurious_relationship) are generally considered relationships learned in data between $X$ and $Y, when $Y has no actual *causal* impact on $Y$. Causality is a very precise, well-defined, and well-studied concept across various disciplines, and it does not appear that the authors are truly attempting to operate in the context of causality. While I would not hold the authors responsible for the use of spurious correlations outside of the context of causality, I would like to recommend they amend the causality references they adopted (I believe in response to Reviwers 7BQb concerns): e.g., \\n\\n> \\\"Core (or invariant) features are causal to the class label $y_i$ and are\\n> fully predictive of the task, as they are present in all samples.\\\"\\n\\nI would instead recommend following the definition used in [Singhla and Feizi, 2022](https://arxiv.org/abs/2110.04301), as it seems to often be cited by others in the authors' literature, as they seem to work to precisely codify the concept, even for their MTurkers. Instead, for example, I believe the \\\"present in all samples\\\" component of the authors' criteria is not a requirement of core features (Liu et al. (2021) discuss background features that are not present in all samples) and seems like it could be true of a \\\"spurious\\\" feature for a given dataset.\"}" ] }
Bjq4W7P2Us
Understanding and Mitigating Hallucination in Large Vision-Language Models via Modular Attribution and Intervention
[ "Tianyun Yang", "Ziniu Li", "Juan Cao", "Chang Xu" ]
Large Vision-Language Models (LVLMs) exhibit impressive capabilities in complex visual tasks but are prone to hallucination, especially in open-ended generation tasks. This paper explores why LVLMs tend to hallucinate and how to mitigate it. First, we conduct causal mediation analysis through counterfactual edits on specific modules in LVLMs. Our results disclose that Multi-Head Attention (MHA) modules contribute more to the probability of generating hallucination words than multi-layer perceptron modules. We then identify specific heads that are responsible for hallucination, referred to as hallucination heads. Second, we examine the behavior of hallucination heads. We find that they are concentrated in the middle and deeper layers, displaying a strong attention bias toward text tokens. Further, we show that the attention patterns of certain hallucination heads exhibit greater similarity to the base language model and change slowly during the instruction tuning process. Finally, we propose two simple yet effective methods to mitigate hallucination: one is training-free and can be applied directly during decoding, while the other involves fine-tuning. Both methods are targeted at hallucination heads to reduce their reliance on text tokens. Notably, our methods achieve up to 1.7x reduction in hallucination rate for the LLaVA-v1.5-7B model in the COCO captioning task, outperforming existing baselines. Overall, our findings suggest that hallucinations in LVLMs are likely to stem from certain modules, and targeted interventions can effectively mitigate these issues.
[ "Large Vision-Language Models", "Hallucination", "Interpretability" ]
Accept (Poster)
https://openreview.net/pdf?id=Bjq4W7P2Us
https://openreview.net/forum?id=Bjq4W7P2Us
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yOT6OwbWwO", "wBqlwjxZSe", "twKYmc6hhU", "tZ1oyGKmjn", "rKVPy5PxY1", "qvamt2HcLl", "lDGlYmNBqy", "jUw0GzWmmr", "QTO2eQCtmd", "Q8ksrcJWHR", "Q2YMsS5XiS", "OmNOCYwUvn", "LtW7E2CXLe", "L0ThCqZW7r", "JUsR9A8oEm", "JTjRkawofu", "IcCcBgbSmg", "GI9IPmkoF2", "FnXyKXMHGl", "FCJPvz61th", "Etsu2QCf5J", "Eea2sRzegh", "BDJY6SVCgc", "B3lZDNvXAD", "AimcO8WjdD", "A7cEW0N8A8", "1zol9TanLN", "1RLTwoDWCM", "0HrGIGFpqO" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732452265039, 1734931611989, 1732353844520, 1730755372732, 1732550883834, 1732619574938, 1732619473127, 1732369381523, 1732353323378, 1732619346503, 1732349586822, 1732353729492, 1732347910984, 1732544179060, 1730720126008, 1732549864909, 1732544319129, 1732556033683, 1732620326167, 1732353457720, 1737523384225, 1732403665008, 1732403909853, 1732354193765, 1732347873745, 1732349044179, 1730697894261, 1732348942203, 1729223374524 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission196/Reviewer_pX1o" ], [ "ICLR.cc/2025/Conference/Submission196/Area_Chair_Zn6P" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_pX1o" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_Ex6k" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_xVEm" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_xVEm" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_JmFc" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_Ex6k" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_Ex6k" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_Ex6k" ], [ "ICLR.cc/2025/Conference/Submission196/Authors" ], [ "ICLR.cc/2025/Conference/Submission196/Reviewer_xVEm" ] ], "structured_content_str": [ "{\"comment\": \"Thank the authors for the reply!\\n\\nRegarding Response 3, thank you for providing the qualitative results here. However, why and what is the intuition that the proposed method can be used for couting? 
For example, if an image has 2 dogs and the ground truth caption is \\\"the image has two dogs\\\" and the model's prediction is \\\"the image has three dogs\\\", the score in the equation will be 0 because there are no hallucinated objects based on the descriptions in lines 174 - 176.\"}", "{\"metareview\": \"This paper investigates hallucination in Large Vision-Language Models (LVLMs) through the lens of causal attribution and intervention. The work identifies specific \\\"hallucination heads\\\" within the multi-head attention mechanism and proposes two methods to mitigate hallucination: a training-free adaptive deactivation approach and a targeted fine-tuning strategy. The paper has solid experiments, and provides a systematic analysis for hallucination in VLMs. The paper also had good responses to questions presented by reviewers. For these reasons I vote to accept this paper!\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, there was extensive discussion between reviewers and authors that substantially strengthened the paper. The initial reviews raised several key concerns: Reviewer pX1o (Score: 6) focused on technical clarifications regarding metrics, implementation details, and methodology, particularly around the CHAIR evaluation and attention weight scaling. Reviewer JmFc (Score: 6) highlighted missing citations, limited evaluation metrics, and the need for deeper analysis of the fine-tuning approach. Reviewer Ex6k (Score: 8) questioned the impact on language generation capabilities and the focus on object hallucination, while Reviewer xVEm (Score: 8) raised concerns about model coverage and technical aspects of hallucination head identification. The authors responded comprehensively to these concerns with substantial additions: they extended their experiments to larger models including LLaVA-34B and Chameleon-30B, conducted human evaluations for generation quality, added comparisons with recent baselines, and provided detailed technical clarifications about their methodology. The authors' thorough response effectively addressed all major concerns while significantly enhancing the paper's contributions through additional experiments and analyses.\"}", "{\"title\": \"Response (Part II)\", \"comment\": \"**Question 5**: Has it analyzed the difference between the attention patterns of hallucination heads and regular attention heads across system tokens, image tokens, prompt tokens, and output tokens? Is there any distinction, or can differences only be observed through output tokens?\\n\\n**Response 5**: Thank you for your question. We believe you are referring to Figure 4, which presents the attention patterns of hallucination and non-hallucination heads, albeit limited to output tokens. To address your concern, we have included an additional analysis in Figure 13 in the Appendix, which examines attention patterns across the entire sequence, encompassing system tokens, image tokens, prompt tokens, and output tokens. Please let us know if we have misunderstood your question.\\n\\nSpecifically, Figure 13 in the Appendix highlights that hallucination heads allocate significantly more attention weight to text tokens, often neglecting visual tokens. In contrast, non-hallucination heads distribute attention more evenly, with a greater focus on visual tokens. 
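To make this per-head diagnostic easy to reproduce, a minimal sketch is given below. It only assumes access to the post-softmax attention weights of the current decoding step and the index range of the image tokens in the input sequence; the function name, the `image_span` argument, and the dummy shapes in the usage example are illustrative assumptions and are not taken from the paper's released code.

```python
import torch

def text_vs_image_attention_mass(attn_weights, image_span):
    """Summarize where each head looks at the current decoding step.

    attn_weights: [n_heads, seq_len] post-softmax attention of the token being
        generated over the full input sequence (system + image + prompt + output).
    image_span: (start, end) indices of the visual tokens in that sequence.
    Returns per-head attention mass on image tokens, on all remaining (text)
    tokens, and their text/image ratio (ratio >> 1 indicates a text-biased head).
    """
    start, end = image_span
    image_mass = attn_weights[:, start:end].sum(dim=-1)   # [n_heads]
    text_mass = attn_weights.sum(dim=-1) - image_mass     # everything else
    ratio = text_mass / image_mass.clamp_min(1e-6)
    return image_mass, text_mass, ratio

# Usage example with random weights, purely to show the expected shapes.
dummy = torch.softmax(torch.randn(32, 700), dim=-1)       # 32 heads, 700-token sequence
img_mass, txt_mass, ratio = text_vs_image_attention_mass(dummy, (35, 611))
```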
These findings are consistent with the observations in Figure 4 and further support the conclusion that hallucination and non-hallucination heads exhibit distinct attention patterns when processing input sequences.\\n\\n**Question 6**: What about the distribution of attention when judging LLMs hallucinations? LLMs unable to generate a caption to an image, right? Therefore, I don't think this kind of attention head observation at the decoder terminal is very reasonable.\\n\\n**Response 6**: Thank you for your question. If we understand correctly, you are drawing a connection between the hallucination heads identified in LVLMs and their behaviors in LLMs, and asking whether our conclusions extend to LLMs. Please let us know if we\\u2019ve misunderstood your intent.\\n\\nTo clarify, hallucinations in LVLMs are fundamentally different from those in LLMs, and our conclusions do not directly apply to hallucinations in LLMs. Specifically, LVLMs generate responses by integrating visual and textual information. The hallucinations we observed are caused by an imbalance in attention between image and text tokens in certain hallucination heads. This imbalance likely arises because LVLMs are fine-tuned from LLMs, where language bias tends to dominate during question answering, causing the model to undervalue visual input. Thus, this issue is inherently tied to modality competition between vision and language in LVLMs.\\n\\nIn contrast, hallucinations in LLMs occur in a single-modal setting and are primarily related to factors within in the language model itself, which requires separate investigation. Our findings are specific to multimodal models and do not generalize to LLMs in isolation.\\n\\n*** \\nWe sincerely thank you for your thoughtful review. We hope the revised manuscript and the clarifications provided above have addressed your concerns effectively. If our responses have satisfactorily resolved your concerns, we would greatly appreciate it if you could consider updating your review score. However, if you have any remaining concerns or require further clarification, please do not hesitate to let us know. We are more than willing to provide additional explanations or updates as needed.\"}", "{\"summary\": \"The paper observes that some attention heads cause more changes after deactivating them, and name them hallucination heads. There are several characteristics of these heads: (1) they attend more on the images and (2) their parameters move slower than others. Then, the paper proposes one training-free and one training approach to reduce the attention of the hallucination heads on text parts. The experiments on several benchmarks show the benefits of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The observation and analysis on the hallucination heads are interesting and well-motivated.\\n\\n2. The proposed solutions don't require expensive decoding time.\\n\\n3. The results on several benchmarks look promising.\", \"weaknesses\": \"1. In Section 4.3, the metrics (CHAIR) and the dataset that is evaluated should be explained.\\n\\n2. In Tables 1 and 2, it would be better if there is an additional column indicating which method is training-free and which is not.\\n\\n3. The rest please refer to Questions.\", \"questions\": \"1. How this method can extend beyond object-level hallucination? 
For example, do equation (1) and (2) only tell if the existence of the objects but cannot capture if the model incorrectly counts the objects? Then the proposed method can not detect the heads which make hallucination on counting.\\n\\n2. How many samples is required to compute equation (1)? And what data split do they come from?\\n\\n3. What's the time required to compute equation (2) to get the scores for all heads?\\n\\n4. Why the Algorithm 1 directly deactivated the whole text attention weights for the hallucination heads? Isn't it too aggressive as we can see the BLEU scores drop by a lot in Figure 6 (c)? Moreover, why the accuracy on general tasks somehow improves when you apply Algorithm 1? I thought the performance should drop based on the observation in Figure 6 (c).\\n\\n5. What is the difference between downscaling weight on text attention and upscaling the weight on image attention (Figure 6 (a) and (b))? I thought they meant the same thing as the softmax in attention would keep the sum of text attention and image attention to be one.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you very much for the additional results. I realize that this must've been difficult to complete so quickly. I've updated my score.\", \"some_further_notes\": \"-It would be interesting to also present the number and ratio to salient hallucination heads in Chameleon at multiple scales (8b, 30b..) to confirm that the pattern is similar to LLaVa.\\n-It would also be interesting to test all of these models on MME to verify how universal the results are and whether the same ratios/patterns seen in CHAIR would replicate with the different benchmark.\\n\\nOver all, I think this is a good paper with a very insightful analysis and I'm thankful to the authors for presenting it and for being so proactive throughout the review process.\"}", "{\"title\": \"Response for author\", \"comment\": \"The author solved my question, the discussion was very beneficial, and I will raise my score.\"}", "{\"title\": \"Response for Reviewer xVEm (Part II)\", \"comment\": \"**Question 4**: Can it be shown that TF-HH works for different projector models like Instruct-Blip2 and Qwen-VL? I'd be happy to raise points if the authors can address these queries of mine above.\\n\\n**Response 4**: Sure. We have successfully applied our method, TF-HH, to the Qwen-VL model. By specifically fine-tuning hallucination heads, we successfully reduced the hallucination rate by 4.4% on CHAIR_S and 2.2% on CHAIR_I, while maintaining a comparable BLEU score. This result demonstrates the effectiveness and general applicability of TF-HH to other models. \\n| Qwen-VL | CHAIR_S | CHAIR_I | BLEU |\\n|------------------|---------|---------|------|\\n| Greedy | 38.6 | 12.1 | 19.5 |\\n| TF-HH (Ours) | **34.2** | **9.9** | **20.3** |\", \"details\": \"We use 1000 samples from COCO to identify hallucination heads and apply TF-HH to fine-tune the top 20 hallucination heads. Our training setup is similar to that used for fine-tuning LLaVA-7B, with the exception that we do not have access to Qwen-VL's training data. Therefore, we use LLaVA\\u2019s training data and convert it into the format required for Qwen-VL. 
We apply the same learning rate and train for 200 iterations.\\n\\n*** \\nWe hope our responses have sufficiently addressed your questions, and we have updated our manuscript to clarify sections that may have caused confusion. Thank you for your valuable feedback!\"}", "{\"title\": \"Some question remain unsolved\", \"comment\": \"Thanks for the author's reply, but I still have some questions that have not been resolved yet.Q1.Figure 13 defines the left image as an hallucination head, is this scientific?Q2.After visualizing LLaVA through the attention map, it will be found that there are some attention distributions of the mid/deep layers that are similar to the left one, which puzzles me to know how to define these Q3.Are sparsely and unevenly distributed heads unhelpful for inference? Although this is hard to explain for black-box models.Q4.Can it be shown that TF-HH works for different projector models like Instruct-Blip2 and Qwen-VL?\\nI'd be happy to raise points if the authors can address these queries of mine above.\"}", "{\"title\": \"Response (Part II)\", \"comment\": \"**Question 5**: Can you provide more details on why certain hallucination heads exhibit slow changes during instruction tuning? What factors contribute to this \\\"laziness,\\\" and how might future work address this issue?\\n\\n**Response 5**: Thank you for the thoughtful questions. We address them separately below.\\n\\n1. Why do certain hallucination heads exhibit slow changes during instruction tuning?\\n\\nOur analysis of gradient norms for these hallucination heads revealed that their gradients are consistently smaller compared to non-hallucination heads throughout training. For instance, at iteration 0, the gradient norm for hallucination heads was 1.3 times smaller than that of non-hallucination heads. This trend persisted across multiple iterations (e.g., 500, 1000, 2000, 3000, 4000, and 5000), explaining their slower updates during training. \\n\\nWe hypothesize that this \\\"laziness\\\" arises from over-optimization during pre-training and instruction tuning of the base language model. Over time, certain attention heads may become less responsive to new inputs, such as multi-modal data in LVLMs. Ideally, landscape analysis could provide insight into the sharpness and plasticity of these attention heads. However, existing visualization techniques are not yet scalable to 7B models, limiting this approach.\\n\\n2. How might future work address this issue?\\n\\nAddressing this challenge requires careful fine-tuning, particularly in low-data scenarios where models tend to inherit shortcut patterns from their base language models. These inherited biases can amplify hallucinations in open-ended tasks, making it crucial to adapt LVLMs to specific tasks and datasets effectively.\", \"we_highlight_a_few_future_directions\": \"* From the perspective of training algorithms, future research could explore targeted mitigation strategies inspired by continual learning literature (see e.g., [1]). \\n* From the representation perspective, techniques such as model expansion (see e.g., [2,3]), which introduce more flexible components, could enhance adaptability. \\n* From the data perspective, additional efforts to curate and augment datasets could further improve performance and robustness.\\n\\n[1] Dohare, Shibhansh, et al. \\\"Loss of plasticity in deep continual learning.\\\" Nature 632.8026 (2024): 768-774.\\n\\n[2] Yoon, Jaehong, et al. 
\\\"Lifelong learning with dynamically expandable networks.\\\" ICLR 2018.\\n\\n[3] Anagnostidis, Sotiris, et al. \\\"Navigating Scaling Laws: Compute Optimality in Adaptive Model Training.\\\" ICML 2024.\\n\\n**Question 6**: Have you evaluated how the proposed interventions affect the overall language generation quality of the models? Specifically, does reducing reliance on text tokens in hallucination heads impact the fluency, coherence, or descriptiveness of the generated captions? It would be helpful to see metrics (human eval) or analyses addressing potential trade-offs.\\n\\n**Response 6**: Thank you for the suggestion. To evaluate the impact of our interventions, we conducted a manual assessment involving two Ph.D. students and one undergraduate student. They evaluated the responses using a 1 to 5 scoring system based on two criteria: (1) Non-hallucination performance, where higher scores reflect fewer hallucinations, and (2) Generation quality, where higher scores indicate more fluent and coherent outputs. \\n\\n| Method | Non-Hallucination Score | Generation Quality Score |\\n|-------------------|-------------------------|---------------------------|\\n| Greedy Decoding | 3.25 | 3.99 |\\n| AD-HH (Ours) | **3.87** | 3.85 |\\n| FT-HH (Ours) | 3.78 | **4.01** |\\n\\nFor both the baseline and our proposed methods, the evaluators assessed a total of 500 generated responses per method, resulting in 1500 responses overall. These results demonstrate that our methods effectively mitigate hallucination while maintaining high generation quality.\"}", "{\"title\": \"Response for Reviewer xVEm (Part I)\", \"comment\": \"Thank you for sharing your valuable feedback! We\\u2019re glad to address your concerns.\\n\\n**Question 1**: Figure 13 defines the left image as an hallucination head, is this scientific?\\n\\n**Response 1**: We believe there may be a potential misunderstanding, and we would like to clarify. You might assume that we define attention heads with attention maps similar to the left part of Figure 13 as hallucination heads. However, this is not the case. In our paper, we define hallucination heads as those most responsible for hallucination behaviors, identified through their contrastive influence scores, as described in Equation (2). This definition is based on counterfactual analysis [1], which we believe is scientifically sound.\\n\\nSubsequently, we analyze the behaviors of these identified hallucination heads and observe their significant over-reliance on text tokens in attention maps (e.g., the left part of Figure 13). It is important to emphasize that we do not arbitrarily classify heads with attention maps resembling the left part of Figure 13 as hallucination heads.\\n\\n[1] J. Pearl. Causality: Models, Reasoning, and Inference. 2000\\n\\n**Question 2**: After visualizing LLaVA through the attention map, I noticed that some attention distributions in the mid/deep layers resemble the one on the left, which leaves me puzzled about how to define these.\\n\\n**Response 2**: We understand your are worring about whether a particular attention head, which displays similarities to the left attention map in Figure 13 (e.g., over-reliance on text tokens), can be classified as a hallucination head. Our answer is no. Let us clarify:\\n- In our paper, hallucination heads are identified based on their causal effects on hallucination behavior, specifically through large contrastive influence values. 
By definition, these heads directly contribute to hallucinations.\\n- Empirically, we identified attention heads with largest contrastive influence values, especially the top 20 ones in LLaVA-7B to analyze their behaviors. We find that these hallucination heads consistently exhibit over-reliance on text token inputs. However, we emphasize that this over-reliance is a symptom of hallucination heads, not a criterion to identify them.\\n- We note that there may be other attention heads that also exhibit over-reliance on text tokens but do not influence hallucination behavior (e.g., layer 31, head 4 in LLaVA-7B). This occurs because some heads may function as a general-purpose language head, ensuring fluency and coherence in text generation. Although these heads look like the left attention map in Figure 13 and over-rely on text tokens, they are unrelated to hallucination behavior.\\n- To support the above claims, we have conducted a statistical analysis of all 1024 attention heads in LLaVA-7B, 412 attention heads (~40%) exhibit heavy reliance on text tokens (text attention/image attention>3). Of these 412 heads, 18 are identified as hallucination heads based on their causal contribution to hallucination behavior. In total, there are 20 hallucination heads, meaning most (90%) exhibit over-reliance on text tokens.\\n\\nIn summary, hallucination heads often show over-reliance on text tokens. However, not all heads that over-rely on text tokens are hallucination heads. They qualify as hallucination heads only when they are causally responsible for hallucination behavior, indicated by large contrastive influence values. Please let us know whether the response above can address your concerns and we are more than willing to clarify. \\n\\n**Question 3**: Are sparsely and unevenly distributed heads unhelpful for inference? Although this is hard to explain for black-box models.\\n\\n**Response 3**: We understand your concern about whether a particular attention head, which appears sparse and unevenly distributed (similar to the left attention map in Figure 13), is unhelpful for inference. As discussed in Response 2, the answer is no: there is no direct correlation or one-to-one mapping between sparse, uneven attention patterns and a head\\u2019s contribution to response generation.\\n\\nWhile hallucination heads often exhibit sparse and uneven attention distributions, but it does not lead to the conclusion that all heads with sparse and uneven distributions are unhelpful for inference. Sparse and uneven patterns may occur naturally in attention heads that contribute to general language generation purposes, such as ensuring coherence, fluency in generation.\\nTherefore, sparse and uneven attention distribution alone cannot be used as a single criterion to classify a head as unhelpful for inference.\"}", "{\"title\": \"Response (Part I)\", \"comment\": \"Thank you for reading our paper and providing valuable feedback. We greatly appreciate your comments and have addressed your concerns and questions below.\\n\\n**Comment 1**: There is limited discussion on how the proposed interventions affect the model's overall language generation capabilities. Potential trade-offs between reducing hallucination and maintaining fluency or coherence are not thoroughly examined.\\n\\n**Response 1**: Thank you for raising this concern. We acknowledge the potential trade-offs between reducing hallucination and maintaining the overall generation quality. 
This trade-off was observed when applying the initial naive method in our experiments, as discussed in Section 4.3.1 (refer to Figure 6). \\n\\nTo address this issue, we developed approaches that achieve a more balanced trade-off, such as adaptive deactivation of hallucination heads and targeted fine-tuning. Supplementary results in Table 4, Figures 14 and 15 provide a detailed analysis of these trade-offs, illustrating how varying hyperparameters in our methods impacts both hallucination reduction and generation quality. \\n\\n**Comment 2**: The paper is almost entirely focused on object hallucination.\\n\\n**Response 2**: Thank you for your comment. Object hallucination is a significant challenge for LVLMs in open-ended generation tasks, which is why we chose to focus on it in this study. Nevertheless, our method is not limited to object hallucination and can be extended to address other types of hallucination, such as counting and positional errors. For instance, on the MME benchmark, we observed a 5-point improvement in counting performance (from 148 to 153), a 10-point improvement in positional accuracy (from 128 to 138). Detailed results are reported below:\\n\\n| | existence | count | position | color | posters | celebrity | scene | landmark | artwork | OCR |\\n| ------------ | --------- | ------- | -------- | ----- | ------- | --------- | ------- | -------- | ------- | ------- |\\n| Baseline | 190 | 148 | 128 | 160 | 139 | 133 | 156 | 162 | 122 | 130 |\\n| AD-HH (Ours) | 190 | **153** | **138** | 160 | **142** | **135** | **158** | 162 | 118 | **138** |\\n\\n**Comment 3**: The experiments are conducted on 7B parameter models (LLaVA-7B and MiniGPT-4). Given the trend towards larger models in the field, it would be valuable to assess whether the identified hallucination heads and mitigation strategies are applicable to larger models (e.g., 70B parameters) and whether similar patterns emerge at different scales.\\n\\n**Response 3**: Thank you for this insightful comment. Due to limited computational resources, our initial experiments focused on 7B parameter models. We completely agree that investigating hallucination behaviors in larger models and studying scaling and emergence effects is interesting and important. To address your concern, we extended our experiments to larger models, including Llama-3.2-11B-Vision and LLaVA-v1.5-13B. Unfortunately, resource and time constraints prevented us from exploring 70B parameter models. Nevertheless, our findings indicate that the proposed method is effective on 13B-sized models. Comparative results for hallucination on the COCO dataset are provided below.\\n\\n| Method | Llama3.2-11B-Vision | | LLaVA-v1.5-13B | |\\n|---------------|----------------------|-------|----------------|-------|\\n| | CHAIR_S | CHAIR_I | CHAIR_S | CHAIR_I |\\n| Greedy | 28.4 | 7.4 | 48.6 | 12.4 |\\n| AD-HH (Ours) | **22.6** | **4.9** | **38.8** | **9.4** |\\n\\n**Comment 4**: There are minor issues with the writing and presentation that could be improved for clarity and professionalism. For example, phrases like \\\"Our Method Run Fast in Generation\\\" could be rephrased for better readability.\\n\\n**Response 4**: Thank you for your suggestion. We have revised the paper to enhance clarity.\"}", "{\"title\": \"Response (Part III)\", \"comment\": \"**Questions 7**: Do you think \\\"hallucination heads\\\" exist in the same way in larger scale (eg. 70b) VLMs? Would the same training-free method work similarly on them? 
Would be interesting to see.\\n\\n**Response 7**: Thank you for raising this thought-provoking question. We have extended our method on larger models including Llama-3.2-11B-Vision and LLaVA-v1.5-13B, which still works on them. It is indeed exciting to consider whether hallucination heads behave differently or even vanish in larger-scale models like 70B. Unfortunately, we currently lack the computational resources and time to study this topic. However, we can share some preliminary thoughts that future work may explore further. \\n\\nFirst, we note that large models differ from small models in both representation power and optimization dynamics. Intuitively, larger models are expected to possess greater representation capacity. Additionally, neural network learning theory, such as the Neural Tangent Kernel framework, suggests that larger models tend to change their parameters slowly at each training step. Building on this foundation, we propose two potential scenarios based on the extent of training:\\n\\n- Short-training regime: In this scenario, larger models may strongly inherit pre-existing language biases, and hallucination heads are likely to exist. This outcome is expected when the model is fine-tuned on small-scale dataset to adapt to specific downstream tasks, especially if the language model backbone has been extensively trained on text data alone.\\n- Long-training regime: With sufficient training, larger models can utilize their superior representation power to learn accurate patterns from extensive datasets\\u2014something smaller models often struggle to achieve. Hallucination heads are expected to disappear in this scenario.\\n\\nIn conclusion, we cannot provide a definitive answer at this time. The persistence or mitigation of hallucination heads likely depends on the scale of training efforts. This relationship is challenging to predict in advance but is an interesting area for future research. We hope our study can provide some insights and plan to study this topic in the future.\\n\\n---\\n\\nWe sincerely thank you for your valuable review comments. We hope the revised manuscript and the clarifications provided have effectively addressed your concerns. If our responses meet your expectations, we would greatly appreciate your consideration in updating your review score. Should you have any remaining questions or require further clarification, please do not hesitate to reach out. We are more than happy to provide additional explanations or updates as needed.\"}", "{\"title\": \"Response (Part II)\", \"comment\": \"**Question 6**: Why the Algorithm 1 directly deactivated the whole text attention weights for the hallucination heads? Isn't it too aggressive as we can see the BLEU scores drop by a lot in Figure 6 (c)? Moreover, why the accuracy on general tasks somehow improves when you apply Algorithm 1? I thought the performance should drop based on the observation in Figure 6 (c).\\n\\n**Response 6**: Thank you for highlighting these important points. We would like to clarify a potential misunderstanding: Algorithm 1 employs an input-dependent adaptive strategy to selectively deactivate text attention weights for hallucination heads, unlike the naive strategy used in Figure 6(c), which lacks such adaptability. 
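To make the structural difference concrete, the sketch below contrasts the naive strategy (unconditionally zeroing text-token attention for the flagged heads) with an input-dependent variant that intervenes only when a per-input condition is met. The gating condition shown (a text/image attention-mass ratio test with a `threshold` parameter) is a placeholder assumption, not the actual criterion of Algorithm 1 in the paper.

```python
import torch

def deactivate_text_attention(attn, hallucination_heads, text_idx, image_idx,
                              adaptive=True, threshold=2.0):
    """Zero out post-softmax text-token attention for selected heads.

    attn: [n_heads, seq_len] attention weights for the current decoding step.
    hallucination_heads: iterable of head indices identified beforehand.
    text_idx / image_idx: LongTensors of text and image token positions.
    With adaptive=False this is the naive, unconditional intervention; with
    adaptive=True a (placeholder) per-input condition gates the intervention.
    """
    attn = attn.clone()
    for h in hallucination_heads:
        if adaptive:
            text_mass = attn[h, text_idx].sum()
            image_mass = attn[h, image_idx].sum().clamp_min(1e-6)
            if (text_mass / image_mass) < threshold:
                continue  # head is not text-biased on this input; leave it alone
        attn[h, text_idx] = 0.0
    return attn
```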
Therefore, the performance of Algorithm 1 cannot be directly inferred from the observations in Figure 6(c).\\n\\nTo address your concerns about aggressiveness, it is important to note that hallucination heads account for only a small subset (less than 3%) of the Transformer's overall attention mechanism. This ensures that other attention heads remain unaffected, preserving overall generation quality. Thus, it is not overly aggressive to fully deactivate the entire text attention weights for hallucination heads. Evidence for this is provided in Table 4 of the Appendix, which shows that the BLEU score of Algorithm 1 (17.8) is comparable to that of normal generation (17.9).\\n\\nFinally, the improvement of accuracy in general tasks, as outlined in Table 2, can be attributed to the mitigation of spurious dependencies introduced by hallucination heads. This ultimately enhances model robustness and improves task-specific performance.\\n\\n\\n**Question 7**: What is the difference between downscaling weight on text attention and upscaling the weight on image attention (Figure 6 (a) and (b))? I thought they meant the same thing as the softmax in attention would keep the sum of text attention and image attention to be one.\\n\\n**Response 7**: In our experiments, scaling is applied *after* computing the softmax attention scores by multiplying a scaling factor to either downscale or upscale specific components. As a result, the sums of text and image attention may not necessarily equal one. This approach allows us to adjust the text attention weights independently without directly affecting the visual information component. Consequently, we can *isolate* and analyze the *separate* contributions of text and visual information more effectively.\\n\\n---\\n\\nWe sincerely thank you for your thoughtful review. We hope the revised manuscript and the clarifications provided above have addressed your concerns effectively. If our responses have satisfactorily resolved your concerns, we would greatly appreciate it if you could consider updating your review score. However, if you have any remaining concerns or require further clarification, please do not hesitate to let us know. We are more than willing to provide additional explanations or updates as needed.\"}", "{\"title\": \"Response for Reviewer Ex6k (Part I)\", \"comment\": \"Thank you for your feedback. We are pleased to hear that our previous responses addressed your concerns. We apologize for the typo of \\\"FT-HH\\\" in the previous response, and we have corrected it. Below, we have provided responses to your new questions:\\n\\n**Question 1**: However, I still believe that showing the scalability of this method (at least to the 30B~) scale would be very substantial. I understand that it may require a lot of VRAM to analyze a 70B, but it would be interesting to analyze a more intermediate model like LLaVa[1] 34B.\\n\\n**Response 1**: We understand your concern and have tried our best to study the LLaVA-v1.6-34B (LLaVA-34B for short) model. We have applied our method to attribute and intervention on this model and found reduced hallucination rate. Detailed results are presented below: \\n\\nFirst, we evaluate the hallucination performance of LLaVA-34B on the COCO dataset. For LLaVA-34B, the hallucination rate is 23.2% for CHAIR_S and 6.4% for CHAIR_I, which are smaller than that of LLaVA-v1.5-7B (LLaVA-7B for short) and LLaVA-v1.5-13B (LLaVA-13B for short). 
According to the technical report of LLaVA-34B, these improvements can be attributed to a better LLM backbone and an increased input image resolution. We observe a general trend of decreasing hallucination rates as model size increases. However, as noted, this improvement is likely to be influenced by multiple factors.\\n\\n| | LLaVA-7B | LLaVA-13B | LLaVA-34B |\\n|----------------------------|----------|-----------|-----------|\\n| Hallucination Rate (CHAIR_S) | 51.8 | 48.6 | 23.2 |\\n| Hallucination Rate (CHAIR_I) | 13.3 | 12.4 | 6.4 |\\n\\nSecond, we apply our modular attribution to analyze hallucination heads across different scales of LVLMs. To fairly compare the number of hallucination heads across models of varying scales, we identify salient hallucination heads \\u2014those with contrastive influence values exceeding 25% of the maximum contrastive influence value for each model. The 25% threshold is temporarily selected for comparison purpose. We evaluate both their absolute numbers and their ratio relative to all attention heads. \\n\\n| | LLaVA-7B | LLaVA-13B | LLaVA-34B |\\n|-------------------------------|----------|-----------|-----------|\\n| Total Number of Attention Heads | 1024 | 1600 | 3360 |\\n| Number of Salient Hallucination Heads | 42 | 37 | 10 |\\n| Ratio of Salient Hallucination Heads | 4.1% | 2.3% | 0.3% |\\n\\nOur findings above reveal that hallucination heads tend to diminish as model size increases and sufficient post-training is applied. Although we cannot perfectly isolate the contributions of individual factors (e.g., the LLM backbone, data size and sources, image tokenizer), our observations tend to align with our hypothesis: larger models possess stronger representational power to learn correct behaviors from data, whereas smaller models are more susceptible to language bias.\\n\\nFinally, we apply our targeted modular intervention method to LLaVA-34B and find that the hallucination rate can be reduced by 2.8% on CHAIR_S and 0.8% for CHAIR_I. Note that this improvement is not easy given the superior performance of LLaVA-34B. We also perform preliminary generation quality evaluation and found that the BLEU score is comparable with the one before intervetion.\\n\\n| LLaVA-34B | CHAIR_S | CHAIR_I | BLEU |\\n|----------------|---------|---------|-------|\\n| Greedy | 23.2 | 6.4 | 15.1 |\\n| AD-HH (Ours) | **20.4** | **5.6** | **15.2** |\\n\\nThank you for initiating the discussion on model behaviors, particularly regarding model size scaling. We have updated our paper to include these results in Appendix A.3. We believe these findings could help us better understand hallucinations in LVLMs and hope they could be valuable to the community.\"}", "{\"summary\": \"This work empirically studies the hallucination problem of LVLMS via counterfactual analysis. This work reveals multiple interesting findings on why hallucination occurs and how to mitigate it. The analyses are performed with two models (LLaVA-7B and MiniGPT-4) on two benchmark datasets (COCO and MM-Vet).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work shows several interesting observations about hallucination, as summarized in the sentences in the bold font in section 4.\\n\\n2. Based on the findings, this work proposes two simple ideas to mitigate hallucination \\u2013 adaptive deactivation and targeted fine-tuning of hallucination heads.\", \"weaknesses\": [\"1. 
This work fails to cite several related recent works as follows, to name a few.\", \"They may need to be cited and compared in experiments.\", \"A. Deng et al., Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding, arXiv 2024.\", \"F. Liu et al., Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning, ICLR 2024.\", \"J. Zhang et al., Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models, ECCV 2024.\", \"2. The evaluation only depends on two CHAIR metrics.\", \"3. Some findings look obvious.\", \"It is not so surprising that the multi-head-attention is more critical than feed-forward network, since the former takes a majority of parameters and does much more things than the latter in the transformer.\", \"It is a well-known fact that hallucination is more related to language bias (rather than) image bias and multimodal models often ignore the information from images compared to that from text.\"], \"questions\": \"1. More in-depth analysis would be required to discuss why the fine-tuning approach is not as good as the training-free one.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response for Reviewer pX1o\", \"comment\": \"We understand your concern. Let us clarify the experiment setting.\\n\\nOur experiment setting above is different from what your think. We do not use the extracted object in the caption to apply Equation 1 for this counting task. Instead, we follow the setup in the MME [1], which use prompts like \\\"Are there two dogs on the image? Please answer the question with yes or no.\\\" The model provides a single token \\\"yes\\\" or \\\"no\\\" for response. If the ground truth is \\\"yes\\\" but the model predicts \\\"no\\\". Then, the incorrect token \\\"no\\\" is treated as $y_t$ in Equation (1). In this way, we can continue to apply attribution and intervention methods and observe reduced hallucination rate. \\n\\nWe acknowledge that your proposal is also applicable. In that case, we can use the number \\\"three\\\" as the hallucination token $y_t$, rather than the object \\\"dog\\\" to calculate Equation (1). In this case, it is more laborious to manually check the hallucination tokens, so we use the setup in MME as a practical example.\\n\\nIn summary, the formulation of attribution and counterfactual analysis is quite flexible and can be generally applied to tasks beyond object hallucination. Hope this can address your concerns. \\n\\n[1] C. Fu et al., MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models. Arxiv 2023.\"}", "{\"title\": \"Response for Reviewer Ex6k (Part II)\", \"comment\": \"**Question 2**: Have you also explored \\\"early fusion\\\" approaches such as Chameleon [2] (https://huggingface.co/facebook/chameleon-7b https://huggingface.co/facebook/chameleon-30b) and whether hallucination heads exist there? A negative result would be fine and could make sense.\\n\\n**Response 2**: Thank you for highlighting the \\\"early fusion\\\" model. We appreciate the work on Chameleon, which offers a comprehensive study of early fusion-based multi-modal training, enabling reasoning and generation across modalities using shared representations. Chameleon-30B is easy to use and achieve state-of-the-art performance on many tasks, but we have observed that it still exhibits hallucination behaviors. 
\\n\\nFirst, we observed that Chameleon-30B has a hallucination rate of 38.0% on CHAIR_S and 12.6% on CHAIR_I on the COCO dataset. Next, we identified the top 10 most salient hallucination heads in Chameleon-30B, as shown in Figure 16(c) in the Appendix. By deactivating these identified hallucination heads, we successfully reduced the hallucination rate on CHAIR_S from 38.0% to 34.8%, with a comparable BLEU score.\\n\\n| Chameleon-30B | CHAIR_S | CHAIR_I | BLEU |\\n|---------------|---------|---------|-------|\\n| Greedy | 38.0 | 12.6 | **10.9** |\\n| AD-HH (Ours) | **34.8** | **12.5** | 10.8 |\\n\\n**Question 3**: Are you willing, upon paper acceptance, to release open-source code for AD-HH, FT-HH, as well as for the analysis shown in Sec. 4 of your paper so that future works may further investigate the very interesting phenomenon you present in this paper? Open source weights for FT-HH models would also be interesting.\\n\\n**Response 3**: Absolutely! Upon paper acceptance, we will release the code and model weights necessary to reproduce our analysis and experiment results. We hope this will benefit the research community and inspire further investigation.\\n\\n***\\nWe sincerely thank you again for your valuable review comments. We hope our responses have adequately addressed your questions, and we are happy to provide further clarification if needed.\"}", "{\"title\": \"Thanks for your positive feedback\", \"comment\": \"Thank you for your positive feedback, which is greatly appreciated! We have put significant effort into studying 30B-size models, and the empirical results are exciting for us. We are currently working to address the concerns raised by other reviewers. We will explore the Chameleon-7B model and include a more detailed analysis in the future, as we recognize the importance of these results. Thank you once again for your understanding and support!\"}", "{\"title\": \"Thanks for your positive feedback\", \"comment\": \"Thank you so much for the positive feedback, we are more than appreciated! Your valuable participation and suggestions have contributed to improving the quality of our paper. Thank you once again for your time and effort!\"}", "{\"title\": \"Response (Part I)\", \"comment\": \"We sincerely thank you for taking the time to review our paper and for offering constructive feedback. We truly value your effort and have carefully addressed your concerns and questions below.\\n\\n**Comment 1**: The training-free method is plug and play, but the methods applied in this paper are too few, only LLaVA and Minigpt4 are included, and the results of other models should be supplemented.\\n\\n**Response 1**: Thank you for your valuable feedback. To address your concerns, we have extended our experiments to include additional models, such as LLaVA-13B and Llama-3.2-11B-Vision, which are both modern and representative, with Llama-3.2-11B-Vision released just two months ago. Using the same settings on the COCO dataset, our method demonstrated significant improvements, reducing hallucination rates by approximately 6 points for Llama-3.2-11B-Vision and achieving a 10-point improvement in CHAIR_S for LLaVA-v1.5-13B. 
Here are the detailed results:\\n\\n| Method | Llama3.2-11B-Vision | | LLaVA-v1.5-13B | |\\n|---------------|----------------------|-------|----------------|-------|\\n| | CHAIR_S | CHAIR_I | CHAIR_S | CHAIR_I |\\n| Greedy | 28.4 | 7.4 | 48.6 | 12.4 |\\n| AD-HH (Ours) | **22.6** | **4.9** | **38.8** | **9.4** |\\n\\nWe are actively working on expanding our experiments to include additional models and plan to incorporate these results in a future revision of the paper. \\n\\n**Comment 2**: The paper discusses the attention-map relationship of text tokens between LVLMs/LLM. Although the setting is a hallucination problem, this mechanism should be attributed to the cause of LLM. So can datasets be extended to other datasets like ScienceQA, GQA, textVQA, POPE, mmbench, etc.?\\n\\n**Response 2**: Thank you for your insightful question. The datasets you mentioned primarily involve short-form tasks with a focus on comprehension and discriminative features. For instance, POPE includes questions such as determining whether a chair or car exists in an image with a \\\"yes\\\" or \\\"no\\\" answer, where the object in question is explicitly mentioned in the prompt. These tasks differ fundamentally from open-ended generation tasks, where models must sequentially generate a series of tokens to complete the task without such explicit guidance in the prompt. We applied our method, AD-HH, to these discriminative tasks and observed some improvement, though the gains were relatively modest:\\n\\n| Method | ScienceQA | GQA | TextVQA | POPE |\\n|---------------|-----------|--------|---------|--------|\\n| Greedy | 70.15 | 79.18 | 58.22 | 86.88 |\\n| Ours | **70.34** | **79.39** | 58.22 | **87.27** |\\n\\nThe limited improvement may be attributed to the inherent differences between discriminative and generation tasks. In open-ended generation tasks, language bias becomes increasingly pronounced during the later stages of token generation [1][2], as the generated tokens deviate further from the input image tokens, leading to hallucinations. In contrast, in discriminative tasks, the generated tokens' position is closer to the image tokens and are less affected by such biases. This suggests that interventions targeting attention heads, like AD-HH, may not yield significant benefits for these tasks.\\n\\n[1] Yiyang Zhou et al. Analyzing and mitigating object hallucination in large vision-language models. ICLR, 2024\\n\\n[2] Qidong Huang et al. Opera: Alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. CVPR, 2024.\\n\\n**Question 3**: If it is a task like VQA, the generated text token is only one, and only ABCD is answered, whether this hallucination pattern does not exist. If it is the SFT method, then this result should also be applicable to VQA tasks?\\n\\n**Response 3**: Thank you for your question. We have included experimental results and a discussion on VQA tasks in Response 2.\\n\\n**Question 4**: It is written in the paper that the author has hallucinatory attention-head in the middle layer or deep layer. Why is this? The complete distribution of 32 heads was not seen in the supplementary materials.\\n\\n**Response 4**: Thank you for your question. We are happy to clarify. Our conclusion that hallucination heads predominantly occur in the middle or deep layers is based on the complete distribution of all attention heads, as shown in Figure 2 (LLaVA-7B) and Figure 10 (MiniGPT-7B). 
These figures depict the contributions of all 1024 attention heads (from 32 layers with 32 heads per layer). The color coding in these figures represents each attention head's contribution to hallucination behavior. From the color distribution, it is evident that hallucination heads are concentrated in the middle and deep layers.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you very much for the additional experiments and commentary. I feel more confident now about the results presented in the paper. I feel like the rating for this paper now may be updated to 7 which is unfortunately not an option here.\\n\\n1- However, I still believe that showing the scalability of this method (at least to the 30B~) scale would be very substantial. I understand that it may require a lot of VRAM to analyze a 70B, but it would be interesting to analyze a more intermediate model like LLaVa[1] 34B\\n\\n2- Have you also explored \\\"early fusion\\\" approaches such as Chameleon[2] (https://huggingface.co/facebook/chameleon-7b https://huggingface.co/facebook/chameleon-30b) and whether hallucination heads exist there? A negative result would be fine and could make sense.\\n\\n3- Are you willing, upon paper acceptance, to release open-source code for AD-HH, FT-HH, as well as for the analysis shown in Sec. 4 of your paper so that future works may further investigate the very interesting phenomenon you present in this paper? Open source weights for FT-HH models would also be interesting.\\n\\nIf any one of those points are addressed I would be more than happy to update my score.\\n\\n[1] Liu, Haotian, et al. \\\"Visual instruction tuning.\\\" Advances in neural information processing systems 36 (2024).\\n[2] Team, Chameleon. \\\"Chameleon: Mixed-modal early-fusion foundation models.\\\" arXiv preprint arXiv:2405.09818 (2024).\"}", "{\"comment\": \"Also, in the table above on OpenReview you refer to the method as FT-HH while in the paper it is TF-HH. I assume those are the same thing and I wrote my comment with that assumption.\"}", "{\"title\": \"General Response and Summary of Changes\", \"comment\": [\"We sincerely thank all reviewers and area chairs for their efforts in reviewing our paper and providing valuable feedback. We have carefully addressed each concern raised by the reviewers and made corresponding revisions to our paper. We highlight the revision in blue. Below, we summarize the key changes in our revision:\", \"**Section 2 (Page 2)**: We added a discussion of related literature, including [Deng et al., 2024], [Liu et al., 2024], and [Zhang et al., 2024]. Different from these works, our work investigates hallucination through lens of attribution and intervention and designs targeted mitigation strategies.\", \"**Section 4.3 (Page 6)**: We included descriptions of the metrics (CHAIR) and the dataset used to produce Figure 6, enhancing the figure's readability.\", \"**Section 5.2 (Table 1,2)**: We added symbols to denote whether methods are training-free or training-based, improving the clarity of the tables.\", \"**Appendix A.2**:\", \"Complete attention maps (Page 16, Figure13): Figure 13 compares the complete attention maps of two typical hallucination and non-hallucination heads across system, image, question, and output tokens. The hallucination head exhibits a clear over-reliance on text tokens. 
We also add an discussion on the relationship between the text-token over-reliance behaviour and hallucination heads: while hallucination heads frequently show over-reliance on text tokens, not all heads that over-rely on text tokens are hallucination heads.\", \"**Appendix A.3**:\", \"Human-evaluated generation quality (Page 18, Table 6). Table 6 presents the results of a human-based evaluation, focusing on hallucination and generation quality. In this evaluation, our method maintains consistent generation quality comparable to the baseline, while also showing improvements in hallucination reduction.\", \"Comparison with additional baselines (Page 18, Table 8). Table 8 provides comparisons with additional three baseline methods, including [Deng et al., 2024], [Liu et al., 2024], and [Zhang et al., 2024]. Although these baselines also achieve a reduction in hallucination rates compared to greedy decoding, our approach, leveraging targeted interventions, demonstrates a greater reduction in hallucinations compared to these baselines.\", \"Evaluation on larger and more recent LVLM models (Page 19, Table 9). Table 9 extends the evaluation to larger and more recent models, including Llama-3.2-11B-Vision, LLaVA-v1.5-13B, Chameleon-30B, and LLaVA-v1.6-34B. Our method achieves up to 10 points in hallucination reduction on these models.\", \"Hallucination behaviour across different scales of LVLMs (Page 19, Table 10). Table 10 provides an empirical results on the number and ratio of salient hallucination heads across different scales of LVLMs. The findings indicate that hallucination heads tend to diminish as model size increases and sufficient post-training is applied.\", \"We believe the revisions outlined above have significantly enhanced the quality of our paper, thanks to the insightful feedback from the reviewers. We hope these updated results address the reviewers' concerns and further strengthen the contributions of our work. Thank you once again for your valuable feedback.\"]}", "{\"title\": \"Response (Part I)\", \"comment\": \"Thank you for reviewing our paper and for your positive feedback. We truly appreciate your thoughtful comments and have addressed your concerns and questions below.\\n\\n**Comment1**: In Section 4.3, the metrics (CHAIR) and the dataset that is evaluated should be explained.\\n\\n**Response1**: Thanks for this suggestion. We have added more explanation about the CHAIR metrics and te dataset in Section 4.3 to improve readability. \\n\\n**Comment 2**: In Tables 1 and 2, it would be better if there is an additional column indicating which method is training-free and which is not.\\n\\n**Response 2**: Thanks for this suggestion. We have added symbol \\u2020 to denote training-free and symbol * to denote training-based in our paper to make this clear.\\n\\n**Question 3**: How this method can extend beyond object-level hallucination? For example, do equation (1) and (2) only tell if the existence of the objects but cannot capture if the model incorrectly counts the objects? Then the proposed method can not detect the heads which make hallucination on counting.\\n\\n**Response 3**: Thanks for raising this discussion. We would like to clarify that our framework is flexible and can be extended to attribute errors beyond object-level hallucinations, including errors related to counting, positional reasoning, and other aspects.\\n\\nTake the hallucination by counting the example you mentioned, we have conducted the experiment on the MME dataset. 
We take the wrong answer as hallucination token and correct answer as non-hallucination, and identify heads that are mostly associated with counting hallucination applying equation (1) and (2). The counting performance is improved by 5 points from 148 to 153. Besides, many other aspects such as position, posters, OCR are also improved by adaptively deactivating hallucination heads. Detailed results are reported below:\\n\\n| | existence | count | position | color | posters | celebrity | scene | landmark | artwork | OCR |\\n|-------------|-----------|-------|----------|-------|---------|-----------|-------|----------|---------|-----|\\n| Baseline | 190 | 148 | 128 | 160 | 139 | 133 | 156 | 162 | 122 | 130 |\\n| AD-HH (Ours)| 190 | **153** | **138** | 160 | **142** | **135** | **158** | 162 | 118 | **138** |\\n\\n**Question 4**: How many samples is required to compute equation (1)? And what data split do they come from?\\n\\n**Response 4**: We randomly select 1500 samples from the COCO training split to compute equation (1). This detail is provided in the main text (Second paragragh in Section 4.1). \\n\\n**Question 5**: What is the time required to compute Equation (2) to obtain the scores for all heads?\\n\\n**Response 5**: Computing Equation (2) for all heads takes about about 1 minutes per sample. We understand your concerns regarding the computational complexity associated with the number of attention heads. We would like to highlight that there are faster methods for calculating the influence as in Equation (2). These methods leverage techniques like Taylor expansion to approximate the influence of all attention heads in a single backpropagation step (e.g., as described in [1]). Such approaches are highly efficient, requiring only a single forward and backward pass, and the computation time is independent of the number of heads. These approaches can be explored in future work. \\n\\n[1] Achtibat, Reduan, et al. \\\"Attnlrp: attention-aware layer-wise relevance propagation for transformers.\\\" arXiv preprint arXiv:2402.05602 (2024).\"}", "{\"title\": \"Response (Part II)\", \"comment\": \"**Comment 3**: Some findings look obvious.\\n\\na) It is not so surprising that the multi-head-attention is more critical than feed-forward network, since the former takes a majority of parameters and does much more things than the latter in the transformer. \\n\\nb) It is a well-known fact that hallucination is more related to language bias (rather than) image bias and multimodal models often ignore the information from images compared to that from text.\\n\\n**Response 3**: Thank you for the discussion. We address parts (a) and (b) separately:\\n\\n* For part (a): We would like to respectually point out a factual inaccuracy in your claim. In transformers, the MLP modules usually contain more parameters than the multi-head attention (MHA) modules. For instance, in LLaMA-2-7B, MLP modules account for approximately 65% of the total parameters, whereas MHA accounts for only 33% (roughly half of the MLP parameters). Therefore, the assertion that MHAs are more critical solely because they contain more parameters is incorrect. \\n\\n * Our findings demonstrate that despite having fewer parameters, MHA plays a disproportionately significant role in affecting hallucination. 
This insight is new and important, leading deeper investigations into the behavior of MHAs, as detailed in Section 4.2 of our paper.\\n\\n* For part (b): We assume you are referencing prior works (e.g., Leng et al., 2024; Huang et al., 2024), which have also observed that LVLMs tend to underutilize visual information during text generation. We acknowledge and appreciate the contributions of these studies. However, our work focuses on identifying **specific** network components responsible for hallucination, moving beyond the **broad** notion of language bias.\\n\\n * Our findings highlight that not all MHAs are equally implicated in hallucination; rather, a small subset (fewer than 3%) is primarily responsible. Furthermore, we demonstrate that targeted interventions on these components are effective in mitigating hallucination.\\nIn summary, our work offers new and non-obvious insights into the mechanisms of hallucination in LVLMs and provides actionable strategies to address it. We believe this makes a meaningful contribution to the field.\\n\\n**Question 4**: A more in-depth analysis is required to explain why the fine-tuning approach is not as effective as the training-free approach.\\n\\n**Response 4**: We would like to clarify that in our experiments, the fine-tuning approach (TF-HH) performs comparably to the training-free approach (AD-HH). This is evident in Tables 1 and 2. While AD-HH achieves a slightly better average score in Table 1 (24.06 vs. 24.45 for TF-HH), TF-HH slightly outperforms AD-HH in Table 2 (29.9 vs. 29.05).\\n\\nTheoretically, the training-free approach operates directly in the function space of the Transformer by manipulating the outputs of attention heads (e.g., setting the outputs to zero). In contrast, the fine-tuning approach operates in the parameter space, where the final result is determined by the training method and the model's parameter capacity. This distinction makes it more challenging for the fine-tuning approach to achieve precise manipulations, such as setting outputs to zero, as effectively as the training-free method. However, the fine-tuning approach offers greater flexibility in preserving generation quality by incorporating richer, data-driven adjustments.\\n\\nWe believe both approaches have unique advantages and limitations. In practice, we observe that neither dominates the other, and both achieve comparable performance under different scenarios.\\n\\n***\\n\\nWe sincerely thank you for your thoughtful review. We hope the revised manuscript and the clarifications provided above have addressed your concerns effectively. If our responses have satisfactorily resolved your concerns, we would greatly appreciate it if you could consider updating your review score. However, if you have any remaining concerns or require further clarification, please do not hesitate to let us know. We are more than willing to provide additional explanations or updates as needed.\"}", "{\"summary\": \"The paper addresses the issue of hallucination in Large Vision-Language Models (LVLMs), specifically focusing on why these models generate content that deviates from the provided image information in open-ended tasks like image captioning. The authors conduct a systematic investigation using causal mediation analysis and counterfactual edits to identify the internal components responsible for hallucination. 
They find that Multi-Head Attention (MHA) modules contribute more to hallucination than Multi-Layer Perceptron (MLP) modules, and within MHAs, certain attention heads\\u2014termed \\\"hallucination heads\\\"\\u2014are primarily responsible. To mitigate hallucination, the paper proposes two methods: (1) an adaptive deactivation of hallucination heads during decoding, which is training-free and can be applied directly, and (2) targeted fine-tuning of hallucination heads to reduce their reliance on text tokens. Both methods demonstrate significant reductions in hallucination rates on benchmark datasets like COCO captioning and Nocaps, outperforming existing baselines.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper deploys a systematic step-by-step approach to identify the components responsible for hallucinations. The insights are valuable and the analysis seems sensible.\", \"Identifying that only specific attention heads seem to contribute to hallucinations is a novel finding. The additional analysis they did (fig 5) is really interesting.\", \"The mitigation strategies shown (both training-based and training-free) seem sensible and seem to work well on the benchmarks.\", \"They show the method working with multiple datasets and multiple models.\"], \"weaknesses\": [\"There is limited discussion on how the proposed interventions affect the model's overall language generation capabilities. Potential trade-offs between reducing hallucination and maintaining fluency or coherence are not thoroughly examined.\", \"The paper is almost entirely focused on object hallucination.\", \"The experiments are conducted on 7B parameter models (LLaVA-7B and MiniGPT-4). Given the trend towards larger models in the field, it would be valuable to assess whether the identified hallucination heads and mitigation strategies are applicable to larger models (e.g., 70B parameters) and whether similar patterns emerge at different scales.\", \"There are minor issues with the writing and presentation that could be improved for clarity and professionalism. For example, phrases like \\\"Our Method Run Fast in Generation\\\" could be rephrased for better readability.\"], \"questions\": [\"Can you provide more details on why certain hallucination heads exhibit slow changes during instruction tuning? What factors contribute to this \\\"laziness,\\\" and how might future work address this issue?\", \"Have you evaluated how the proposed interventions affect the overall language generation quality of the models? Specifically, does reducing reliance on text tokens in hallucination heads impact the fluency, coherence, or descriptiveness of the generated captions? It would be helpful to see metrics (human eval) or analyses addressing potential trade-offs.\", \"Do you think \\\"hallucination heads\\\" exist in the same way in larger scale (eg. 70b) VLMs? Would the same training-free method work similarly on them? Would be interesting to see.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (Part I)\", \"comment\": \"Thank you for taking the time to review our paper and provide valuable feedback. We greatly appreciate your efforts and have addressed your concerns and questions below.\\n\\n**Comment 1**: This work fails to cite several related recent works. 
They may need to be cited and compared in experiments.\\n\\n**Response 1**: Thank you for bringing these works to our attention. We have added a discussion of these studies to the related work section and appreciate their contributions: \\n\\n- [Deng et al., 2024] proposed a CLIP-guided decoding approach that utilizes CLIP as an external tool to alleviate hallucination during decoding. \\n- [Liu et al., 2024] addressed the issue by constructing the Large-scale Robust Visual (LRV)-Instruction dataset, which includes both positive and negative instructions to enhance the robustness of visual instruction tuning. \\n- [Zhang et al., 2024] introduced a large-scale instruction-tuning dataset name REVERIE with reflective rational annotations, to enable the model to justify whether the reponses are correct or incorrect.\\n\\nDifferent from these works, our work takes a different direction by specifically identifying, analyzing, and adapting the components within the model responsible for hallucinations. Additionally, we have conducted experiments to empirically compare our method with these baseline approaches on the LLaVA model (see Table 8 in the Appendix ). The results below demonstrate that, although these baselines also help reduce hallucination errors, our method, which employs targeted interventions, is more effective in mitigating object hallucinations in open-ended generation tasks. \\n\\n| | Greedy | GCD [Deng et al., 2024] | LRV [Liu et al., 2024] | REVERIE [Zhang et al., 2024] | AD-HH (Ours) | TF-HH (Ours) |\\n|-----------|--------|--------------------------|-------------------------|-----------------------------|--------------|--------------|\\n| CHAIR_S | 51.8 | 39.2 | 39.4 | 49.6 | **29.6** | 35.0 |\\n| CHAIR_I | 13.3 | 10.8 | 13.1 | 12.7 | **8.0** | 8.7 |\\n\\nOur current evaluation focuses on the LLaVA model, as the results of these methods are not directly applicable to our settings due to differences in model versions and evaluation protocols (details provided in the Appendix A). Consequently, re-implementing these baselines is necessary, and some methods, such as LRV and REVRIE, require extensive training that demands significant computational resources. We are actively conducting further assessments with additional methods and settings and plan to share updated findings later. Nevertheless, the current evidence strongly supports the effectiveness of our proposed methods in mitigating hallucinations.\\n\\n\\n**Comment 2**: The evaluation only depends on two CHAIR metrics.\\n\\n**Response 2**: Thank you for raising this concern. To address your concern, in addition to the two CHAIR metrics used to evaluate the effectiveness of our method in mitigating hallucination, we have also conducted a human evaluation as suggested by **Reviewer Ex6k**.\\n\\n| Method | Non-Hallucination Score | Generation Quality Score |\\n|-------------------|-------------------------|---------------------------|\\n| Greedy Decoding | 3.25 | 3.99 |\\n| AD-HH (Ours) | **3.87** | 3.85 |\\n| TF-HH (Ours) | 3.78 | **4.01** |\\n\\nSpecifically, we asked two Ph.D. students and one undergraduate student to manually evaluate the responses. They were instructed to score each response on a scale of 1 to 5 based on two criteria: (1) non-hallucination performance, with higher scores reflecting fewer hallucinations, and (2) generation quality, with higher scores indicating more fluent responses. 
For both the baseline and our proposed methods, the evaluators assessed a total of 500 generated responses per method, resulting in 1500 responses overall. These results demonstrate that our methods effectively mitigate hallucination while maintaining high generation quality.\\n\\nFurthermore, we also evaluated our approach using **MM-Vet**, as presented in Table 2. The results confirm that intervening on hallucination heads leads to improved performance in general tasks, further validating the effectiveness of our approach.\"}", "{\"summary\": \"This study explores the causes of hallucination in large visual language models (LVLMs) during complex visual tasks and proposes mitigation measures. First, a causal analysis of specific modules of LVLM based on counterfactual editing found that the multi-head attention (MHA) module contributed more to the generation of hallucinatory words than the multi-layer perceptrons module. The study further identified attentional heads associated with hallucination, which are concentrated in the middle and deep layers of the model and show a strong attentional bias towards text markers. In addition, the patterns of these attentional heads remain relatively similar to the underlying language model and change more slowly during instruction tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) It is the first work to analyze the attention map of the output text token of LVLMs and the attention-map of the NLP model, which is enlightening.\\n\\n(2) This paper proposes two methods to alleviate the hallucination of multi-modal large model. One is to restrict the probability of generating tokens by improving transformer's decoder, and the other is to reduce over-reliance on text tokens by SFT. And the experiment proves that the two methods proposed in this paper are excellent.\", \"weaknesses\": \"(1) The training-free method is plug and play, but the methods applied in this paper are too few, only LLaVA and Minigpt4 are included, and the results of other models should be supplemented.\\n\\n(2) The paper discusses the attention-map relationship of text tokens between LVLMs/LLM. Although the setting is a hallucination problem, this mechanism should be attributed to the cause of LLM. So can datasets be extended to other datasets like ScienceQA, GQA,textVQA, POPE, mmbench, etc.?\", \"questions\": \"(1) If it is a task like VQA, the generated text token is only one, and only ABCD is answered, whether this hallucination pattern does not exist. If it is the SFT method, then this result should also be applicable to VQA tasks?\\n\\n(2) It is written in the paper that the author has hallucinatory attention-head in the middle layer or deep layer. Why is this? The complete distribution of 32 heads was not seen in the supplementary materials.\\n\\n(3) Has it analyzed the difference between system_token+image token+prompt+output_token hallucination attention head and regular attention head? Is there any difference? Or can it only be observed through output-tokens?\\n\\n(4)What about the distribution of attention when judging LLMs hallucinations? LLMs unable to generate a caption to an image, right? Therefore, I don't think this kind of attention head observation at the decoder terminal is very reasonable.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
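Aside: the responses above describe an adaptive-deactivation intervention that zeroes the post-softmax attention a small set of identified heads pays to text-token positions, without renormalizing the remaining weights. Below is a minimal, hypothetical sketch of that single operation; tensor names, shapes, and head indices are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only: zero the post-softmax attention that selected
# "hallucination heads" assign to text-token key positions. All names and
# shapes are assumptions for demonstration, not the authors' implementation.
import torch

def deactivate_heads(attn, head_ids, text_positions):
    """attn: [batch, num_heads, q_len, k_len] post-softmax attention weights.
    head_ids: indices of heads to intervene on.
    text_positions: key positions corresponding to text tokens."""
    attn = attn.clone()
    for h in head_ids:
        attn[:, h, :, text_positions] = 0.0  # cut this head's reliance on text tokens
    return attn  # rows are intentionally not renormalized, so sums may be below one

# Toy usage with random weights: 8 heads, 5 queries, 12 keys (last 4 are "text").
attn = torch.softmax(torch.randn(1, 8, 5, 12), dim=-1)
out = deactivate_heads(attn, head_ids=[2, 5], text_positions=torch.arange(8, 12))
```

In the method discussed above, the head set would come from the contrastive-influence attribution and the deactivation is applied input-adaptively during decoding.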
Bjerq2n9h3
MatPool: Matrix-pattern-oriented Pooling for Graph Property Prediction
[ "Zonghai Zhu", "Ying Huang", "Huanlai Xing", "Li Feng", "Yuge Xu" ]
Graph property prediction usually involves using a model to predict the label for an entire graph, which often has a complex structure. Because input graphs have different sizes, current methods generally use graph pooling to coarsen them into a graph-level representation with a unified vector pattern. However, this coarsening process can lead to a significant loss of graph information. In this work, we explore graph representation using a matrix pattern and introduce an algorithm called Matrix-pattern-oriented Pooling (MatPool) that provides a unified graph-level representation for different graphs. MatPool multiplies the transposed feature matrix by the feature matrix itself and then conducts an isomorphic mapping to create a Matrix Representation (MR) that preserves the graph information and satisfies permutation invariance. Since the multiplication operation computes the relationships between every pair of features, MR exhibits row-column correlations under the matrix pattern. To match this correlation, MatPool uses a novel and efficient Matrix Neural Network (MNN) with two-sided weight matrices. We provide theoretical analyses to reveal the properties of MatPool and explain why it can preserve graph information and satisfy permutation invariance. Extensive experiments on various graph property prediction benchmarks show the efficiency and effectiveness of MatPool.
[ "Graph Pooling", "Matrix-Pattern-Oriented", "Matrix Neural Network", "Graph Neural Network" ]
https://openreview.net/pdf?id=Bjerq2n9h3
https://openreview.net/forum?id=Bjerq2n9h3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w9U3Exr6gq", "w3SrY9Acer", "dZ597JUFjD", "WHEUQu3kc1", "U5U6rU01YC" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730569894456, 1730442396286, 1730899611011, 1737472553238, 1730637455056 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6707/Reviewer_V2yt" ], [ "ICLR.cc/2025/Conference/Submission6707/Reviewer_6EYb" ], [ "ICLR.cc/2025/Conference/Submission6707/Reviewer_rKYK" ], [ "ICLR.cc/2025/Conference/Submission6707/Authors" ], [ "ICLR.cc/2025/Conference/Submission6707/Reviewer_3ty4" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a message passing approach, which is slightly different than classical GCN denoted PEM, which is basically (and not quite clearly why) adding the sum to the diagonal (opposite to classical Laplacian). They then propose a version of quadratic representation of the graph (again slightly different from current quadratic networks), but the main difference is a multiplication by M as defined in line 265 (between proposition 3.7 and 3.8). The result is as in any other quadratic approach a constant side representation that is then used as the input for classification\\nThe author then present some comparison of multiple methods they proposed. They also compare to other methods in Table 7 in the appendix, but the table show no significant difference between the methods proposed here and previous methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The method is one more method for presentation of graphs in constant dimensions\\nThe projection is one to one from the graph to the presentation.\\nThere reported performance are as good as the state of the art\", \"weaknesses\": \"The paper has many of the important details in the appendix, making it de-facto very long. It is impossible to understand without the appendix\\nThe logic of PEM is not very clear\\nThey are far from being the first to propose quadratic methods, but this is not reported.\", \"questions\": \"An explanation why PEM is better than other methods (or a clearer explanation the idea of putting all the edges in the diagonal) would be more than welcome\\nIt is not clear how is this work better than the existing methods (it is clearly not worse, but does not seem to be better).\\nThe method assumes that the edge feature and vertex feature dimensions are equal. This seems unrealistic. How is this handled in real-world graph datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes Matrix-pattern-oriented Pool (MatPool), a framework for graph representation learning. MatPool leverages Matrix Representation and a Matrix Neural Network to predict graph properties across varying graph sizes, maintaining comprehensive graph information throughout. Experimental results demonstrate the effectiveness of MatPool in diverse graph-based tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"\\u2022 The proposed PEM effectively reconstructs the adjacency matrix to ensure positive eigenvalues, which enhances the propagation capacity of primary nodes.\\n\\u2022 The Matrix Neural Network (MNN) functions as a global pooling mechanism, offering a straightforward implementation and fast training speed. 
This approach demonstrates superior performance compared to existing global pooling methods.\", \"weaknesses\": \"\\u2022 Certain aspects of MatPool require further clarification. For example, the initialization of the M matrix in Equation 9 is unclear. While it appears to function as a learnable matrix, it is not specified as a learnable parameter in Algorithm 1. Given the importance of matrix M, as demonstrated in Table 3, a more detailed description of its initialization and role in the framework would be beneficial.\", \"questions\": \"\\u2022 While graph sizes for graph-level representation tasks are typically small, the Mat(G_N) matrix in MatPool may still demand substantial GPU memory. An analysis of GPU memory usage during training would be helpful for understanding the framework\\u2019s computational requirements.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose MatPool, a novel algorithm for graph property prediction in the context of Graph Neural Networks (GNNs). The method is based on two main components: a new message-passing scheme, called Positive Eigenvalue Mapping (PEM), and a neural network designed to process the resulting matrix-level representation.\\n\\nFirst, the authors introduce PEM, a technique that keeps the eigenvalues of the adjacency matrix positive, enhancing the influence of primary nodes and facilitating the aggregation of node features. This process leads to the construction of a graph-level representation, the Matrix Representation (MR), which preserves the structural information of the graph.\\n\\nFinally, they present the Matrix Neural Network (MNN), designed to extract deeper features from the MR. This new architecture should offer advantages in both execution speed and overall model performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The construction of the MR is particularly interesting for its ability to preserve information.\", \"weaknesses\": \"The method is interesting, but the authors should clarify the different contributions and their specific roles starting from the abstract (and in the relevant sections). To be clear, I\\u2019m not saying the contributions weren\\u2019t listed, but the way and place they were presented could sometimes confuse the reader. For example, PEM isn\\u2019t mentioned in the abstract. Moreover, in section 3.2, PEM seems to focus only on aggregation. So, how does pooling fit into this, and how is it defined? It might be helpful to include pooling in equation 9 and also mention it in figure 2. Both sections could be confusing for the reader, so I\\u2019d suggest reorganizing sections 3.2 and 3.3 for clarity.\\n\\nFurthermore, I couldn\\u2019t find any explanation for why the method doesn\\u2019t perform particularly well on certain datasets, such as Letter-med and Letter-high.\\n\\nIt would have been interesting to consider the use of the normalized Laplacian as well. 
A comparison with the adjacency matrix, or at least an explanation of why the adjacency matrix was chosen, would have added value to the work.\\n\\nAnother aspect that needs clarification is the model\\u2019s scalability: it\\u2019s not clear if it can handle very large graphs or networks with millions of nodes and edges.\\n\\nThe paper states that PEM \\u201cenhances the influence of primary nodes.\\u201d It would be helpful to see experiments (or toy examples) demonstrating the concrete effect of this reinforcement on the model.\\n\\nFinally, regarding execution speed, I expected a more substantial improvement. Looking at figure 5, the method is almost always on par with other approaches like SOPool or GA.\", \"questions\": \"It\\u2019s unclear whether directed graphs were used in the experiments. If they were not included, it would be helpful to explain why and to add experiments that incorporate them.\\n\\nThe caption for \\u201cTable 4: Experimental results (%) for all pooling methods using PEM as the message-passing way are reported here\\u201d is somewhat unclear. In section 4.3, it states, \\u201cTable 4 shows the experimental results of all pooling methods across the graph datasets used. MatPool achieves the highest results on 11 out of 20 datasets and has the highest average result. Overall, global pooling methods outperform hierarchical pooling methods, and MatPool performs better than other global pooling methods.\\u201d This raises the question of how PEM was used; I\\u2019d suggest clarifying this point.\\n\\nHow generalizable is the MNN?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The article introduces a novel approach to graph property prediction using a matrix-pattern-oriented pooling algorithm, MatPool. Unlike traditional methods that often lose information through graph pooling, MatPool generates a Matrix Representation (MR) by multiplying the feature matrix with its transpose, preserving graph information and ensuring permutation invariance. It employs a Matrix Neural Network (MNN) with two-sided weight matrices to align with row-column correlations. Theoretical analyses support the method's efficacy, and extensive experiments demonstrate its efficiency and effectiveness across various benchmarks, offering a valuable new perspective for graph property prediction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The experiments in this paper are comprehensive, conducted across a total of 20 datasets, providing a high level of credibility.\\n2. The paper includes solid theoretical derivations, enhancing the model's reliability. \\n3. The paper appears to be easy to follow.\", \"weaknesses\": \"1. While the Matrix Neural Network (MNN) can enhance computational efficiency through matrix operations, a practical issue arises when multiple graphs are inputted; calculating the Matrix Representation (Mat) may involve redundant computations. (There are computations between different graphs) Are there solutions to address this?\\n2. The proposed MNN framework can be categorized within the Kernel framework presented in [1], suggesting it functions as a column-based kernel computation. The authors should consider comparing their method with [1] as a baseline.\\n3. 
Since M is a learnable matrix, does its initialization method significantly impact the results? Have the authors attempted to incorporate any prior knowledge into the initialization of this matrix? \\n[1] Yu J, Wu Z, Cai J, et al. Kernel Readout for Graph Neural Networks.\", \"questions\": \"1. How can the redundancy in computations be minimized when multiple graphs are inputted into the MNN model to calculate the Matrix Representation (Mat)?\\n2. How does the initialization method of the learnable matrix M affect the results? Have the authors explored incorporating prior knowledge into its initialization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
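Aside: a minimal numerical sketch of the pooling idea stated in the abstract above, namely that multiplying the transposed node-feature matrix by itself yields a d-by-d representation whose size does not depend on the number of nodes and which is invariant to node permutations. Variable names are assumptions; the PEM message passing and the isomorphic mapping discussed in the reviews are omitted.

```python
# Minimal sketch of the matrix-pattern pooling idea (not the authors' code):
# X^T X gives a fixed d-by-d graph-level representation for any node count,
# and permuting the node order leaves it unchanged since (PX)^T (PX) = X^T X.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # feature dimension shared by all graphs
X_small = rng.normal(size=(7, d))       # graph with 7 nodes
X_large = rng.normal(size=(100, d))     # graph with 100 nodes

mr_small = X_small.T @ X_small          # shape (d, d) regardless of node count
mr_large = X_large.T @ X_large          # shape (d, d)

perm = rng.permutation(X_small.shape[0])
mr_permuted = X_small[perm].T @ X_small[perm]
assert np.allclose(mr_small, mr_permuted)    # node-permutation invariance
print(mr_small.shape, mr_large.shape)        # (4, 4) (4, 4)
```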
BjaHYhr7VS
Not all parameters are equal: a Hessian informed differential learning rate for deep learning
[ "Shiyun Xu", "Zhiqi Bu", "Yiliang Zhang", "Ian Barnett" ]
Differential learning rate (DLR), a technique that applies different learning rates (instead of a single one) to different model parameters, has been widely used in deep learning and achieved empirical success via its various forms. For example, parameter-efficient training (PET) applies zero learning rates to most parameters so as to significantly save the computational cost; adaptive optimizers such as Adam apply coordinate-wise learning rates to accelerate convergence. At the core, DLR leverages the observation that different parameters can have different loss curvature, which is hard to characterize in general. We propose the Hessian-informed differential learning rate (Hi-DLR), an efficient approach that captures the loss curvature of parameters for any model and optimizer adaptively. Given a proper grouping of parameters, we empirically demonstrate that Hi-DLR can improve convergence by dynamically determining the learning rates during training. Furthermore, we can quantify the influence of different parameters and freeze the less-contributing parameters, which leads to a new PET that automatically adapts to various tasks and models.
[ "Differential learning rate", "Newton's method", "parameter-efficient training" ]
https://openreview.net/pdf?id=BjaHYhr7VS
https://openreview.net/forum?id=BjaHYhr7VS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zRGM7vkVju", "xTVJU18zfI", "u8WJxbM0k9", "tAMl14sufo", "oizqNh4THh", "gqp42zVmVv", "brwmCI1TsD", "XBxLW56rEQ", "VdrmPhUDYu", "QkuHzBug1A", "Jbf8VCDpjO", "FrzyFU7xp8", "Brd3tuTOWm", "BHXKVa3RS2", "AkIVjq3lpW", "7Fr573BAVL", "2ibiULuzvQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1732316147962, 1732544805172, 1732339534013, 1732332000064, 1732655255056, 1732561485564, 1732550177927, 1732225467382, 1737395147179, 1732507786994, 1732314518515, 1730572324122, 1732469482288, 1732594914125, 1730662508628, 1730518150880, 1730784464329 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_dT3U" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_gPHg" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_dT3U" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_gPHg" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_dT3U" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Authors" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_DnhU" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_gPHg" ], [ "ICLR.cc/2025/Conference/Submission5058/Reviewer_sBXW" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for the comments! We address them below and welcome more feedbacks. We would appreciate it if the reviewer could raise the score if satisfied.\\n\\n1. *This method includes other time-consuming steps and is incomplete without measurement.*\\n\\nThank you for mentioning this. We have added a detailed complexity analysis in Appendix B in our revised version. \\n\\n*Under the same conditions, would a standard AdamW approach achieve a similar result?*\\n\\nEmpirically we have shown that a standard AdamW can achieve similar (but maybe worse) results in many tasks (e.g, table 1 and 2). However, we note that this only holds when AdamW is using a proper learning rate, which takes time to search or tune. Under the same condition (i.e. using K-group DLR), if we use grid search, the learning rate searching time for a standard AdamW would be $O(K^2)$. Our algorithm only takes $O(K/ \\\\Phi)$ and can be further compressed to $O(1)$ if we choose $\\\\Phi=O(k)$. Besides, Hi-DLR can analyze the parameter importance by PPI while using standard AdamW cannot achieve this.\\n\\n2. *In the paper, parameter groups are adjusted manually, which requires explanation. Additionally, for large models, the hyperparameter K warrants further discussion.* \\n\\nOur main focus is given a grouping, how to design proper DLR for it, instead of how to design the grouping. In this work, we also explore the second question in two ways: (1) We chose application areas that the grouping strategies come naturally, such as multi-task learning, NAM and PET, e.g. PET naturally has two groups, the frozen one and the trainable one. 
(2) We construct a large enough set in Section 5 (say K=6, which gives $2^6$ sub-groupings) and use PPI to select the optimal sub-grouping. These two ways can extend to larger models without reconsidering K. For instance, in Table 7, we select the grouping on GPT-small and directly apply to GPT-large.\\n\\nWe have added a new section to discuss our limitations and future directions in Appendix C. \\n\\n\\n3. *When group number K is a large number, the update period will increase as its learning rate update every \\\\phi iterations, which might influence the convergence.*\\n\\nWe agree the training time will increase as K becomes large, if we don't use other tricks. In our paper, we have experimented with K up to 40 in CelebA. For even larger K, one remediation is to use linearly larger $\\\\Phi$ as we explained in Section 3 \\\"When to derive\\\".\\n\\n*Could the authors provide the source code for reproduction?*\\nWe will release the code upon acceptance.\"}", "{\"comment\": \"I appreciate the authors' response to my questions. Some of my coments have been addressed, but I am still not convinced about the novelty and significance of this paper. Moreover, the algorithms that are compared with Hi-DLR, such as Prodigy and D-adaptation, are with theroetical guarantees (and if you check the papers that proposed Prodigy and D-adaptation, a significant proportion of the contents are to develop theoretical guarantees). Therefore, I feel that this paper still has significant disadvantages compared with existing works. Finally, I still feel that this paper needs a thorough revision before publication. I feel that the introduction section does not give a smooth logic, and descriptions/discussions about experiments are not very well written (for example, what do \\\"Constant\\\", \\\"Linear decay\\\", \\\"Cosine decay\\\" mean in Table 1, and what algortihm are these schedules referring to? I think this type of information has to be provided in the main paper).\"}", "{\"comment\": \"Thank you for your response.\\n\\nRegarding the evaluation of Hi-DLR, the results in Table 1 suggest that the performance gain brought by Hi-DLR is relatively limited. This raises the question of whether the 40% (i.e., 1/(1-0.3)-1) increase in training time is justified. To better understand and evaluate the practical benefits of Hi-DLR, it would be very helpful to see how the accuracy progresses with respect to wall-clock time, rather than just iterations. This comparison would provide a clearer picture of the trade-offs in real-world scenarios.\\n\\nThe reviewer plans to maintain the score for now and will discuss further with the other reviewers during the discussion phase before finalizing the evaluation.\"}", "{\"comment\": \"We thank the reviewer for the comments! We address them below and welcome more feedbacks. We would appreciate it if the reviewer could raise the score if satisfied.\\n\\n*Although everything diagonalized, the computational cost of the search is relatively high when fitting the second-order model, especially when $K$ is large.*\\n\\nWe have added a detailed complexity analysis in Appendix B. In a full-parameter training on a single GPU, the relative training time of Hi-DLR is $\\\\frac{1}{1+\\\\frac{4K}{3\\\\Phi}}$. We agree the training time will increase as K becomes large, if we don't use other tricks. In our paper, we have experimented with K up to 40 in CelebA. 
For even larger K, one remediation is to use linearly larger as we explained in Section 3 \\\"When to derive\\\".\\n\\n*I wonder if the algorithm can actually be designed more directly, for example instead of randomly sampled $\\\\eta$s, fix several points on K lines, and than estimate curvature, make the learning rate on a line the negative curvature, etc.*\\n\\nYes, there are some flexibility of designing the algorithm. We have tested fitting points $[-2,-1,1,2]*\\\\eta_k$ for $k=1,...,K$ and the results are indistinguishable. We'll add this to algorithm 1. \\n\\n\\n*The authors discussed the empirical comparison with other adaptive learning rate algorithms like prodigy and adaptation, etc. however, they did not discuss the relationship of their adaptation with previous adaptive algorithms in principle.*\\n\\nWe have mentioned Prodigy and Dadaptation in paragraph \\\"Automatic ULR\\\" in line 105. These methods in their current form only work in ULR. Hence, they cannot solve the DLR problem as we proposed.\\n\\n\\n*$d_k $ is used both as an update direction, and as a notation of dimensionality in your paper.*\\n\\nSorry for the confusion. $d_k$ represents the number of parameters in group k but $\\\\mathbf{d_k}$ (in bold) is an update direction. We'll change the notation in camera ready. \\n\\n*In Equation (3.1), and when fitting the quadratic function...*\\n\\nWe use the corrected/pre-conditioned gradient when using Adam. Yes, the second order Taylor expansion is on any direction. We haven't studied the relationship but we know our adaptive gradient is positively correlated to the corrected gradient, regardless of Adam's momentum, and adaptive schedule. To see this, assume $[g_1, g_2]$ is the gradient of Adam grouped into two groups. Then Hi-DLR leads to $[\\\\eta_1 g_1,\\\\eta_2 g_2]$. It is obvious that the inner product $\\\\eta_1 ||g_1||^2+\\\\eta_2 ||g_2||^2>0$ which always holds because Hi-DLR gives positive learning rates.\\n\\n\\n*For LoRA, the optimal learning rates ratios for matrix A and B can be fixed (Hayou et al., 2024) . How does your schedule interact with this ratio? If $\\\\lambda$ is fixed, then your algorithm cannot work further on Lora+?*\\n\\nOur algorithm is compatible with Lora+. If the $\\\\lambda$ is fixed, the optimal $\\\\eta$ can be found by Hi-ULR because the task changes from finding optimal $[\\\\eta_1,\\\\eta_2]$ to $[1, \\\\lambda]*\\\\eta$.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe hope you are satisfied with our point-to-point response. Please kindly let us know whether we can improve in the last day of rebuttal. It would be greatly appreciated if you could consider raising the score.\"}", "{\"comment\": \"Regarding the learning rate schedules, what I meant to ask was the base algorithm. Now I see that in the caption it was mentioned that the optimizer is AdamW so I have no more questions about it. However, I do feel that in general, additional paragraphs discussing and introducing some basic experiment setups may be very helpful.\\n\\nRegarding the introduction section, the current paper begins with several paragraphs that have bolded headings. This approach seems somewhat atypical to me. The opening sentences provide direct definitions of DLR, parameter group, and ULR, without offering any context or discussion. I believe it would be beneficial to improve this section by adding more context and a smoother introduction.\\n\\nRegarding your comment that \\\"Prodigy and Dadaptation only work in ULR. 
Hence, they cannot solve the DLR problem as we proposed.\\\", I find that it contradicts the comment at lines 77-78 \\\"we can view the adaptive optimizers including Adam as SGD with coordinate-wise DLR\\\". First of all, it is not very clear what do you mean by the \\\"DLR problem\\\". My understanding is that DLR is a concept highlighted in this paper that can potentially help improving efficiency, but I am not sure if it is appropriate to formulate any concrete \\\"DLR problem\\\". In addition, since Prodigy and Dadaptation have Adam versions, and Adam can be treated as coordinate-wise DLR, it does not seem straightforward that Prodigy and Dadaptation cannot solve the \\\"DLR problem\\\".\"}", "{\"comment\": \"Thank you for your reply.\\n\\n*the algorithms that are compared with Hi-DLR, such as Prodigy and D-adaptation, are with theroetical guarantees*\\n\\nAs we stated in the rebuttal and paper, methods like Prodigy and Dadaptation only work in ULR. Hence, they cannot solve the DLR problem as we proposed. \\n\\n*I feel that the introduction section does not give a smooth logic*\\n\\nCan you be more specific on where we can improve on this? Thanks.\\n\\n*descriptions/discussions about experiments are not very well written (for example, what do \\\"Constant\\\", \\\"Linear decay\\\", \\\"Cosine decay\\\" mean in Table 1, and what algortihm are these schedules referring to*\\n\\nThese are heuristic learning rate schedulers that are widely used in DL. We added a sentence to introduce them in the main text. And we kindly provide the following links for your reference:\\n\\n\\\"Constant\\\": a constant learning rate. \\n\\n\\\"Linear decay\\\": https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html\\n\\n\\\"Cosine decay\\\": https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html\\n\\nWe'll put these links in the appendix if needed.\\n\\nWe hope our answers can resolve some of your concerns.\"}", "{\"comment\": \"We thank the reviewer for all the constructive feedback and the well-grounded questions. We will address your comments one by one. We would sincerely appreciate it if the reviewer could provide more feedback or questions!\\n\\n1.*\\\"The idea is very similar to the derivation of Newton\\u2019s method\\\"* \\n\\nwe have built the connection between our method and Newton in line 179-181 but also stated the differences in line 182-190. The Newton\\u2019s method requires the inverse of Hessian, which is hard to compute in large-scale model trainings.\\n\\n*\\\"the approximation of the Hessian with the diagonal matrix is not novel either\\\"*\\n\\nWe would like to stress that we\\u2019re not diagonalizing Hessian/Fisher. We have stated many classic methods like Adam that diagonalize Hessian/Fisher by introducing preconditionings in line 182-190. As a comparison, our diagonalization trick is applied to gHg, which is an important componant of our Hessian-informed adaptive learning rate. Hence it can be combined with any general optimizer that may or may not use the diagonalization of Hessian/Fisher. We provide the following simplified formula to highlight the difference between our framework and others. 
(Equation 2.2, 2.3 give a more sophisticated expression of our diagonalization trick.)\", \"sgd\": \"$\\\\eta * I$\", \"newton\": \"$1*H^{-1}$\", \"adahessian\": \"$\\\\eta*diag(H)^{-1}$\\n\\nAdagrad/Adam: $\\\\eta*diag(P)^{-1}$\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\", \"hi_dlr_sgd\": \"$\\\\frac{Gg}{ diag(gHg)}*I$\", \"hi_dlr_adam\": \"$\\\\frac{Gg}{diag(gHg)}*diag(P)^{-1}$\\n\\n\\n2. From the mathematic viewpoint, our implementation on AdamW is $Gg/diag(gHg)*diag(P)^{-1}$. The entry-wise adaptive learning rates of AdamW is presented in $diag(P)$ and our method adds another level of control (by grouping the d entry-wise learning rates into K groups, each with a meta-learning rate that we design). In contrast, the traditional learning rate for Adam is one single meta-learning rate that lacks the degree of freedom.\\n\\nFrom the algorithmic viewpoint, our meta-framework takes an optimizer as a black box, as presented in our algorithm 1. We only use some extra forward functions to decide the optimal learning rates given the parameter grouping.\\n\\n3. Sorry we missed this. We kindly refer to Appendix A.6 for the experimental details of ViT classification. Please let us know if this is sufficient or if you have questions about the details of other experiments. We will release the code upon acceptance.\\n\\n4. We did experiment on pretraining one regression and one classification tasks in section 4.3. However, we are limited in computational resource for further pretraining experiments, which we leave for future work.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thanks. The reviewer will take this into final consideration.\"}", "{\"comment\": \"We thank the reviewer for the comments! We address them below and welcome more feedbacks. We would appreciate it if the reviewer could raise the score if satisfied.\\n\\n*Evaluation of Hi-DLR*\\n\\nThank you for mentioning this. We have added a detailed complexity analysis in Appendix B in our revised version. In a full-parameter training on a single GPU, the relative training time of Hi-DLR is $\\\\frac{1}{1+\\\\frac{4K}{3\\\\Phi}}$. For instance when $K=3,\\\\Phi=10$, Hi-DLR is 70\\\\% as fast as a base optimizer.\\n\\n*The authors state that they consistently observe existing PET methods selecting highly influential parameters, ... Why is this the case?\\nThe PPI metric is an estimation of parameter influence, so what is the corresponding ground truth for parameter influence? Without ground truth, how can we be certain that PET methods have indeed selected the highly influential parameters?*\\n\\nIt remains an open question (that may be out of the scope of this paper) why certain PET methods are effective and select highly influential parameters. Some theories have reasoned that the fine-tuning information is low-rank so a small portion of trainable model parameters can capture it. We are happy to include some references if the reviewer is interested. In deep learning, unfortunately we don't have the ground truth of the true parameter influence. Our approach is to first identify the highly influential parameters through our PPI metric, then determine which PET methods include these parameters. E.g. in Figure 7 third sub-plot, LoRA includes such parameters but BitFit does not. 
We then validate our determination on the small models by directly trying it on larger models (see Table 3,4), and if the new PET is effective (i.e. comparable to the full fine-tuning), then we are more confident about the selection.\"}", "{\"summary\": \"This paper proposes Hessian informed differential learning rates for deep learning, which is derived based on the second-order Taylor approximation of the loss function. Experiments on synthetic data and real data are conducted to demonstrate the effectiveness of the proposed learning rates.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors conducted extensive experiments covering multiple different settings, from synthetic data and toy objective functions to various advanced practical learning tasks.\", \"weaknesses\": \"1. The novelty of the proposed method is questionable. The idea is very similar to the derivation of Newton\\u2019s method, and the approximation of the Hessian with the diagonal matrix is not novel either, for example\\n\\nAndrei, N. A diagonal quasi-Newton updating method for unconstrained optimization. Numer Algor 81, 575\\u2013590 (2019). https://doi.org/10.1007/s11075-018-0562-7\\n\\n2. The logic of the proposed method is not convincing. The authors mentioned in the caption of Table 1 that they apply the proposed learning rates to AdamW. However, the proposed learning rates are derived based on equations (2.1) and (2.3), where the vector $\\\\mathbf{d}$ in (2.1) is replaced by $ \\\\boldsymbol{\\\\eta}\\\\_{[K]} \\\\mathbf{g}\\\\_{[K]}^{\\\\mathrm{optim}} $. If the learning rates are applied to AdamW, why shouldn\\u2019t we set the vector $\\\\mathbf{d}$ in (2.1) to be the actuarial difference between iterates of AdamW, taking the entry-wise adaptive learning rates of AdamW into consideration? I think the notation $ \\\\mathbf{g}\\\\_{[K]}^{\\\\mathrm{optim}} $ is not explained well either. I suggest that the authors should clarify how exactly their method integrates with AdamW.\\n\\n3. The paper does not provide sufficient experimental details. Although some are provided in the appendix, the information provided are insufficient to reproduce the results. For example, I do not find any experimental details about the training of the ViT. For example, did the authors consider data augmentation? What is the batch size for training ViT, and what exact version of ViT is used on the various data sets? (e.g., what is the classifier, what are the dimensions of heads, is dropout used, how many patches are each image split into, etc). The authors provide no codes either. \\n\\n4. For relatively large models, as far as I can tell, the paper only considers applying the proposed learning rates for fine-tuning. It is important to also demonstrate the performance of the proposed method in pre-training. Therefore, I suggest that the authors should include experiments on pre-training certain models such as ViTs.\\n\\n5. The presentation of the paper is not clear enough, and the paper needs some significant revision. As mentioned in the points above, there are notations and experimental setups that are not explained clearly. Moreover, I feel that this paper lacks a proper introduction section.\", \"questions\": \"I suggest that the authors should respond to the weaknesses pointed out above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your quick response. 
We added Figure 14 that compared Hi-DLR's performance with a cosine decay ULR (which cannot be easily tuned as it requires grid search for the peak lr) in wall-clock training time. We observe that the training time overhead is 15\\\\% (would vary in different models/datasets), which is well-characterized by our new complexity analysis in Appendix B (which states the upper bound of overhead to be 16%). Additionally, we kindly remind that even if Hi-DLR has a similar performance than the optimal ULR, it allows an additional advantage of parameter importance that has been highlighted in Sec 5, without extra overhead.\\n\\nWe hope the reviewer can consider raising the score or let us know how we can further improve!\"}", "{\"comment\": \"We appreciate your feedback on the DLR narratives and have revised the presentation from the angle of \\\"degree of freedom\\\". Prodigy and Dadaptation on Adam are studying the DLR problem at the degree of freedom 1. However, our method can study the DLR problem at the degree of freedom that is larger than 1.\\n\\nTo illustrate this briefly, we consider vanilla SGD and SignSGD (which is a special case of Adam) on 2 parameters.\\n\\nSGD(g): $w_t-w_{t+1}=\\\\eta g=[\\\\eta g_1,\\\\eta g_2]$\\n\\nSignSGD(g): $w_t-w_{t+1}=\\\\eta sign(g)=[\\\\eta sign(g_1),\\\\eta sign(g_2)]$\\n\\nIn SignSGD, the learning rate is $\\\\eta$, which applies to $sign(g)\\\\in R^2$; equivalently, this SignSGD-ULR can be viewed as SGD-DLR, which applies $\\\\eta_1:=\\\\eta/|g_1|$ to $g_1$ and $\\\\eta_2:=\\\\eta/|g_2|$ to $g_2$. Hence SignSGD is a special case of SGD with coordinate-wise learning rates. However, the two learning rates $\\\\eta_1,\\\\eta_2$ are governed by one single hyperparameter $\\\\eta$, meaning **the degree of freedom in hyperparameters is always 1** even in the DLR method. If we denote the degree of freedom in parenthesis, then Adam-ULR(1) is equivalent to SGD-DLR(1).\\n\\nIn contrast, our formulation in Line 147, is studying SGD/Adam-DLR(K), with multiple degress of freedom in hyperparameters that Hi-DLR can give suggestion on (again, Prodigy/D-adaptation only gives suggestion on 1 $\\\\eta$, hence degree of freedom is 1).\\n\\nWe reformulate the definition of the \\\"DLR problem\\\" that we are actually solving in section 2.2. Please let us know if it is clear now.\"}", "{\"summary\": \"The paper introduces a novel method called Hessian-informed Differential Learning Rate (Hi-DLR) to optimize the training of neural networks by adapting learning rates based on the curvature of the loss function, which enhances convergence across various tasks. It highlights the limitations of existing Parameter-Efficient Tuning (PET) methods, demonstrating that no single PET method is universally effective, as performance varies significantly with different datasets and model architectures. 
The authors propose a flexible, model-agnostic meta-framework that adaptively selects the most effective PET methods and parameter groups based on their Per-Parameter Influence (PPI) during training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents the Hessian-informed Differential Learning Rate (Hi-DLR) method, which enriches the approximation of Hessian information to leverage the varying loss curvature of different parameters through adaptive learning rates, enhancing the training efficiency of deep learning models .\\n\\nThe authors propose an efficient algorithm for computing Hi-DLR, incorporating a novel diagonalization technique that significantly reduces computational costs while effectively separating the contributions of different parameter groups, thus facilitating faster training without sacrificing performance .\\n\\nThe paper introduces a flexible, model-agnostic meta-framework for Parameter-Efficient Tuning (PET) that utilizes per-parameter influence to dynamically select trainable parameters. This adaptive approach allows for improved performance across various tasks and models, addressing the limitations of existing PET methods\", \"weaknesses\": \"1. This method includes other time-consuming steps and is incomplete without measurement. Under the same conditions, would a standard AdamW approach achieve a similar result?\\n\\n\\n2.The effectiveness of Hi-DLR depends on appropriate parameter grouping; suboptimal groupings can lead to less effective learning rate adjustments, potentially hindering performance. In the paper, parameter groups are adjusted manually, which requires explanation. Additionally, for large models, the hyperparameter K warrants further discussion.\\n\\n3.When group number K is a large number, the update period will increase as its learning rate update every \\\\phi iterations, which might influence the convergence.\", \"questions\": \"Could the authors provide the source code for reproduction?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a differential learning rate strategy called Hi-DLR. The algorithmic novelty of Hi-DLR lies in diagonalizing the Hessian matrix $H$ and extending the first-order approximation of $\\\\mathbf{G}_t^{\\\\top} \\\\mathbf{g}_t^{\\\\text{optim}} / \\\\left(\\\\mathbf{g}_t^{\\\\text{optim}}\\\\right)^{\\\\top} \\\\mathbf{H}_t \\\\mathbf{g}_t^{\\\\text{optim}}$ from [1] from ULR to Differential Learning Rate (DLR). The authors also adopt a per-parameter influence derived from Hi-ULR to select influential parameters for parameter-efficient training. From the reviewer's perspective, this definition of parameter influence is novel, although it is the optimal solution of equation 3.1 with a diagonalized version of the Hessian matrix. The reviewers welcome discussions with other reviewers and the Area Chair to determine if this definition of influence demonstrates conceptual novelty.\\n\\n[1] Automatic gradient descent with generalized Newton\\u2019s method. 
arXiv, 2024.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-motivated and designed to create suitable learning rates for different parameters.\", \"The authors also provide applications of the proposed Hi-DLR to NAM and LoRA.\", \"The per-parameter influence metric is interesting.\"], \"weaknesses\": \"**Evaluation of Hi-DLR**: All training loss figures in this paper are plotted with respect to iterations (e.g., Figures 4 and 5). Could the authors provide wall-clock time comparisons between Hi-DLR and baseline methods, in addition to the iteration-based plots. This would help demonstrate whether Hi-DLR provides real-world speedups.\", \"questions\": \"The authors state that they consistently observe existing PET methods selecting highly influential parameters, which have approximately ($10^4 \\\\times$ ) higher PPI than the majority of model parameters. Why is this the case? The PPI metric is an estimation of parameter influence, so what is the corresponding ground truth for parameter influence? Without ground truth, how can we be certain that PET methods have indeed selected the highly influential parameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The main idea of this paper is to use different learning rates for different parameter groups. This is a common knowledge among optimizer designs that for different part of the model, e.g., the weight, bias, head, norm, etc., need different learning rates, however, how to search for the best learning rates remains open. This paper proposes to use the second-order approximate of the loss function to solve for the adaptive learning rates. In addition, the second-order approximation are done by regression on random sampled directions. The proposed algorithm significantly enhances training efficiency. Moreover, the paper introduces a per-parameter influence index, which identifies the most influential parameters, facilitating more efficient parameter-specific training.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Well-written, the intuition of this paper is clear, if the shape of a convex optimization surface is given, we can obtain the optimal learning rates for each directions directly. This can be done on top of any gradient momentum, and preconditioning regularizations.\\n\\nAlthough the dimension of the parameters are large, the authors pick a few directions in the space (K directions) to get a low dimensional second-order approximation for learning rate searches.\\n\\nThe idea is simple but effective.\\n\\nThe authors also designed a per-parameter influence, that can differentiate the most influencial parameters for training. By freezing all other parameters under a PPI threshold, the authors can largely reduce the training cost of models, while preserving most of the testing accuracies.\", \"weaknesses\": \"Although everything diagonalized, the computational cost of the search is relatively high when fitting the second-order model, especially when $K$ is large.\\n\\nIt seems to me that the algorithm in principle is partly based on search, that you first move towards K directions defined by the split of K parameter groups, and than decides the curvature of each direction (fit a second order function) to get a learning rate on the parameter groups. 
I wonder if the algorithm can actually be designed more directly, for example instead of randomly sampled $\\\\eta$s, fix several points on K lines, and than estimate curvature, make the learning rate on a line the negative curvature, etc. \\n\\nThe authors discussed the empirical comparison with other adaptive learning rate algorithms like prodigy and adaptation, etc. however, they did not discuss the relationship of their adaptation with previous adaptive algorithms in principle.\", \"questions\": \"$d_k$ is used both as an update direction, and as a notation of dimensionality in your paper.\\n\\nIn Equation (3.1), and when fitting the quadratic function, are you using the gradient calculations, or the corrected gradients like the momentum gradients in Adam? Does that mean second order Taylor expansion on any direction? It would be better if the authors could discuss how does their adaptive gradient interact with Adam's momentum, and adaptive schedule. Are they completely orthogonal, or how they influence each other?\\n\\nFor LoRA, the optimal learning rates ratios for matrix A and B can be fixed (Hayou et al., 2024) . How does your schedule interact with this ratio? If $\\\\lambda$ is fixed, then your algorithm cannot work further on Lora+?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
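The rebuttal in the record above compares the Hi-DLR update with SGD, Newton, AdaHessian and Adam in closed form, and describes picking each parameter group's learning rate by probing the loss along that group's update direction with a few extra forward passes and fitting a one-dimensional quadratic. The following is a minimal illustrative sketch of that general idea only; it is not the Hi-DLR implementation from the submission, and all names (`fit_group_lr`, `probe_etas`, the toy loss) are hypothetical.

```python
import numpy as np

def fit_group_lr(loss_fn, params, direction, group_mask, probe_etas=(-2e-2, -1e-2, 1e-2, 2e-2)):
    """Fit loss(params - eta * masked_dir) ~ a*eta^2 + b*eta + c and return the vertex."""
    masked_dir = direction * group_mask              # keep only this group's coordinates
    etas = np.array(probe_etas)
    losses = np.array([loss_fn(params - eta * masked_dir) for eta in etas])
    a, b, _ = np.polyfit(etas, losses, deg=2)        # a few extra forward passes, no backprop
    if a <= 0:                                       # probe looks non-convex: fall back to a tiny step
        return min(probe_etas, key=abs)
    return float(-b / (2.0 * a))                     # minimizer of the fitted parabola

# Toy check: a quadratic loss whose two parameter groups have very different curvature.
H = np.diag([100.0, 100.0, 1.0, 1.0])
loss = lambda w: 0.5 * w @ H @ w
w = np.ones(4)
g = H @ w                                            # use the gradient as the update direction
masks = [np.array([1.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 1.0])]
print([round(fit_group_lr(loss, w, g, m), 4) for m in masks])   # ~[0.01, 1.0]
```

The toy check shows the intended behaviour: the stiff group (curvature 100) receives a learning rate roughly 100 times smaller than the flat group, which is the kind of per-group differentiation a single shared learning rate cannot express.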
BjZP3fTlVg
Efficiently Deploying LLMs with Controlled Risk
[ "Michael J. Zellinger", "Matt Thomson" ]
Deploying large language models in production requires simultaneous attention to efficiency and risk control. Prior work has shown the possibility to cut costs while maintaining similar accuracy, but has neglected to focus on risk control. By contrast, here we present hierarchical chains with multi-level abstention (HCMA), which use model-intrinsic uncertainty to delegate queries along the LLM intelligence hierarchy, enabling training-free model switching based solely on black-box API calls. Our framework presents novel trade-offs between efficiency and risk. For example, deploying HCMA on MMLU cuts the error rate of Llama3 405B by 30\% when the model is allowed to abstain on 20\% of the queries. To calibrate HCMA for optimal performance, our approach uses data-efficient logistic regressions (based on a simple nonlinear feature transformation), which require only 50 or 100 labeled examples to achieve excellent calibration error (ECE), cutting ECE by 50\% compared to naive Platt scaling. On free-form generation tasks, we find that chain-of-thought is ineffectual for selective prediction, whereas zero-shot prompting drives error to 0\% on TruthfulQA at high abstention rates. As LLMs are increasingly deployed across computing environments with different capabilities (such as mobile, laptop, and cloud), our framework paves the way towards maintaining deployment efficiency while putting in place sharp risk controls.
[ "natural language processing", "selective prediction", "uncertainty quantification", "large language models", "compound AI systems" ]
https://openreview.net/pdf?id=BjZP3fTlVg
https://openreview.net/forum?id=BjZP3fTlVg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "hxf1wAgDSn", "btw80gjq5K", "MDgNiXDaqQ", "C9Eg6yNpOB" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730688592502, 1730122328296, 1732755651393, 1730702100265 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9881/Reviewer_AoHH" ], [ "ICLR.cc/2025/Conference/Submission9881/Reviewer_31BM" ], [ "ICLR.cc/2025/Conference/Submission9881/Authors" ], [ "ICLR.cc/2025/Conference/Submission9881/Reviewer_Pd96" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a new algorithm to route a given query to the model chain comprised of models of the same family but varying sizes such as 1B, 7B, 13B etc. These models are arranged in the increasing order of model size. As inference on larger models is expensive, the query is first routed it to the smallest model in the chain. This model then decides if it should abstain from answering the query altogether on behalf of all the models in the chain, or if it should delegate the query to the next larger model or if it is should answer the query. This routing is determined by the probability of the correctness of that model. If the probability is less than the rejection threshold, then it abstains from answering on behalf of all the entire chain, if it is between the rejection and the acceptance threshold, it delegates the query to the next model in the chain and if it is greater than the acceptance threshold, it answers the query. Thus by using smaller LLMs whenever possible in lieu of using the largest model all the time, they reduce the inference time and the cost for each query.\\nIn order to make sure that the probability computation is calibrated, they employ Platt scaling and adapt it to avoid clustering around probability value of 1.0.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The related work is covered in great detail.\\n2. This paper tries to reduce the cost by utilizing smaller LLMs if they answer the given query correctly rather than always using larger LLMs for each query. They delegate the more difficult queries to larger LLMs or abstain from answering the query altogether if they are not confident enough.\", \"weaknesses\": \"While the problem they are tackling is quite relevant, the paper lacks sufficient experiments and baselines to demonstrate the efficacy of the proposed method. I have listed a few of my concerns below.\\n1. How does the modified Platt scaling work in comparison to other uncertainty quantification and probability calibration techniques such as semantic entropy (Kuhn et al.), P_true (Kadavath et al., 2022), Eigen values, Degree, Eccentricity (Lin et al. 2024) and other works listed in the uncertainty quantification part of the related work section. While the authors are performing probability calibration, it is also comparable to uncertainty estimation as the query can be rejected when the uncertainty is high.\\n2. While you showed the results for various threshold values on one dataset, in a real world scenario, how would one go about in setting the acceptance, rejection threshold values so that it works well for various kinds of queries?\\n3. In order to demonstrate the generalization, please evaluate the figure 3 and table 1 on more datasets such as helaswag (https://huggingface.co/datasets/Rowan/hellaswag), SQUAD (https://huggingface.co/datasets/rajpurkar/squad) etc and other families of models such as Mistral, Flan-t5, Gemma etc. 
It is also important to verify if this algorithm works when the differences between model sizes are large or if it would work when the model sizes are opt 350m, opt 1.3b, opt 2.7 b etc.\\n4. Could you also tabulate the regret wrt accuracy and cost? That is given a model chain comprised of llama 8B, 70B and 405B, we need the ground truth of the smallest possible model that could answer the question and if the model chain should reject the query. Based on this ground truth, you could compute the error and the cost introduced by routing it the wrong model. If the cost is lower than the ground truth, then u can use 0. Only routing it to more expensive models would be penalized. While the plot in Figure 3 demonstrates if the delegated model is correct or not, we are interested in understanding if the algorithm is indeed routing it to the right model.\", \"questions\": \"1. In general, if more than one family of LLMs exist of various sizes, such as Mistral 7B, llama 2 7B, Gemma 2b, Opt models listed above as well, how would this work generalize to that setting?\\n2. As this kind of delegation is more expensive during runtime than training a meta-model to route the query to right model, can you elaborate what are the scenarios where this method could be preferred over that? Perhaps this generalizes better on the domains that are out of the domain of the meta-model\\u2019s training data? It would also be nice to bolster your argument with relevant empirical evidence.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a cascade of LLMs where each stage can either reject an input, defer it to the next larger LLM, or answer it, depending on their answer token/P(True) probability. The probability is recalibrated using logistic regression and nonlinear transformations. The cascade shows a more favorable error-per-cost tradeoff than using single LLMs on MMLU.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The topic of using model cascades to cut costs is of practical relevance.\", \"The paper uses cross-validation across 100-500 seeds.\"], \"weaknesses\": [\"The font size is reduced and the margins are made smaller. This is a potential breach of the ICLR guidelines.\", \"Conceptually, I cannot follow why it is required to recalibrate the LLM token / P(True) probabilities via a logistic regression with a nonlinear transformation of probabilities. All that the method uses in the end are the two threshold values. Since all transformations are monotonic, the thresholds could have also been computed for original probabilities.\", \"There is no comparison against the routing / abstained prediction SOTA that the paper cites.\", \"The method works only on one dataset (MMLU) and one cascade of models (Llama 3 models). It fails on TruthfulQA.\", \"The method requires searching 39^5 (= 90M) hyperparameters. This could be greatly reduced by excluding impossible combinations and using Bayesian optimization.\", \"Figure 1 has an odd choice of the x-axis values (0, 0.86, 0.982, 0.998, 1.0, 1.0, 1.0, 1.0, 1.0) and does not show any data, just the estimate logistic curves. This makes it hard to tell if the data actually supports the interpretation made from the figure, namely that \\\"differently sized models share a common notion of difficulty\\\"\", \"Some statements are heavily marketed and oversold. 
E.g.,\", \"\\\"we introduce a nonlinear feature transformation that makes Platt scaling a highly effective calibration technique for LLM token probabilities, providing an alternative to temperature scaling grounded in a rigorous statistical model\\\"\", \"\\\"which require only 50 or 100 labeled examples to achieve excellent calibration error (ECE), cutting ECE by 50% compared to naive Platt scaling\\\", I'd advice to remove \\\"excellent\\\" and \\\"only\\\" to make this more objective, since a remaining ECE of 0.05-0.07 is far from excellent. I'd suggest to change 50% to \\\"between 17% and 55%\\\".\", \"I cannot follow why an arbitrary nonlinear transformation is labeled as statistically grounded (\\\"our nonlinear transformations make Platt scaling much more effective in calibrating LLM output probabilities, yielding a statistically grounded way of performing LLM calibration\\\". Because of the follow-up logistic regression? A logistic regression has no guarantees to produce calibrated values, it just minimizes its loss)\", \"There is no code released that would allow to replicate the experiments.\", \"It would be beneficial to report standard deviations in Table 1, since you already used multiple seeds (which is nice!)\", \"Proposition 1 lacks notation. Is $1_D$ a vector across all decision, with each entry being 1 or 0?\", \"Small notes that did not influence my score and don't need to be rebuttled, I just note them to make the camera-ready better:\", \"Besides font size and margins, I would suggest to reformat the figures. If you could give them a uniform text size, correct aspect ratio, and potentially use tikz where applicable, removing titles from figures (and put them into the captions). That would improve the presentation a lot.\", \"Reformat equation 9\", \"Reformat page 8\", \"The reference section is misformatted (\\\"Can llms express their un-\", \"certainty? an empirical evaluation of confidence elicitation in llms\\\"). Consider adding double brackets to the .bib entries.\", \"Typo in line 21: yields drive\", \"Typo in line 249: hafve\", \"It's more common in ML literature to have the contributions (\\u00a73) be part of the introduction (\\u00a71), that would make it easier to digest for a quick reader\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to thank the reviewers for their feedback. We have decided to withdraw this submission.\"}", "{\"summary\": \"The paper introduces Hierarchical Chains with Multi-Level Abstention (HCMA), a framework aimed at improving both efficiency and risk control in deploying LLMs.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The proposed HCMA method operates independently of model weights, which allows it to function within API-based LLM query setups.\", \"weaknesses\": [\"### Unclear Motivation\", \"The motivation for a method that addresses both efficiency and risk control in LLM deployment simultaneously is not clearly explained. 
It is unclear why existing methods addressing efficiency or risk control separately are insufficient.\", \"The rationale behind the HCMA approach requires clarification.\", \"The paper would benefit from a stronger scientific argument that demonstrates a common challenge in efficiency and risk control in LLM deployment, justifying the simultaneous attention to efficiency and risk control of the proposed method.\", \"### Questionable Evaluation\", \"The paper\\u2019s evaluation of \\u201crisk control\\u201d is primarily based on performance metrics from tasks like MMLU. This choice raises questions about how HCMA\\u2019s risk control distinguishes itself from other methods that optimize efficiency through similar performance-cost tradeoffs.\", \"No baselines from related works are included, limiting the ability to benchmark HCMA\\u2019s effectiveness against existing approaches.\", \"### Confusing Presentation\", \"In Figure 1, the y-axis appears to change despite a fixed x-axis value of 1.0 on the right. The basis for this plot needs further explanation: Is it an extrapolation based on several sample points?\", \"The text references numerous terms without adequate explanation or citation, such as \\u201cabstention rate\\u201d (L21), \\u201cbased on hidden layer embeddings, repeated sampling, and neural-network correctness predictors\\u201d (L49), \\u201cuncertainty-based delegation\\u201d (L92), and \\\"... Platt scaling ... calibration technique\\\" (L93). A clearer introduction to these terms would enhance readability.\", \"Table 1 lacks a comprehensive analysis of the presented results, leaving the interpretation of findings ambiguous.\", \"In Figure 3, the x-axis for $/Mtok is presented as a variable, though it is typically a fixed cost for each specific model. This discrepancy requires clarification.\"], \"questions\": \"see the questions mentioned above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
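The reviews in the record above all paraphrase the same two-threshold delegation rule: each stage abstains for the whole chain below a rejection threshold, answers at or above an acceptance threshold, and otherwise defers to the next larger model. A minimal sketch of that rule is given below, assuming a calibrated per-stage P(correct) estimate is available; the `Stage`/`route` names, the models, and the threshold values are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Stage:
    name: str
    answer: Callable[[str], str]        # query -> model answer (e.g. a black-box API call)
    confidence: Callable[[str], float]  # query -> calibrated probability the answer is correct
    reject_below: float                 # abstain for the whole chain below this value
    accept_above: float                 # answer directly at or above this value

def route(query: str, chain: List[Stage]) -> Optional[str]:
    """Walk the chain from smallest to largest model; None means the chain abstains."""
    for i, stage in enumerate(chain):
        p = stage.confidence(query)
        if p < stage.reject_below:
            return None                               # abstain on behalf of the entire chain
        if p >= stage.accept_above or i == len(chain) - 1:
            return stage.answer(query)                # confident enough, or no larger model left
        # otherwise fall through: delegate to the next, larger model
    return None

small = Stage("llama-8b",  lambda q: "answer from 8B",  lambda q: 0.55, reject_below=0.30, accept_above=0.90)
large = Stage("llama-70b", lambda q: "answer from 70B", lambda q: 0.80, reject_below=0.30, accept_above=0.90)
print(route("example query", [small, large]))          # delegated once, then answered by the larger model
```

In this sketch the last stage answers whenever it does not reject, since there is no larger model to defer to; whether the actual paper handles the final stage this way is an assumption of the illustration.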
BiymAD5ETK
On last-iterate convergence of distributed Stochastic Gradient Descent algorithm with momentum
[ "Difei Cheng", "Ruinan Jin", "Hong Qiao", "Bo Zhang" ]
Distributed Stochastic Gradient optimization algorithms are studied extensively to address challenges in centralized approaches, such as data privacy, communication load, and computational efficiency, especially when dealing with large datasets. However, convergence theory research for these algorithms has been limited, particularly for distributed momentum-based SGD (mSGD) algorithms. Current theoretical work on distributed mSGD algorithms primarily focuses on establishing time-average convergence theory, whereas last-iterate convergence—considered a stronger and more practical definition than time-average convergence—has yet to be thoroughly explored. In this paper, we aim to establish the last-iterate convergence theory for a class of distributed mSGD algorithms with a decaying learning rate. First, we propose a general framework for distributed mSGD algorithms. Within this framework and under general conditions, we have proven the last-iterate convergence of the gradient of the loss function for a class of distributed mSGD algorithms. Furthermore, we have estimated the corresponding last-iterate convergence rate under supplementary conditions. Moreover, we theoretically prove that in the early stage, adding a momentum term can make the iterations converge more rapidly to a neighborhood of the stationary point. Some experiments are provided to illustrate the theoretical findings.
[ "stochastic optimization", "convergence analyse", "distributed", "momentum" ]
https://openreview.net/pdf?id=BiymAD5ETK
https://openreview.net/forum?id=BiymAD5ETK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYAljR1W0u", "q8rOOJFjoY", "FLWxiZuDp0", "BGhurJmb0J", "5Ows9FtHOK" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730859377766, 1730666130639, 1731473299551, 1730907532348, 1729503080976 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13490/Reviewer_9eGj" ], [ "ICLR.cc/2025/Conference/Submission13490/Reviewer_zwEu" ], [ "ICLR.cc/2025/Conference/Submission13490/Authors" ], [ "ICLR.cc/2025/Conference/Submission13490/Reviewer_kwt1" ], [ "ICLR.cc/2025/Conference/Submission13490/Reviewer_ETGm" ] ], "structured_content_str": [ "{\"summary\": \"This paper provides a general framework for decentralized SGD with local momentum steps. The authors provide last-iterate convergence analyses and analysis on the effect of momentum coefficient. Experiments demonstrate the effects of momentum coefficients.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper studies an important problem, that is the last-iterate convergence, in contrast to randomized or (min of so far) type of convergence, last-iterate can save computation and is more relevant to practice.\\n2. The paper provides detailed convergence analyses and characterizations of the effects of the momentum conefficients.\", \"weaknesses\": \"1. The literature review part misses some relevant works. In reviewing decentralized SGD, the authors reviewed existing works of decentralized SGD with single local update (line 94-95), and it seems that the authors are missing relevant works that allow multiple local updates such as [1] , and [2] with additional gradient tracking.\\n2. To establish last iterate convergence, this paper assumes bounded stochastic gradient, which is kind of strong, given that even the milder condition of gradient similarity, is not assumed in methods like SCAFFOLD [3], and gradient tracking [2]. What challenges might arise in extending the current analysis under relaxed conditions?\\n3. Could the authors clarify if Lemma A.2 requires convexity? If not, how does it hold for non-convex functions like $\\\\sin x$?.\\n4. The writing of this paper sometimes is confusing to me that some terms coming up without definition. See my questions. \\n5. Could the authors provide a more detailed explanation of how Theorem 2.3 specifically relates to acceleration in the early stages, perhaps with an illustrative example or intuition, since it seems to it holds for all $n$?\\n6. The bounded averaged iterate assumption in Assumption 2.4 also seems to me very strong, the boundedness of the iterates should be the result of analysis, not a prior assumption. Is this assumption common in related works and how can one relax this assumption possibly? \\n7. The framework is proposed to subsume three special cases, so I would suggest that the authors include experiments for the other two cases or explain why chose to focus on only one case in the experiments. This would help readers better understand the generality and applicability of the proposed framework.\\n\\nReferences\\n1. Li, X., Yang, W., Wang, S., & Zhang, Z. (2019). Communication efficient decentralized training with multiple local updates. stat, 1050, 21.\\n2. Ge, S., & Chang, T. H. (2023, December). Gradient Tracking with Multiple Local SGD for Decentralized Non-Convex Learning. In 2023 62nd IEEE Conference on Decision and Control (CDC) (pp. 133-138). IEEE. \\n3. Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. 
T. (2020, November). Scaffold: Stochastic controlled averaging for federated learning. In International conference on machine learning (pp. 5132-5143). PMLR.\\n4. Beck, A. (2017). First-order methods in optimization. Society for Industrial and Applied Mathematics.\", \"questions\": \"1. Can you explicitly add the additional condition to your contribution bullet point 2?\\n2. I think the the second part of assumption 2.1.4 is implied by the boundedness assumptions in 2.1.3, assuming that the authors mean Frobenius norm for the matrix represented stochastic gradients. \\n3. In assumption 2.4, what's the $u$ here? Do you mean for any $u \\\\in \\\\mathbb{R}^d$? Can I take $u = (1/m)\\\\mathbf{1}$, then as long as the global average is bounded during the optimization process, then we can obtain a last iterate rate for the global average? \\n4. In Theorem 2.3, do you mean $x_n^{(i)}$ or $\\\\bar{x}_n$, not clear to me. What is $V_0$?\\n5. In Theorem 2.3, by bounded, do you mean lower bounded or upper bounded?\\n6. Can the authors briefly discuss what tricks used in this paper enable the analysis of last iterate convergence?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper develops last-iterate convergence theory for distributed momentum-based SGD (mSGD) algorithms with a decaying learning rate, addressing limitations of time-average convergence in existing work. The paper established asymptotic convergence of the gradient norm under bounded gradient and Lipschitz continuous gradient assumptions. It also establishes convergence rate results under more strengthened assumptions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"I agree with the paper\\u2019s point that it is a limit of many existing works that the convergence and convergence rates results are often only for average gradient norm or similar. So the motivation and potential of the work seems solid. The paper is also well written and the mathematical results seem correct.\", \"weaknesses\": \"a) To me, the results are not significant enough for justifying a publication in a top ML conference. Firstly, the main results Theorem 2.1 do not provide convergence rate results, it is just asymptotic convergence that is ensured. The convergence rate results in Theorem 2.2 are almost useless, see my comment c). Given that convergence and convergence rate is already established for these algorithms, just showing that the last iterate converges feels marginal.\\n\\nb) Assumption 2.1(b), which states that the gradient is bounded, is quite strong; however, it is not the reason for my low score. But it means, e.g., that quadratic and strongly convex functions are not covered by the analysis.\\n\\nc) The convergence rate results in Theorem 2.2 are under very strange assumptions. Firstly, Assumption 2.4 cannot be checked before running the algorithm, so it seems quite useless. Secondly, by Assumption 2.3 the function should be convex and have a unique optimizer. But it cannot be strongly convex, since by Assumption 2.1 the gradient should be bounded. I am not sure what functions satisfy these assumptions. \\n\\nd) I don\\u2019t see the point with the experiment. The algorithms have already been studied and shown to converge, and numerically investigated, it is unclear what is the message? 
Also there any baselines and it is unclear if the assumptions of the theorems are satisfied for the considered setups.\", \"questions\": \"In Theorem 2.3, I don\\u2019t understand the definition of \\\\tau^{(a_0)}. I am guessing there is a typo, it should be the set of indexes where the condition holds? Or, alternatively, argmin.\\n\\nIt is difficult to read the results in figures 1 and 2. Firstly, the legends are so small, it is impossible to read without significantly zooming in. Secondly, the figures appear to be excessively large; when I zoom in, my computer struggles to handle the display smoothly, nearly causing it to freeze. \\n\\nIn section 2.4, given that the function is convex with a unique optimal solution, why consider only convergence of the gradient? It should be possible to translate these bounds to ||\\\\theta^k-\\\\theta^|| or objective function value f(\\\\theta^k)-f*. Also, why use \\\\theta for objective value in Section 2.4, when x was used everywhere else?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"I am writing to request the withdrawal of our paper due to the presence of numerous typographical errors that have significantly impacted the clarity and accuracy of the content.\\n\\nWe understand the importance of maintaining the integrity and reliability of the scholarly record, and we believe that the best course of action is to retract the article to prevent the dissemination of incorrect information. We apologize for any inconvenience this may cause and appreciate your understanding in this matter.\\n\\nFurthermore, we will take steps to thoroughly review and correct the manuscript before considering resubmission to your esteemed journal or another appropriate publication venue.\\n\\nThank you for your attention to this matter.\"}", "{\"summary\": \"The paper addresses distributed stochastic optimization and introduces a framework for distributed momentum SGD (mSGD) by integrating momentum steps into existing distributed SGD algorithms. The authors provide theoretical results, establishing last-iterate convergence for convex objectives. Additionally, numerical experiments are presented to validate the theoretical findings.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The paper establishes last-iterate convergence for distributed stochastic optimization, which is a stronger guarantee than the time-average mean-square convergence.\", \"The paper demonstrates that incorporating a momentum term can accelerate the algorithm's rate of convergence during the early stages of optimization.\"], \"weaknesses\": [\"The proposed framework lacks sufficient novelty. It essentially extends existing algorithms by adding momentum steps, which does not introduce a significant new contribution.\", \"The last-iterate convergence results and the observation that the momentum term can accelerate the algorithm in the early stages are not novel and can be traced back to the work of Jin et al. (2022b). 
It appears that the extension is merely a straightforward modification without the introduction of any new techniques or insights..\", \"Limited theoretical scope: The analysis is restricted to convex functions, which significantly limits the generalizability of the results.\", \"Assumption 2.4 appears overly restrictive and may not hold in many practical scenarios.\", \"The experiments presented in the paper are overly simplistic and lack sufficient depth. The experiment only tests one kind of algorithms and its performance under different $\\\\alpha$.\", \"The clarity of the presentation needs significant improvement. There are numerous typographical errors and unclear formulations throughout the paper\", \"[1] On the convergence of mSGD and AdaGrad for stochastic optimization. In International Conference on Learning Representations, 2022b.\"], \"questions\": [\"It appears that the result presented in Theorem 2.2 does not lead to a linear speedup. Could the authors provide further clarification or additional analysis to explain why this is the case\", \"Refer to the weakness part.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper gives last-iterate convergence results for SGD with momentum in the distributed setting. The two main algorithms studied are D-PSGD and EASGD, but basically any algorithm which fits in the form\\n- update the momentum at each step,\\n- apply the momentum at each step, possibly with (partial/local) averaging,\\nas described by Eq (8) in the paper, is analyzed. Results for convergence in gradient norm are first given (without rates), then a function value result (with rates) is given. Finally, a last result on the hitting time for level sets of gradient norm is given, to demonstrate that the probability that this hitting time is above a given threshold reduces when the momentum increases. Experimental results compare the performance of distributed SGD when varying the momentum parameter and the averaging frequency.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Extends last-iterate convergence results\", \"Seems to fix issues in existing results (paragraph after Theorem 2.1).\"], \"weaknesses\": [\"Many typos (highlighted a few) and loosely defined quantities, which make the paper hard to read.\", \"Results are derived under (very) strong assumptions, assuming both Lipschitzness and boundedness of the gradients for instance, but also that iterates remain bounded almost surely (Assumption 2.4). This should be proven, as it depends on the algorithms, not assumed.\", \"Results are very coarse, in the sense that the impact of the distributed aspect is hardly investigated. Theorems give the same results for both D-PSGD and EASGD. 
This is because theorems are asymptotic, so that if one combines diminishing step-sizes with strong regularity assumptions then these distributed algorithms asymptotically behave as mini-batch SGD (the algorithms moves more and more slowly due to the step-size, but averaging is still performed at the same frequency).\", \"Results (both theory and experiments) make it look like the higher the momentum the better the results, with no upper bound on this.\", \"Experiments do not really have added value since this is a very classical setup, and the point of the paper is about last-iterate convergence, not comparing different values for momentum.\", \"In the end, there is some value in these results, but rewriting is necessary to clarify the paper, higlight dependencies problem parameters (matrix W, momentum), and better justify the assumptions (is 2.4 technical or necessary? If technical it should be removed, otherwise it should be explained why one needs it). Non-asymptotic guarantees would also be highly appreciated.\", \"Typos/reformulation needed:\"], \"incorrect_citations\": \"no space and ~\\\\citet{} instead of \\\\citep{}\", \"typos\": \"\", \"213\": \"equation equation 7\", \"261\": \"low bound condition\", \"281\": \"equation equation\\nmany other equation equation\", \"definitions_of_convergence\": \"not very rigorous in the way they are defined. For instance I understand that $\\\\epsilon$-TAMS reads: for any given scalar $\\\\epsilon > 0$, there exists $n>0$ (which depends on $\\\\epsilon$) such that... but it's not how it's written.\\n\\nAssumption 2.1: I guess \\\"function\\\" is missing? \\n\\nWhat is u in Assumption 2.4 and Theorem 2.2? \\n\\n$m \\\\geq 1$, so the second term is useless in Theorem 2.2.\\n\\nFigures and their legends are very small and hard to read.\", \"questions\": \"1 ) In the experiments, the best momentum parameter identified is .9, but I guess at some point increasing the momentum degrades performances. What happens with a momentum of 1? Similarly, the way Theorem 2.3 is formulated is sketchy in that regard: it looks like taking $\\\\alpha =1$ just gives \\\"instant convergence\\\", which has to be wrong. I believe this is due to the fact of expressing a hitting time probability with a $O()$ notation (hiding for instance $a_0$), but this should be clarified.\\n\\n2) How does matrix W impact the results? \\n\\n3) Is it possible to lift Assumption 2.4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
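The reviews in the record above describe the analyzed framework as "update the momentum at each step, apply the momentum at each step, possibly with (partial/local) averaging" through a mixing matrix W and a decaying step size. Below is a minimal sketch of one such round, assuming a D-PSGD-style gossip step with a doubly stochastic W; this is a generic illustration rather than the paper's exact Eq. (8), and the toy objective and all names are hypothetical.

```python
import numpy as np

def dist_msgd_step(X, V, grads, W, eta, alpha):
    """One round for m nodes: X, V, grads are (m, d); W is an (m, m) doubly stochastic mixing matrix."""
    V = alpha * V + grads        # each node updates its local momentum
    X = W @ (X - eta * V)        # each node takes a momentum step, then averages with its neighbours
    return X, V

# Toy run: node i holds f_i(x) = 0.5 * ||x - b_i||^2, so the network optimum is the mean of the b_i.
rng = np.random.default_rng(0)
b = rng.normal(size=(3, 2))
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
X, V = np.zeros((3, 2)), np.zeros((3, 2))
for n in range(1, 2001):
    grads = (X - b) + 0.01 * rng.normal(size=X.shape)     # noisy local gradients
    X, V = dist_msgd_step(X, V, grads, W, eta=1.0 / (n + 10.0), alpha=0.9)
print(np.linalg.norm(X.mean(axis=0) - b.mean(axis=0)))    # small: nodes reach consensus near the optimum
```

With the decaying step size 1/(n+10) the iterates slow down while averaging continues at every round, which is the regime in which, as one review notes, such schemes asymptotically behave like mini-batch SGD on the averaged objective.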
BipgUWZWNi
Controlling Information Leakage in Concept Bottleneck Models with Trees
[ "Angelos Ragkousis", "Sonali Parbhoo" ]
As AI models grow larger, the demand for accountability and interpretability has become increasingly critical for understanding their decision-making processes. Concept Bottleneck Models (CBMs) have gained attention for enhancing interpretability by mapping inputs to intermediate concepts before making final predictions. However, CBMs often suffer from information leakage, where additional input data, not captured by the concepts, is used to improve task performance, complicating the interpretation of downstream predictions. In this paper, we introduce a novel approach for training both joint and sequential CBMs that allows us to identify and control leakage using decision trees. Our method quantifies leakage by comparing the decision paths of hard CBMs with their soft, leaky counterparts. Specifically, we show that soft leaky CBMs extend the decision paths of hard CBMs, particularly in cases where concept information is incomplete. Using this insight, we develop a technique to better inspect and manage leakage, isolating the subsets of data most affected by this. Through synthetic and real-world experiments, we demonstrate that controlling leakage in this way not only improves task accuracy but also yields more informative and transparent explanations.
[ "interpretable models", "concept bottleneck model", "information leakage", "decision tree" ]
https://openreview.net/pdf?id=BipgUWZWNi
https://openreview.net/forum?id=BipgUWZWNi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zQo5ms9dbH", "vxJPEocaHG", "th8jnhWJTb", "tTa4U69B0T", "q76hKTbncG", "dOfYvVV6y0", "Zc7Y4xJnK2", "Y4phoUkxb4", "XmH9kQd3Kf", "X4aea3UtUJ", "VwAfGjLb0O", "TsniZqEFmu", "ThYPqhFATv", "N0yTqqhWtQ", "Kz96jilDOw", "HRDexuWqJe", "D5CUfsp5zt", "BaE7tlvSiH", "3WnBLLht5g", "2riDuvj5Po", "0pv6VoxMDq" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732506961122, 1732051085144, 1732057105679, 1732140523486, 1732727267044, 1732055269220, 1732114863155, 1730298328932, 1732115520017, 1730675260642, 1732731215577, 1732134802687, 1730724390831, 1732051893429, 1732513290034, 1732121118407, 1732138845821, 1732140599070, 1732055050129, 1729445301970, 1732491401174 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_VU8H" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_MJtJ" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_MYyQ" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_kyiD" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_MJtJ" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Authors" ], [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_VU8H" ], [ "ICLR.cc/2025/Conference/Submission10191/Reviewer_MYyQ" ] ], "structured_content_str": [ "{\"comment\": \"I thank the authors' for their response (and encourage them to incorporate these notes in the manuscript), but maintain my rating.\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely thank the reviewer for reading our work and providing comments. We will plan to respond to the highlighted weaknesses and questions in a sequential manner.\\n\\n**Weakness 1**: *\\\"the main issue seems to be a lack of rationale for definition 3.1, which forms the basis for the paper. ... more reliable.\\\"*\\n\\n**Response**: First, we would like to clarify that our method is designed especially for categorical concepts, since the vast majority of real-world concept-annotated datasets have categorical and not continuous concepts, including CUB and MIMIC which are explored in this paper. Morpho-MNIST is an exception and has continuous concepts, which we could have indeed leveraged differently. However, in our case this particular dataset is used as a toy example to demonstrate how are method works. 
Thus, we perform the binning operation to create the \\u201csmall\\u201d, \\u201cmedium\\u201d and \\u201clarge\\u201d categorical concepts.\\n\\nWe would also like to clarify again the definition of a \\u201chard\\u201d and a \\u201csoft\\u201d categorical concept, following the existing formulation of [1]. A hard categorical concept is a boolean concept, where \\u201c1\\u201d indicates its presence and \\u201c0\\u201d indicates its absence. A soft categorical concept refers to a concept probability, indicating the confidence of the concept predictor about its presence. Thus, a \\u201csoft\\u201d concept is different from a continuous concept.\\n\\n[1] Havasi, M., Parbhoo, S., & Doshi-Velez, F. (2022). Addressing leakage in concept bottleneck models. Advances in Neural Information Processing Systems, 35, 23386-23397\\n\\n**Weakness 2**: *\\\"It is also unclear how quantifying this information leakage is useful....metrics (Table 1)\\\"*\\n\\n**Response**: We believe that quantifying leakage is a very important tool for analyzing the interpretability of Concept Bottleneck Models. \\n\\nTo make this more clear, let us first adapt a practical example of leakage in CBMs described in [1]. Assume we have a CBM performing animal classification. When the concept predictor recognizes an image as a dog, it may predict a slightly higher likelihood to the concept \\u2018tail\\u2019 compared to an image of a cat. Let us assume for example that the concept predictor gives a likelihood around 0.8 to the majority of dogs for the concept \\u2018tail\\u2019, and around 0.6 for the majority of cats. The ground truth concept \\u2018tail\\u2019 for both classes is 1 (meaning that both animals have a tail). Even though cats and dogs are indistinguishable based on this ground truth concept, the label predictor may still compare the two soft concept probabilities (0.6 and 0.8) and make an accurate classification. However, is this concept-based explanation useful? The label predictor relies on a difference in likelihoods that may (or may not) be interpretable to a human. For example, it might be the case that in most images of cats their tail is not clearly shown due to the orientation of the cat, whereas in most images of dogs the tail is clearly visible. This phenomenon is called \\u201cleakage\\u201d because unintended information from the input is captured in the concept likelihood (the orientation of cats and dogs). Thus, the concept predictor assigns a higher likelihood to dogs having this concept. However, this may be just an assumption. Regardless of whether this difference is intuitive or not, we believe **it is crucial to first find a systematic way to isolate instances where the label predictor takes advantage of such differences**, in order for us to then proceed to further analysis. In our paper, a similar case study is performed for the Woodpecker example of Figure 2.\\n\\n**Thus, our work first provides a systematic way to identify those instances affected by leakage using our tree structure**. Specifically, we first train a tree using only the ground truth concepts, which we name the \\u201cglobal tree\\u201d (step 1 in Figure 1). Each leaf node in the global tree corresponds to a subset of examples that are indistinguishable based on their ground truth concepts. 
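\\n\\nTo make these two steps concrete, here is a minimal, illustrative sketch of how they could be implemented with scikit-learn (a simplification for this discussion, not our released code; `c_hard`, `c_soft`, `y`, `msl` and the helper `concepts_on_path` are assumed, hypothetical names):\\n\\n```python\\nimport numpy as np\\nfrom sklearn.tree import DecisionTreeClassifier\\n\\n# Step 1: fit the global tree on the ground-truth (hard) binary concepts.\\nglobal_tree = DecisionTreeClassifier(min_samples_leaf=msl).fit(c_hard, y)\\nleaf_ids = global_tree.apply(c_hard)\\n\\n# Step 2: for each leaf (group), try to extend its decision path using the\\n# soft probabilities of the concepts that appear on that path.\\nsubtrees = {}\\nfor leaf in np.unique(leaf_ids):\\n    idx = leaf_ids == leaf\\n    if len(np.unique(y[idx])) < 2:\\n        continue  # the group is already pure; there is nothing left to split\\n    path_concepts = concepts_on_path(global_tree, leaf)  # hypothetical helper\\n    sub = DecisionTreeClassifier(min_samples_leaf=msl)\\n    sub.fit(c_soft[idx][:, path_concepts], y[idx])\\n    if sub.get_depth() > 0:  # an additional split was found, i.e. leakage\\n        subtrees[leaf] = sub\\n```\\n\\nUnder this sketch, a leaf for which no sub-tree is found is a group that the label predictor cannot split any further using the soft concept scores, whereas a stored sub-tree flags a group affected by leakage.\\n\\n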
For example, if we observe the leftmost decision path in the toy MNIST example of Figure 3, there is a leaf node corresponding to group of 926 digits (278 \\u201c6\\u201ds, 33 \\u201c8\\u201ds and 615 \\u201c9\\u201ds), and all these digits have small length and small thickness. We see that this group was not extended with leakage after performing our algorithm, i.e. a sub-tree was not found (step 2 in Figure 1). This means that the label predictor could not take advantage of any differences in the likelihood of concepts \\u201clength:small\\u201d and \\u201cthickness:small\\u201d to further split this group. On the other hand, the third decision path from the left, which corresponds to the group of digits having small length and large thickness, was extended with two decision rules corresponding to differences in concept likelihoods (leakage): \\u201clength:small < 0.8\\u201d and \\u201cthickness:large < 0.8\\u201d. \\n\\n*(The response continues in the next comment)*\"}", "{\"title\": \"Author Response\", \"comment\": [\"We sincerely thank the reviewer for reading our work and providing comments. We will plan to respond to the highlighted weaknesses and questions.\", \"**Weakness 1**: *I expect to see stronger evaluation of MCBM's utility for information leakage, and their implications in improving task accuracy or explanation quality. Perhaps through human-studies or mining new concepts on one of the tasks such as CUB*.\", \"**Response**: We believe that section 5.3 focuses specifically on the implications of leakage on a) task accuracy and b) explanation quality, and c) it also provides a detailed case study on the CUB dataset. More specifically:\", \"*Impact on Task Accuracy*: As explained in paragraph \\u201cOur method enables inspecting Information Leakage per decision path\\u201d, the task accuracy increased in all three decision paths that were affected by leakage, for the reduced Morpho-MNIST example of Fig. 3. This increase shows exactly the impact of leakage, i.e. the task accuracy increases when a decision path of the Hard CBM is extended with additional decision splits relying on leakage. These numbers and paths are shown in Fig. 5 and Table 3. As an example, consider the decision path with number 14 in Figure 5. In the global tree of the Hard CBM, the decision path would end at the light gray node, which has 571 samples all having large length, large width and medium thickness. Due to majority voting, the hard CBM classifies all those digits as \\u201c8\\u201ds. As shown in Table 3, this leads to an accuracy of 44.91% for this path with number 14. When MCBM-Seq is then applied to investigate if this group is prone to leakage, the algorithm discovers that the group can be further split using the soft concept representation of \\u201clength:large\\u201d. Specifically, it is observed that the concept predictor is less than 90% confident that the majority of \\u201c6\\u201ds have a large length, while it is more than 90% confident that the majority of \\u201c8\\u201ds have this attribute. Thus, the tree-based label predictor takes advantage of this difference in predicted likelihoods, and is able to further split the group of 571 samples into two new groups using the decision rule \\u201clength::large <= 0.9\\u201d. 
With this new \\u201cleaky\\u201d split, the task accuracy improves to 57.70% for the path, as shown in Table 3.\", \"*Impact on Explanation Quality*: As explained in paragraph \\u201cOur tree-structure allows for meaningful group-specific explanations\\u201d, identifying groups affected by leakage can assist our decision-making when deriving concept-based explanations per decision path. For better clarity, we update this paragraph by defining two scenarios:\", \"If a group (leaf node) **cannot** be further split using leakage, i.e. a sub-tree is not found: This shows that the particular group is not affected by information leakage, which is desirable. In addition, if the task accuracy for this particular group is high, we may consider this an ideal classification setting, because the available concept annotations are sufficient for an accurate distinction. The fact that we can identify and isolate such groups is highlighted as a key advantage of our work. **Unlike a purely soft CBM, leakage will not impact these groups, and thus the concept explanations for those groups are both leakage-free and accurate**. If, on the other hand, the task accuracy of the group is not sufficient, we may flag this group to an expert to either annotate additional concepts or perform an intervention (future work).\", \"If a group (leaf node) **can** be further split using leakage, i.e. a sub-tree is found: Then this group can be flagged for additional analysis. The user has the following options: a) rely solely on the decision process of the global tree to derive a perfectly understandable, leakage-free explanation. In the example of the decision path with number 14 described above, this means terminating the explanation at the light gray node and characterizing all 571 samples as \\u201c8\\u201ds. b) extend the decision process with the sub-tree of MCBM-Seq if this likelihood difference seems intuitive. In the same example, this means incorporating the path extension into our explanation, using the decision rule \\u201clength::large <= 0.9\\u201d. c) use MCBM-Joint\\u2019s less intuitive probabilities for maximum accuracy, d) flag this group to an expert to either annotate additional concepts or perform an intervention (future work).\", \"*Case study on a real-world setting*: We also provided a detailed case study in Appendix A.10, page 24 on the CUB dataset, for distinguishing between the Red Bellied and the Red Headed Woodpeckers. Figure 15 shows the complete decision path (explanation) of the case study both before and after incorporating the path extension, and also highlights that the test accuracy again increases due to leakage for this path.\", \"Are there any specific additional experiments you would like to see? We would be happy to consider them.\"]}", "{\"title\": \"Author Response (Continue)\", \"comment\": \"**Question 3**: *If the argument to be built is that MCBMs are better for post-hoc analysis than prediction, then I would say the paper should focus more on the analysis part and show how MCBMs can be used to lead to actionable changes/edits/insights that improve the underlying model somehow. \\u2026 Therefore, do you have any empirical evidence of actionable changes derived from insights from MCBM that led to a model\\u2019s update improving its performance under a reasonable metric?*\\n\\n**Response**\\n\\nThe argument is indeed that MCBMs are better for post-hoc analysis than prediction, which was clarified in more detail in our previous response. 
Similar to reviewer VU8H, in this response we will first justify more why we consider leakage inspection useful and then explain potential actionable changes proposed in section 5.3 in more detail.\\n\\nLet us first adapt a practical example of leakage in CBMs described in [1]. Assume we have a CBM performing animal classification. When the concept predictor recognizes an image as a dog, it may predict a slightly higher likelihood to the concept \\u2018tail\\u2019 compared to an image of a cat. Let us assume for example that the concept predictor gives a likelihood around 0.8 to the majority of dogs for the concept \\u2018tail\\u2019, and around 0.6 for the majority of cats. The ground truth concept \\u2018tail\\u2019 for both classes is 1 (meaning that both animals have a tail). Even though cats and dogs are indistinguishable based on this ground truth concept, the label predictor may still compare the two soft concept probabilities (0.6 and 0.8) and make an accurate classification. However, is this concept-based explanation useful? The label predictor relies on a difference in likelihoods that may (or may not) be interpretable to a human. For example, it might be the case that in most images of cats their tail is not clearly shown due to the orientation of the cat, whereas in most images of dogs the tail is clearly visible. This phenomenon is called \\u201cleakage\\u201d because unintended information from the input is captured in the concept likelihood (the orientation of cats and dogs). Thus, the concept predictor assigns a higher likelihood to dogs having this concept. However, this may be just an assumption. Regardless of whether this difference is intuitive or not, we believe **it is crucial to first find a systematic way to isolate instances** where the label predictor takes advantage of such differences, in order for us to then proceed to further analysis.\\n\\n**Thus, our work first provides a systematic way to identify those instances affected by leakage using our tree structure**. Specifically, we first train a tree using only the ground truth concepts, which we name the \\u201cglobal tree\\u201d (step 1 in Figure 1). Each leaf node in the global tree corresponds to a subset of examples that are indistinguishable based on their ground truth concepts. For example, if we observe the leftmost decision path in the toy MNIST example of Figure 3, there is a leaf node corresponding to group of 926 digits (278 \\u201c6\\u201ds, 33 \\u201c8\\u201ds and 615 \\u201c9\\u201ds), and all these digits have small length and small thickness. We see that this group was not extended with leakage after performing our algorithm, i.e. a sub-tree was not found (step 2 in Figure 1). **This means that the label predictor could not take advantage of any differences in the likelihood of concepts \\u201clength:small\\u201d and \\u201cthickness:small\\u201d to further split this group**. On the other hand, the third decision path from the left, which corresponds to the group of digits having small length and large thickness, was extended with two decision rules corresponding to differences in concept likelihoods (leakage): \\u201clength:small < 0.8\\u201d and \\u201cthickness:large < 0.8\\u201d. \\n\\n*(the answer to question 3 continues to the next response)*\"}", "{\"comment\": \"Dear Reviewer MYyQ,\\n\\nWe once again thank you for your comments. 
First, we would like to provide some further arguments on the problem of task accuracy drop, by referring again to state-of-the-art CBM models; we hope this resolves the confusion. \\n\\nAn accuracy drop is empirically observed in practically **all CBMs**, including the state-of-the-art models, when we compare joint to sequential training, and it is often even more severe when we use independent training. For example, consider the following paper:\\n\\n\\\"Post-Hoc CBMs\\\" [1]: Since these CBMs first train the backbone independently, they can be considered an improved form of sequential CBM training with many advantages. The authors compared the performance of PCBM and a simple Joint CBM, and they state themselves on page 15 of the Appendix: **\\\"CBMs achieve a slightly better performance than PCBMs, and the original backbone\\\". This fact did not prevent the paper from being accepted as a spotlight at ICLR 2023**, due to the numerous advantages of their architecture, such as dealing with missing concept annotations.\\n\\nCarefully reviewing more such papers, a similar argument, that accuracy drops between different training modes, can be made for most state-of-the-art CBMs; yet these works were accepted because they address other CBM problems. In this paper, we try to follow a similar logic, presenting the advantage of leakage inspection while not observing a significant drop between CBMs **with the same training mode**, e.g. MCBM-Seq with Sequential CBM. The task accuracy comparison of MCBM-Seq with Joint CBM is not a fair one. We could have placed the comparison with joint training only in the Appendix, similar to Post-hoc CBMs, if this is such an issue. \\n\\nRegarding the concerns to which we have not yet replied, specifically those regarding **interventions** and the **lack of error bars**, we absolutely agree, and we were working on those experiments during the rebuttal period. However, due to the tight deadline for updating the pdf, we were unable to do so, and we decided to **withdraw the paper** and include these in a future version of our paper.\\n\\n[1] Post-hoc Concept Bottleneck Models. ICLR 2023\"}", "{\"title\": \"Author Response (Continue)\", \"comment\": \"*(continuing the previous response...)*\\n\\n* *\\u201cAddressing leakage in concept bottleneck models. NeurIPS 2022\\u201d* [3]. This work is closer to our method, in the sense that a) it uses scalar-valued concepts and b) specifically addresses leakage. However, as we mention in our related work, \\u201cHavasi et al. (2022b) tackle missing information with a side channel and an auto-regressive concept predictor, but these approaches struggle with interpretability and disentanglement of residual information (Zabounidis et al., 2023)\\u201d. In more detail, all works that use a residual layer or side-channel (including this one) aim to let missing concept information pass directly from inputs to targets, leaving the ground truth concept representations intact and not influenced by leakage. Yet, Zabounidis et al. (2023) highlight that the residual information is not guaranteed to capture this intended missing information, and the two representations may be entangled. 
We argue that the lack of transparency in the residual channel does not make these methods convincing enough for the specific problem.\\n\\nIn conclusion, while we understand that the three-phase method may seem more complex, we argue that it is more effective and provides some novel advantages compared to existing methods, such as the ability to perform group-specific leakage examination in the form of decision paths and to identify the exact decision rules based on leakage. Thus, our work cannot be quantitatively compared with these previous works. Moreover, we argue that the three-phase method is not that complex in practice, because essentially it only involves the training of one global tree and individual sub-trees for the leaf nodes of this tree, along with an independently trained concept predictor like in all CBMs.\"}", "{\"title\": \"Author Response (Continue)\", \"comment\": \"**Question 1**: *Please explain the metrics at length. Concept accuracy, fidelity and explanation accuracy. I believe explanation accuracy is mentioned but never used (in which case it can be dropped)*.\\n\\n**Response**: As stated in lines 383-388, these are all metrics introduced in existing works. More specifically:\\n* The *concept accuracy* introduced in [1] refers to the accuracy of the concept predictor when predicting all categorical concepts (refer to Step 1 of the method, Figure 1, page 2). Since each concept is predicted independently, the total concept accuracy reported is the aggregation of all predicted concepts. We can observe in Table 2, page 9, that the concept accuracy is the same for all CBM types where the concept predictor is trained independently. For Joint CBMs, we observe the trade-off between concept and task accuracy based on the parameter $\\\\lambda_C$. This is explained in detail in the original CBM work of [1]. The Black-Box model does not have a concept accuracy since it does not use concept supervision (inputs are directly mapped to targets in a single neural network). \\n\\n* The *task accuracy* refers to the accuracy of the label predictor. For example, the task accuracy of the Hard CBM refers to the accuracy of the global decision tree (refer to Step 1 of the method, Figure 1, page 2), while the task accuracy of MCBM-Seq refers to the accuracy of the same tree but extended with all potential sub-trees (refer to Step 3 of the method, Figure 1, page 2).\\n\\n* The *explanation accuracy* introduced in [2] measures the task performance of a model when using its extracted explanation formulas instead of the model\\u2019s predictions, which would correspond to the task accuracy. In case of Entropy-Net, their method approximates the predictions of a neural network using simplified logic rules. Thus, the task accuracy is always greater or equal than the explanation accuracy when using Entropy-Net, as can be observed from Table 1, page 9 as well as the original results of the paper [2]. However, in decision trees, the explanation accuracy is the same as the task accuracy by default, since a decision path from the root to a leaf node serves as both the classifier and the explanation. \\n\\n* The *fidelity of an explanation* is also introduced in [2] and measures how well the extracted explanation matches the predictions obtained using the explainer. In practice, the authors calculate it as the accuracy score between the labels predicted from the neural network and the labels predicted from the logic rules that serve as explanations. 
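As a rough illustration of how the task accuracy, explanation accuracy and fidelity relate (a sketch only; `y_true`, `y_model` and `y_rules` are assumed to be aligned test-set label arrays):\\n\\n```python\\nfrom sklearn.metrics import accuracy_score\\n\\ntask_acc = accuracy_score(y_true, y_model)   # label predictor vs. ground truth\\nexpl_acc = accuracy_score(y_true, y_rules)   # extracted explanation vs. ground truth\\nfidelity = accuracy_score(y_model, y_rules)  # agreement between model and explanation\\n```\\n\\n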
In decision trees, we denote the fidelity score as 100% to indicate that no fidelity considerations occur, since a decision path from the root to a leaf node serves as both the classifier and the explanation. \\n\\n[1] Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., & Liang, P. (2020, November). Concept bottleneck models. In International conference on machine learning (pp. 5338-5348). PMLR.\\n\\n[2] Barbiero, P., Ciravegna, G., Giannini, F., Li\\u00f3, P., Gori, M., & Melacci, S. (2022). Entropy-Based Logic Explanations of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6046-6054.\"}", "{\"summary\": \"Concept bottleneck models (CBMs) is a valuable technique for enhancing the interpretability and explainability of deep learning models; however, they have recently been shown to suffer from information leakage issues. This work proposes a new decision tree-based method to address the leakage issue. Unlike the original CBM which used a small network (e.g. a linear model) to predict the label Y from the (soft or hard) concept representation C, this work uses a decision tree as the predictor. The authors smartly show that by comparing the cases when soft concepts and hard concepts are used to construct the tree, it is possible to inspect and control information leakage corresponding to specific configurations of concepts. The method is evaluated on several standard datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a new decision tree-based method for quantifying and controlling information leakage in CBMs, which to the best of my knowledge is very novel. I personally think that the use of tree-based model in CBMs itself may already be a good contribution before its applications in addressing information leakage.\", \"The writing is mostly clear and easy to follow. I especially appreciate Figure 1 which helps readers to intuitive understand the main idea and pipeline of the proposed methodology;\", \"To the best of my knowledge, this is the first work to formulate information leakage issues in CBMs. For this purpose, the authors propose a general, information-theoretic metric (Definition 3.1). The applicability of this metric is well beyond the specific method considered in this paper (tree-based CBMs);\", \"The work also offers a new method for inspecting how information is leaked and used in a more fine-grained and interpretable manner. By inspecting each path in the decision tree, practitioner can understand how hard concepts are insufficient for accurate predictions and how additional information could enhance these predictions (thereby leads to information leakage). This interpretability, facilitated by the use of decision tree, is a notable innovation over existing methods;\", \"Reproducibility: the authors have provided certain implementation details as well as offering anonymous code repo.\"], \"weaknesses\": [\"(Major) There might be, in my opinion, a potential discrepancy between the information leakage metric defined (Definition 3.1) and the actual tree-based implementation for computing this information leakage (see the \\u201cquestions\\u201d section below);\", \"(Major) Related to the above point, it seems that the information leakage problem addressed in this paper differs slightly from that studied in existing works. 
Specifically, this work appears to focus on computing information leakage corresponding to specific configurations of hard concepts $C$ i.e. $I(\\\\hat{C}; Y|C=const)$ (where $C$ equals to some constant) rather than calculating $I(\\\\hat{C}; Y|C)$ (where both $\\\\hat{C}$ and $C$ are random variables);\", \"(Major) While the insights and ideas presented are highly novel, the proposed three-phase method also seems more complex than existing approaches aiming to address information leakage [1, 2, 3];\", \"(Minor) The work has not been compared to state-of-the-art methods for addressing information leakage, such as CEM [1], PCBM [2] and [3];\"], \"further_comments\": \"it seems more natural to rename the method to \\\"Tree-based CBM\\\". This is just a recommendation for consideration.\\n\\n*References*\\n\\n[1] Concept Embedding Models. NeurIPS 2022\\n\\n[2] Post-hoc Concept Bottleneck Models. NeurIPS 2022\\n\\n[3] Addressing leakage in concept bottleneck models. NeurIPS 2022\\n\\n\\n\\n\\n*Disclaimer: the reviewer is not the author of any of these papers.\", \"questions\": [\"The information gain you computed in eq.(5) seems to condition on a particular configuration of the hard concepts (e.g. ck = [0, 1, 0]) corresponding to the leaf node. Is this correct?\", \"How is the calibration process described in lines 211-241 actually performed? It may not be immediately clear to those who are unfamiliar with the specific calibration technique mentioned. The author could consider to include a short description of these processes in future refinement.\", \"The work has not been compared to other methods for addressing information leakage in CBMs e.g. [1, 2, 3]. Could the authors provide a justification for this omission?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response (Continue)\", \"comment\": \"**Weakness 2 and Question 2**: *From Table 2, I do not see how MCBM is better. It has worser task accuracy than EntropyNet, but perhaps lower leakage? (which is not apparent). Given that their method has advantages when the number of concepts is small, their evaluation too should bring out more readily. From the argument in L460-462, decision trees with soft concept scores must have led to more leakage (or poor fidelity?), but that's not the case in Table 1. Please explain*.\\n\\n**Response**: We apologize if the interpretation of Tables 1 and 2 is not immediately clear. We will attempt to describe them in this response in an intuitive and detailed manner.\\n\\nThe purpose of Table 2 is to show that MCBM-Seq is comparable in task performance compared to existing CBMs, or lower in performance compared to Joint CBMs which however suffer from information leakage (refer to the work of [3]) and Black-Box neural networks which are inherently uninterpretable. The Table does not show the advantage of our method by itself, but shows how it performs compared to standard methods in order for our analysis to be complete.\", \"the_important_take_away_from_table_2_when_looking_at_the_numbers_is_that_the_relationship_of_task_accuracies_in_cbm_modes_is_roughly_the_following\": \"Hard, Independent < **MCBM-Seq** <= Sequential < **MCBM-Joint** (for small $\\\\lambda_C$) < Joint (for small $\\\\lambda_C$) < Black-Box. 
In contrast, in terms of leakage, they follow the opposite trend: Hard, Independent (No Leakage by definition [3,4]) > **MCBM-Seq** (has leakage, but this is inspectable and controllable by the decision maker) > Sequential CBM (has leakage according to [3,4], which is uninspectable, uncontrollable and affects all samples as stated in L460-462) > **MCBM-Joint** (has more leakage but this is again controllable) > Joint-CBM (has a lot of uncontrollable leakage, based on [3,4]). The two reverse trends show the trade-offs of CBMs.\\n\\n**Table 1 was constructed to highlight the advantages of MCBM-Seq**. **First**, the column named \\u201cLeakage Inspection\\u201d emphasizes that MCBM-Seq is the only method that allows for Leakage Inspection, which is the novel property we aim to introduce in this work. The existing completely soft sequential CBMs, regardless of their type of label predictor (Entropy-Net, Simple Decision Tree), typically have leakage, as indicated in previous works [3, 4] but this leakage is neither easily inspectable nor controlled, which motivated our work. We also provide an intuition for this claim with a practical example in Appendix A.3, page 14. Hard and Independent CBMs are not included in this table because they do not have leakage, so leakage inspection is not applicable. **Secondly**, the table reveals another advantage of MCBM-Seq when compared explicitly with a Purely Soft Sequential CBM using an Entropy-Net as a label predictor, which is that MCBM-Seq also achieves higher Explanation Accuracy and does not raise Fidelity issues, as explained in lines 416-426. This second advantage does not hold when compared to purely soft sequential CBMs using traditional decision trees, but the first main advantage of leakage inspection still remains. \\n\\n**The reasons why our leakage inspection is useful** are those highlighted in section 5.3: **a)** we can analyze our model for specific decision paths (groups), and thus **b)** we can derive more meaningful group-specific explanations, since bi) the decision-maker has the flexibility to control the concept explanation based on the length of the decision path (lines 515-518) and bii) leakage will not impact all decision-making paths in a mixed CBM (lines 518-519).\\n\\nBased on the above clarifications and going back to Question 2, the answer is that decision trees with purely soft concept scores do not raise fidelity issues but they also do not allow for Leakage Inspection, unlike our new MCBM-Seq method. We again encourage the reviewer to refer to Appendix A.3, page 14 which shows a decision tree with purely soft scores and how its inefficiency motivated us to develop MCBM-Seq.\\n\\nIn conclusion, **the argument of our work is the following: If MCBM-Seq has a task accuracy between those of a Hard and a Sequential CBM (Table 2) but is superior in terms of explainability due to its leakage inspection property, which is shown in Table 1 and section 5.3 (pages 9 and 10), then we believe it is a useful training method for CBMs**. We hope this clarifies the use of our Tables.\\n\\n[3] Mahinpei, A., Clark, J., Lage, I., Doshi-Velez, F., & Pan, W. (2021). Promises and Pitfalls of Black-Box Concept Learning Models. ArXiv, abs/2106.13314.\\n\\n[4] Addressing leakage in concept bottleneck models. 
NeurIPS 2022\"}", "{\"summary\": \"This work introduces Mixed Concept Bottleneck Models (MCBMs), a predictive model and an inspection tool that uses decision trees to analyze and control information leakage in traditional Concept Bottleneck Models (CBMs). By exploiting a hard independent CBM whose label predictor is a decision tree, this model constructs a CBM by expanding each of the original decision tree\\u2019s leaf nodes using new sub-trees that operate on soft concept representations for the concepts that are used to reach that node. This allows MCBMs to properly quantify leakage across each decision path and rule, producing group-based explanations. This paper evaluates MCBMs across three datasets and shows that they may lead to high-fidelity and interpretable explanations whilst performing similarly to equivalent hard, independently trained CBMs.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Thank you so much for submitting this work! I enjoyed reading this paper and learned a lot while reading it. Below are what I believe are this paper\\u2019s main strengths:\\n\\n1. **[Originality] (Critical)** Introducing decision trees to capture, disambiguate, and control leakage in vanilla CBMs is, to the best of my knowledge, certainly novel. I like the general idea and believe others may also find it interesting and new.\\n2. **[Significance] (Major)** If shown to lead to actionable conclusions, I do believe that MCBM may have some impact as an analysis tool (more so than as a predictive model; see weaknesses below). Moreover, the simple yet useful formalization of information leakage (i.e., equation (4)), which MCBMs can very easily estimate, is a helpful step towards better understanding and studying information leakage. Therefore, this approach may be useful to others working in this space, particularly those interested in leakage. As such, I believe both of these contributions are potentially useful for the overall XAI community. \\n3. **[Quality] (Minor)** This work is well-placed within the concept-based XAI literature and does a good job of connecting MCBMs to existing work outside of this paper.\\n4. **[Clarity] (Major)** The paper is very well-written, easy to follow, and full of visual aids that truly help with its understanding.\", \"weaknesses\": \"In contrast, I believe the following are some of this work\\u2019s limitations:\\n\\n1. **[Significance] (Critical)** I can see how the proposed approach may be useful for studying leakage in vanilla CBMs. However, I think a very strong case can be built against using MCBMs as a model for prediction in real-world cases, given their significant drops in performance compared to existing even simple baselines (e.g., joint CBMs). A case can be built to use a model that offers more interpretability (however one defines that) if the hit on performance is not very significant. In this case, however, the presented evidence suggests this hit can indeed be quite significant. Moreover, it is unclear how MCBMs compare to and how they could be used to analyze much more modern approaches (e.g., Post-hoc CBMs, CEMs, ProbCBMs, Energy-free CBMs, etc.), all of which are much better than vanilla CBMs. As such, I have some doubts about the potential impact of this work without further evidence or contributions showing their use for modern baselines/pipelines/frameworks. See below for further questions on this particular topic, as this is my biggest concern/hesitation regarding this work.\\n2. 
**[Significance] (Critical)** Related to my concern above, although MCBM is claimed to be helpful not just as a model but also as an analysis tool, the current experiments fail to provide evidence that any conclusions extracted by analyzing MCBM\\u2019s outputs do indeed lead to actionable changes that improve the analyzed model. My current concern is that this work presents MCBM as both (1) a model and (2) an analysis tool, without providing sufficient evidence, in my opinion, that it leads to actionable significant improvements in either of those two directions. \\n3. **[Significance/Quality] (Major)** Concept interventions, a standard evaluation procedure for CBM-like models, are not evaluated anywhere in this work. As such, it is hard to fully understand the benefits of MCBMs over other existing baselines in this field.\\n4. **[Quality] (Major)** Against common good practices, no error bars are provided for any of the results. This makes it very difficult to judge for significance, and it is particularly important here as some of the gains are small enough that they could just be from noise.\\n5. **[Quality/Clarity] (Minor)** It is unclear how some of the key hyperparameters (e.g., $\\\\texttt{msl}$) were selected for the different tasks. Given how sensitive MCBMs are shown to be to this hyperparameter (in the appendix), it is very important to verify that this hyperparameter was properly selected without accidental test-set leakage.\", \"questions\": \"**[Post rebuttal update: Changed my score to a *5: marginally below the acceptance threshold*]**\\n\\nCurrently, given some of my concerns with this work's framing and evaluation, and considering them w.r.t. the strengths I listed above, I am leaning towards rejecting this paper. However, I am absolutely happy to be convinced that some or all of my conclusions are wrong and to change my recommendation based on a discussion with the authors. For this, the following questions could help clarify/question some of my concerns:\\n\\n1. **(Critical)** I understand that \\u201ctrustworthy\\u201d/leakage-free explanations are always better. However, in the case where the concept set is incomplete (which is likely to be the case for any real-world dataset), what is the argument for using something like MCBM over any of the baselines that can achieve high accuracy in these setups (e.g., joint logit CBMs, Hybrid CBMs, Hybrid Post-hoc CBMs, CEMs, etc)? The performance difference between MCBM-Seq and joint CBMs, arguably a weak baseline for incompleteness compared to more recent approaches, seems to be large enough that one could construct a very convincing argument that any gains in reductions in concept leakage are not worth it in practice (e.g., up to 25% absolute drop in task accuracy in CUB according to Table 2). This is my largest concern with this work. Am I misunderstanding something here? If not, what is the case for using something like an MCBM over any other existing approaches in practical scenarios where concepts are almost certainly bound to be incomplete? The reason why I am fixating on this is that there are several claims in the paper (e.g., \\u201cthese tree-based approaches \\u2026achieve better accuracy on datasets with incomplete concept information\\u201d in Section 6) that do not appear to be backed by the same evidence presented in this paper. As such, assuming I correctly understand this work (happy to be convinced that I do not), I think these claims should be revised to represent better what the evidence shows.\\n2. 
**(Critical)** Related to the question above, if the interest is to use MCBM as a predictive model, do you have a sense of how MCBMs perform against any (not necessarily all) of the many high-performing modern baselines (CEMs/Post-hoc CBMs/ProbCBMs/Energy-based CBMs/etc)?\\n3. **(Critical)** If the argument to be built is that MCBMs are better for post-hoc analysis than prediction, then I would say the paper should focus more on the analysis part and show how MCBMs can be used to lead to actionable changes/edits/insights that improve the underlying model somehow. Section 5.3 shows some of this but falls short in that it does not convincingly show that some of the conclusions made from the MCBM\\u2019s analysis can lead to changes to the underlying model that indeed improve it under some intended metric. Therefore, do you have any empirical evidence of actionable changes derived from insights from MCBM that led to a model\\u2019s update improving its performance under a reasonable metric? Could these sorts of studies be extended to more modern architectures like those discussed above?\\n4. **(Critical)** Could you please provide error bars for the results in all of the presented Tables? This would enable one to determine the significance of any deviations from a baseline. This is particularly important here as some of the gains presented (e.g., in Table 2) are small enough that they could be attributed to noise.\\n5. **(Critical)** The use of bold in Table 1 is very confusing and seems to follow an unconventional use. Is it the case that only the best scores for each metric are in bold as it is traditionally done? If so, then why are there no entries in bold for the Task accuracy (where MCBM underperforms), and why are certain MCBM results bolded when, in fact, they are worse than competing baselines (e.g., \\u201cexplanation\\u201d for CUB and MIMIC-II)? I think it is absolutely okay to use bolding for any purpose as long as it is made clear to the readers. In the absence of an explanation for it, however, I would say the common assumption is that bold fonts indicate the best-performing baseline (which does not appear to be the case here).\\n6. **(Critical)** In Section 1, it is claimed that information leakage may affect the ability to intervene on CBMs (a statement I agree with for vanilla CBMs but seems to not be the case for other sorts of CBM-like models as recent evidence suggests [1]). Is it the case that interventions in MCBMs lead to higher accuracies than in their CBM counterparts? Given the importance that interventions have for CBMs (and the way they serve as a verification of their interpretability across the literature), it would be extremely helpful to understand what they look like for MCBMs.\\n7. **(Major)** The appendices show that the hyperparameter $\\\\texttt{msl}$ has a significant effect on the performance and interpretability of the resulting MCBM. How was this selected for the results shown in Section 5? How would one select this argument in practice?\\n8. **(Minor)** It is claimed that the method \\u201cdoes not introduce any computational overhead compared to a Sequential CBM with a single decision tree as label predictor\\u201d. This is true from an asymptotic complexity point of view, but it may not necessarily be true from a practical point of view (asymptotic analysis does not consider the average instance and ignores potentially large constants, which may have non-trivial effects in the \\\"small n\\\" limit). 
In practice, what is the observed overhead in training an MCBM vs a CBM as the number of samples or concepts varies?\\n9. **(Minor)** If my understanding is correct, from Algorithm 1 and Section 4.1\\u2019s description, the concept predictor\\u2019s outputs are used during test time in both the global and the specialized decision trees. If that is true, then Figure 2 is a nice addition but may be a bit misleading as it appears to indicate that some of the concepts don\\u2019t come from the concept predictor, but instead, they come from some oracle/ground truth source. Is my understanding of how this method operates correct? If so, then why are there no connections/edges from the concept predictor to the hard CBM part (LHS) of Figure 2?\\n10. **(Minor)** How are the 45 concepts for CUB chosen? I can see the list of selected concepts in the Appendix, but it is unclear why these were selected over the rest.\\n11. **(Minor)** Out of curiosity, in case this has already been tried, if one does leaf-node specialization based on the **concept** **logits** rather than the probabilities, do you get better performance on incomplete datasets? If so, then why would this not be a better path than using the probabilities? Logits can also be calibrated and interpreted as probabilities, and they may enable more leakage that can benefit the downstream task.\\n\\n### Minor Suggestions and Typos\\n\\nWhilst reading this work, I found the following potential minor issues/typos which may be helpful when preparing a new version of this manuscript:\\n\\n1. **(Potential Typo)** In line 80, \\u201c\\u2026 the purpose of this work is provide \\u2026\\u201d should probably be \\u201c\\u2026 the purpose of this work is to provide\\u2026\\u201d\\n2. **(Potential Typo, nitpicking)** In line 135, should \\u201ccategorical vector\\u201d be \\u201cbinary vector\\u201d instead for the concept vector $c$?\\n3. **(Potential Typo)** In line 141, \\u201ce.g\\u201d should probably be \\u201ce.g.\\u201d\\n4. **(Potential Typo)** In line 214, the citation to Platt is accidentally all upper-cased.\\n5. **(Formatting)** When using the opening quotations (\\u201d) in Latex, I would suggest using `` rather than \\\". Otherwise, the left quotation symbol is reversed (see Section 5.1 for examples).\\n\\n## References\\n\\n- [1] Zarlenga et al. \\\"Learning to Receive Help: Intervention-Aware Concept Embedding Models.\\\"\\u00a0NeurIPS\\u00a0(2023).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Dear Reviewers,\\n\\nWe would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our submission. After careful consideration, we have decided to withdraw our paper from the review process. The reason is that we aim to make notable changes to the experimental section of the paper, and thus we expect the new pdf to be relatively different. Such changes include a) an intervention analysis and b) a more thorough explanation of the usefulness of leakage inspection, related to the answers we provided on your comments.\\n\\nBest regards, Submission10191 Authors\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely thank the reviewer for reading our work and providing comments. 
We plan to respond to all highlighted points.\\n\\n**Weakness 1**: *Moreover, it is unclear how MCBMs compare to and how they could be used to analyze much more modern approaches (e.g., Post-hoc CBMs, CEMs, ProbCBMs, Energy-free CBMs, etc.)*. \\n\\n**Response**:\\nWe understand this concern. In our Related Work of section 2, we provided a short justification of why we believe these modern CBM approaches (some of which you also mention) either do not address information leakage at all or they partially resolve the issue. Since reviewer MJtJ had a similar concern, we provide below the same detailed justification for three of these works:\\n\\n* *\\u201cConcept Embedding Models. NeurIPS 2022\\u201d*: This paper introduces the idea of a \\u201cconcept embedding\\u201d, which was later adopted by more CBM papers such as PCBM [2]. In our work, we use scalar-valued concepts instead of concept embeddings, following the original CBM paper*. While concept embeddings achieve excellent task performance because they lead to more expressive concept representations, the authors do not comment on information leakage, i.e. they do not provide a justification about whether these concept embeddings also capture unintended (\\u201cleaked\\u201d) information from the inputs to improve the task performance. Instead, they propose the Concept Alignment Score (CAS) to measure how much learnt concept embeddings can be trusted as faithful representations of their ground truth concept labels. Their intuition is that clustering samples based on a faithful concept embedding would result in coherent clusters. While they show that CEMs achieve high CAS scores, we argue that their method may not be sufficiently interpretable because: \\n - They achieve CAS scores of around 80% in certain datasets, such as CUB and CelebA, which may imply the presence of leakage.\\n - This approach does not indicate which subsets may suffer from the imperfect concept alignment, or how does this imperfection affect concept-based explanations. In contrast, our tree-based method allows for group-specific leakage examination in the form of decision paths, and gives the exact decision rules based on leakage.\\n - The information captured in a high-dimensional concept embedding is unintuitive compared to a scalar-valued concept, which directly represents the probability (confidence) of the concept predictor.\\n\\n* *\\u201cPost-hoc Concept Bottleneck Models. NeurIPS 2022\\u201d*. Similar to CEMs, this work uses concept embeddings but in the form of concept activation vectors (CAVs). Also, the authors do not address the issue of concept faithfulness or that of leakage, since they rely on multi-modal models to learn concepts that may be unavailable. While they effectively deal with the problem of missing concept annotations, their concept quality and faithfulness relies on the fidelity of their multimodal model.\\n\\n* *\\u201cAddressing leakage in concept bottleneck models. NeurIPS 2022\\u201d*. This work is closer to our method, in the sense that a) it uses scalar-valued concepts and b) specifically addresses leakage. However, as we mention in our related work, \\u201cHavasi et al. (2022b) tackle missing information with a side channel and an auto-regressive concept predictor, but these approaches struggle with interpretability and disentanglement of residual information (Zabounidis 2023)\\u201d. 
Specifically, all such works which use a residual layer or side-channel aim to let missing concept information pass directly from inputs to targets, keeping the ground truth concept representations not influenced by leakage. Yet, (Zabounidis 2023) highlight that the residual is not guaranteed to capture this intended missing information, and the two representations may be entangled. We argue that the lack of transparency in the residual channel does not make these methods convincing enough for this problem.\\n\\nThus, we believe that MCBM-Seq is more effective as an analysis tool and provides some novel advantages compared to existing methods, such as the ability to perform group-specific leakage examination and to identify the exact decision rules based on leakage. Our work cannot be compared with these previous works in terms of **how they deal with leakage**, since they either do not address this problem, or they indirectly address it using vastly different approaches. Also leakage was not quantitatively defined in these works in order to be properly compared (in contrast to other traditional metrics such as task accuracy). \\n\\nRegarding the question of whether MCBM-Seq **could instead be used to analyze these modern approaches**, we believe that it is compatible with many other CBM methods because it does not pose any constraint on the architecture of the concept encoder (lines 536-538). It is promising to combine an Auto-Regressive Concept encoder from the work \\u201cAddressing leakage in concept bottleneck models. NeurIPS 2022\\u201d with our tree-based label predictor.\"}", "{\"summary\": \"Concept-bottleneck Models (CBM) suffer from information leakage, where the model exploits information in the soft concept scores with unintended consequences towards interpretability.\\nThe paper addresses the leakage issue by fitting constrained decision trees on top of the concept scores.\\nThey quantify information leakage with a mutual information measure and propose a three-step procedure for modeling.\\nThe paper argues that their technique yields better explanations even when the concept set is incomplete, and guides the developer to concept sets that needs expansion.\\n\\nI found the idea neat and the presentation clear but found empirical validation underwhelming.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well motivated and the presentation is clean. They argued their modeling choices well.\", \"Mixed-CBM idea is intuitive, and follows from their mutual information measure of (4).\"], \"weaknesses\": \"**Empirical Validation**\\n\\nThe results in Table 1 and Table 2 indicate that MCBM-* (their method) has similar task accuracy to Sequential/Joint with vanilla decision trees.\\nMCBM-*, however, allows for leakage inspection as remarked in Table 1 or in Section 5.3.\\nI too see MCBM-*'s major contribution is with leakage inspection, but the paper makes only a sparing evaluation of the same.\\nI expect to see stronger evaluation of MCBM's utility for information leakage, and their implications in improving task accuracy or explanation quality.\\nPerhaps through human-studies or mining new concepts on one of the tasks such as CUB.\\n\\nFrom Table 2, I do not see how MCBM-* is better. It has worser task accuracy than EntropyNet, but perhaps lower leakage? 
(which is not apparent).\\nGiven that their method has advantages when the number of concepts is small, their evaluation too should bring out more readily.\\n\\nOverall, I did not find the evaluation convincing on (a) the promise and implications of MCBM-*'s leakage inspection, (b) MCBM's merits over decision trees or any other baselines.\", \"questions\": \"1. Please explain the metrics at length. Concept accuracy, fidelity and explanation accuracy. I believe explanation accuracy is mentioned but never used (in which case it can be dropped).\\n2. From the argument in L460-462, decision trees with soft concept scores must have led to more leakage (or poor fidelity?), but that's not the case in Table 1. Please explain.\\n3. When the concept scores are mixed, I do not see why only the concepts on the path are softened. What happens when all the concepts are softened (when expanding the tree) or only the leaf concept is softened.\\n4. Please comment on the choice of hyperparam. Table 2 lists out numbers for various hparam $\\\\lambda_C$, but how is it picked? \\n5. (Comment) A picture or description of Morpho-MNIST can be handy.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response (Continue)\", \"comment\": \"*(continuing the previous response...)*\\n\\nIdentifying such instances can then assist our decision-making when deriving concept-based explanations per decision path. This corresponds to the analysis of section 5.3. For better clarity, we update this section by defining two scenarios:\\n\\n* *If a group (leaf node) **cannot** be further split using leakage, i.e. a sub-tree is not found*: This shows that the particular group is not affected by information leakage, which is desirable. In addition, if the task accuracy for this particular group is high, we may consider this an ideal classification example, because the available concept annotations are sufficient for an accurate distinction. The fact that we can identify and isolate such groups is highlighted as a key advantage of our work. **Unlike a purely soft CBM, leakage will not impact these groups, and thus the concept explanations for those groups are both leakage-free and accurate**. If, on the other hand, the task accuracy is not sufficient, we may flag this group to an expert to either annotate additional concepts or perform an intervention (future work). \\n\\n\\n* *If a group (leaf node) **can** be further split using leakage, i.e. a sub-tree is found*: Then this group can be flagged for additional analysis. We describe a detailed case study for the Woodpecker example in page 10, paragraph: \\u201cOur tree-structure allows for meaningful group-specific explanations\\u201d. We urge the user to investigate if this difference in likelihoods (leakage) might be intuitive or not, similar to the cat-dog example we described in the beginning. 
Then the user has the following options: a) rely solely on the decision process of the global tree to derive a perfectly understandable, leak-free explanation, b) extend the decision process with the sub-tree of MCBM-Seq if this likelihood difference seems intuitive, c) use MCBM-Joint\\u2019s less intuitive probabilities for maximum accuracy, d) flag this group to an expert to either annotate additional concepts or perform an intervention (future work).\\n\\nIn conclusion, identifying leakage is useful because it provides these analysis tools to a decision maker when deriving explanations, which are not available to a standard CBM. We currently develop future work providing interventions and concept discovery strategies specifically when leakage is observed. However, future leakage mitigation strategies first require a method that controls and inspects leakage for specific sub-groups, thus we believe this work may be used as a very useful analysis tool while not sacrificing the task performance of a standard CBM (the task performance is comparable).\\n\\n**Question**: *Why do the authors focus on decision trees (e.g. as opposed to a linear model CBM)? The paper only compares against Entropy Net & a black-box baseline*.\\n\\n**Response**: We focus on decision trees because they offer us the advantage of inspecting leakage, which is not provided by a linear model or a neural network. More specifically, they allow us to inspect leakage defined in Eq. (4) using our formulation of Eq. (5) and Appendix A.2. Quantifying Leakage is not straightforward using a linear model or the Entropy-Net, i.e. it is not evident how the mutual information $I(y; \\\\hat{c}|c)$ in Eq. (4) could be approximated using such models in a similarly efficient way. Moreover, trees allow us to inspect leakage for specific groups and derive group-specific explanations in the form of decision paths by controlling the \\u201cminimum samples per leaf (msl)\\u201d constraint, as highlighted in section 5.3. In contrast, linear models as well as the Entropy-Net model only allow us to form either instance-specific or class-specific explanations. In terms of task and explanation accuracy, we compare our method directly with the Entropy-Net model because it is currently the state-of-the art method for concept-based explanations, outperforming simple linear models [2]. \\n\\n[2] Barbiero, P., Ciravegna, G., Giannini, F., Li\\u00f3, P., Gori, M., & Melacci, S. (2022). Entropy-Based Logic Explanations of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6046-6054.\"}", "{\"comment\": \"I admire the authors' effort to address my and other reviewer's concerns. All my questions have been answered, and some of the issues raised have been addressed. However, the work still lacks a direct empirical comparison to existing works for controlling information leakage (e.g. PCBM and CEM). At the same time, I still feel the three-stages unnecessarily heavy, especially that there is currently no clear (empirical) evidence showing the advantages against existing methods. All these make me uncertain whether the work is borderline acceptable or not.\\n\\nI thereby maintain my current scores at this stage, and will determine my final score after discussing with other reviewers. My current score should be interpreted as a 5.5.\\n\\nOnce again, thank your for your noticeable efforts in improving the work. 
Wish you the best of luck with the submission.\"}", "{\"comment\": \"**Question 3**: *When the concept scores are mixed, I do not see why only the concepts on the path are softened. What happens when all the concepts are softened (when expanding the tree) or only the leaf concept is softened*.\\n\\n**Response**: This is a very important detail. We specifically refer to this in lines 258-268. If all concepts are softened, the mutual information for the leaf node: $I(y;\\\\hat{c}_k|c_s)$ would not be satisfied, thus our Definition 3.1 of Leakage in line 176 would not hold since Equation (8) in the Appendix would be incorrect. Refer to Appendix A.2, page 14 for further details. Intuitively, the reason is that we treat Information Leakage as \\u201cThe amount of unintended information that is used to predict label $y$ with soft concepts $\\\\hat{c}$ that is not present in hard representation $c$. Thus, we need to make sure first that all samples of a group have the hard concept $c$, in order to then investigate if the soft representation of this concept provides **extra**, leaky information. **We cannot investigate the impact of a soft concept in a sample if the sample does not possess the hard concept in the first place.**\\n\\nThe concepts appearing in the decision paths are guaranteed to be shared by all samples in the path. Taking the leftmost path in Figure 3, page 6 as an example, the 926 digits ending up in the green leaf node all have small length and small thickness, if we follow their decision path from the root. We then search if the soft representation of any of those two concepts leads to leakage, which does not happen in the particular path because there no nodes with light gray color were found. Concepts not appearing in the path may not be shared by all digits in the group. \\n\\n**Question 4**: *Please comment on the choice of hyperparam. Table 2 lists out numbers for various hparam $\\\\lambda_C$, but how is it picked?*\\n \\n**Response**: According to the original CBM work [1], the parameter $\\\\lambda_C$ controls the trade-off between task and concept accuracy in Eq. (3) line 165, which is also evidenced in our results of Table 2 (smaller values of this parameter increase the task accuracy and reduce the concept accuracy, while the opposite holds for large values). For computational reasons, we tested one very small value $\\\\lambda_C = 0.1$, one very large value $\\\\lambda_C = 100$ and one in the middle, to show how the metrics change in the full range of values for this parameter.\\n\\n[1] Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., & Liang, P. (2020, November). Concept bottleneck models. In International conference on machine learning (pp. 5338-5348). PMLR.\", \"title\": \"Author Response (Continue)\"}", "{\"comment\": \"**Weakness 1, Questions 1 and 2**: *The performance difference between MCBM-Seq and joint CBMs, arguably a weak baseline for incompleteness compared to more recent approaches, seems to be large enough that one could construct a very convincing argument that any gains in reductions in concept leakage are not worth it in practice...*\\n\\n**Response**\\n\\nYour concern about the significant drop compared to joint CBMs is perfectly reasonable. However, on the other hand joint CBMs highly suffer from information leakage as shown in previous work [3,4]. 
We believe that having a high-performing CBM with very unreliable explanations due to leakage contradicts the reason CBMs were created in the first place, which is to provide concept-based explanations. Otherwise, it would make more sense to use a high-performing black-box model. Similar to reviewer kyiD, we will attempt to describe the results of Tables 1 and 2 in this response in an intuitive and detailed manner, and we hope our argument will be clarified at the end of this response.\\n\\nThe purpose of Table 2 is to show that MCBM-Seq is comparable in task performance compared to existing CBMs, or lower in performance compared to Joint CBMs which however suffer from information leakage (refer to the work of [3]) and Black-Box neural networks which are inherently uninterpretable. The Table does not show the advantage of our method by itself, but shows how it performs compared to standard methods in order for our analysis to be complete.\", \"the_important_take_away_from_table_2_when_looking_at_the_numbers_is_that_the_relationship_of_task_accuracies_in_cbm_modes_is_roughly_the_following\": \"Hard, Independent < **MCBM-Seq** <= Sequential < **MCBM-Joint** (for small $\\\\lambda_C$) < Joint (for small $\\\\lambda_C$) < Black-Box. In contrast, the problem of leakage follows the opposite trend: Hard, Independent (No Leakage by definition [3,4]) > MCBM-Seq (has leakage, but this is inspectable and controllable by the decision maker) > Sequential CBM (has leakage according to [3,4], which is uninspectable, uncontrollable and affects all samples as stated in L460-462) > MCBM-Joint (has more leakage but this is again controllable) > Joint-CBM (typically it has the most leakage and this is uncontrollable, based on [3,4]). The two reverse trends show the trade-offs of CBMs.\\n\\n**Table 1 was constructed to highlight the advantages of MCBM-Seq. First**, the last column named \\u201cLeakage Inspection\\u201d emphasizes that MCBM-Seq is the only method that allows for Leakage Inspection, which is the novel property we introduce in this work. The existing completely soft sequential CBMs, regardless of their type of label predictor (Entropy-Net, Simple Decision Tree), typically have leakage, as shown in previous works [3, 4] but this leakage is neither easily inspectable nor controlled, which motivated our work. We also provide an intuition for this claim with a practical example in Appendix A.3, page 14. **Secondly**, the table reveals another advantage of MCBM-Seq when compared explicitly with a Purely Soft Sequential CBM using an Entropy-Net as a label predictor, which is that MCBM also achieves higher Explanation Accuracy and does not raise Fidelity issues, as explained in lines 416-426. This second advantage does not hold when compared to purely soft sequential CBMs using traditional decision trees, but the first main advantage of leakage inspection still remains. 
\\n\\n**The reasons why our leakage inspection metrics is useful** are those highlighted in section 5.3: **a)** we can analyze our model for specific decision paths (groups), and thus **b)** we can derive more meaningful group-specific explanations, since bi) the decision-maker has the flexibility to control the concept explanation based on the length of the decision path (lines 515-518) and bii) leakage will not impact all decision-making paths in a mixed CBM (lines 518-519).\\n\\nIn conclusion, the argument of our work is the following: **If MCBM-Seq has a task accuracy between those of a Hard and a Sequential CBM (Table 2) but is superior in terms of explainability due to its leakage inspection property, which is shown in Table 1 and section 5.3 (pages 9 and 10), then we believe it is a useful training method for CBMs**. \\n\\nIn terms of comparing with other modern CBMs, please refer to our previous response. We explain that these methods are only comparable in terms of task performance and not in leakage mitigation. Our method is indeed inferior in prediction accuracy compared to leaky CBMs such as Joint CBMs. However, we believe that leakage inspection and control is a crucial property that has not been thoroughly examined and is equivalently (or even more) important than predictive performance for interpretable models like CBMs.\\n\\n[3] Mahinpei, A., Clark, J., Lage, I., Doshi-Velez, F., & Pan, W. (2021). Promises and Pitfalls of Black-Box Concept Learning Models. ArXiv, abs/2106.13314.\\n\\n[4] Addressing leakage in concept bottleneck models. NeurIPS 2022\", \"title\": \"Author Response (Continue)\"}", "{\"title\": \"Author Response (Continue)\", \"comment\": \"*(we continue our response to question 3)*\\n\\n**Identifying such instances can then assist our decision-making when deriving concept-based explanations per decision path**. This corresponds to the analysis of section 5.3. For better clarity, we update this section by defining two scenarios:\\n\\n* If a group (leaf node) **cannot** be further split using leakage, i.e. a sub-tree is not found: This shows that the particular group is **not affected by information leakage, which is desirable**. In addition, if the task accuracy for this particular group is high, we may consider this an ideal classification example, because the available concept annotations are sufficient for an accurate distinction. The fact that we can identify and isolate such groups is highlighted as a key advantage of our work. **Unlike a purely soft CBM, leakage will not impact these groups, and thus the concept explanations for those groups are both leakage-free and accurate**. If, on the other hand, the task accuracy is not sufficient, we may flag this group to an expert to either annotate additional concepts or perform an intervention (future work). \\n\\n* If a group (leaf node) can be further split using leakage, i.e. a sub-tree is found: Then this group can be flagged for additional analysis. We describe a detailed case study for the Woodpecker example in page 10, paragraph: \\u201cOur tree-structure allows for meaningful group-specific explanations\\u201d. We urge the user to investigate if this difference in likelihoods (leakage) might be intuitive or not, similar to the cat-dog example we described in the beginning. 
Then the user has the following options: a) rely solely on the decision process of the global tree to derive a perfectly understandable, leak-free explanation, b) extend the decision process with the sub-tree of MCBM-Seq if this likelihood difference seems intuitive, c) use MCBM-Joint\\u2019s less intuitive probabilities for maximum accuracy, d) flag this group to an expert to either annotate additional concepts or perform an intervention (future work).\\n\\nIn conclusion, identifying leakage is useful because it provides these analysis tools to a decision maker when deriving explanations, which are not available to a standard CBM. We currently develop future work providing interventions and concept discovery strategies specifically when leakage is observed. However, future leakage mitigation strategies first require a method that controls and inspects leakage for specific sub-groups, thus we believe this work may be used as a very useful analysis tool.\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely thank the reviewer for reading our work and providing comments. We will plan to respond to the highlighted weaknesses and questions in a sequential manner.\\n\\n**Weaknesses 1-2 and Question 1**: *\\\"(Major) There might be, in my opinion, a potential discrepancy between the information leakage metric defined (Definition 3.1) ... (Major) Related to the above point, it seems that the information leakage problem addressed in this paper differs slightly from that studied in existing works...\\\"*\\n\\n**Response**: Thank you for this great question. You are correct. The definition 3.1 we provide is general and indeed refers to $\\\\hat{C}$ and $C$ being random variables. Indeed, we calculate the metric **per leaf node** and **per decision split** by conditioning the random values $\\\\hat{C}$ and $C$ appropriately. Refer to Appendix A.2, page 14 for the full description. We indeed denote $I_{Leakage}$ as $I_{Leakage}( \\\\hat{c}_k )$ in Eq. 8, showing that this is the Information leakage induced by the specific split. We do not estimate the mutual information of definition 3.1 specifically, since we observed that it is more useful in practice to quantify leakage in specific groups rather than providing a global leakage estimate, as shown in the per-path analysis of section 5.3. We believe that conditioning on the random variables does not introduce a discrepancy, but rather makes this information metric more specific.\\n\\n**Weaknesses 3-4 and Question 3**: *\\\"The work has not been compared to state-of-the-art methods for addressing information leakage, such as CEM [1], PCBM [2] and [3];\\\"*\\n\\n**Response**: The three cited papers are included in our section 2 \\u201cRelated Work\\u201d, with a short justification of why we believe they do not sufficiently address information leakage. Here, we elaborate for each one:\\n\\n* *\\u201cConcept Embedding Models. NeurIPS 2022\\u201d* [1]: This paper introduces the idea of a \\u201cconcept embedding\\u201d, which was later adopted by more CBM papers such as PCBM [2]. In our work, we use scalar-valued concepts instead of concept embeddings, following the original CBM paper*. While concept embeddings achieve excellent task performance because they lead to more expressive concept representations, the authors do not comment on information leakage, i.e. they do not provide a justification about whether these concept embeddings also capture unintended (\\u201cleaked\\u201d) information from the inputs to improve the task performance. 
Instead, they propose the Concept Alignment Score (CAS) to measure how much learnt concept embeddings can be trusted as faithful representations of their ground truth concept labels. Their intuition is that clustering samples based on a faithful concept embedding would result in coherent clusters. While they show that CEMs achieve high CAS scores, we argue that their method may not be sufficiently interpretable because: i) They achieve CAS scores of around 80% in certain datasets, such as CUB and CelebA, which may imply the presence of leakage. ii) This approach does not indicate which subsets may suffer from the imperfect concept alignment, or how does this imperfection affect concept-based explanations. In contrast, our tree-based method allows for group-specific leakage examination in the form of decision paths, and gives the exact decision rules based on leakage. iii) The information captured in a high-dimensional concept embedding is unintuitive compared to a scalar-valued concept, which directly represents the probability (confidence) of the concept predictor.\\n\\n* *\\u201cPost-hoc Concept Bottleneck Models. NeurIPS 2022\\u201d* [2]. Similar to CEMs, this work uses concept embeddings but in the form of concept activation vectors (CAVs). Also, the authors do not address the issue of concept faithfulness or that of leakage, since they rely on multi-modal models to learn concepts that may be not annotated. While they effectively deal with the problem of missing concept annotations, their concept quality and faithfulness relies on the fidelity of their multimodal model.\\n\\n*(The response continues in the next comment)*\"}", "{\"summary\": \"The authors study a particular form of information leakage in decision-tree CBMs. They introduce a metric and method to quantify this information leakage by comparing the decision-tree paths of hard CBMs with their soft counterparts. Their method induces little to no drops in task / concept accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"the authors study the interesting and important problem of how to build trustworthy CBMs\", \"the authors perform experiments on diverse datasets\"], \"weaknesses\": [\"the main issue seems to be a lack of rationale for definition 3.1, which forms the basis for the paper. The authors quantify *information leakage* as the amount of \\u201cunintended information\\u201d that is used to predict the label with soft concepts that is not present in hard representation. Why captures whether this information is \\u201cunintended\\u201d. It seems to me that the author\\u2019s information measure needlessly penalizes soft concepts that provide extra, intended information. For example, in Morpho-MNIST, continuous features (i.e. thickness, area, length, width, height, and slant of digits) are binned and then used for prediction. Soft concepts may improve the model by removing the binning, making these features more reliable.\", \"It is also unclear how quantifying this information leakage is useful. It would be nice to see whether this information could be used, e.g. to improve concepts for a downstream task or in a human user study where concepts are shown to be more understandable. This is especially important seeing as the introduced method seems to slightly decrease some desirable metrics (Table 1)\"], \"questions\": [\"why do the authors focus on decision trees (e.g. as opposed to a linear model CBM)? 
The paper only compares against Entropy Net & a black-box baseline\", \"minutia\", \"line 141 \\u201clinear layer decision trees\\u201d - do the authors mean \\u201clinear layer or decision trees\\u201d\", \"line 144 \\u201cnetworks f and g\\u201d - is g a network? line 141 would suggest it is not\", \"why is this a reasonable definition?\", \"line 189 \\u201cas the trees be incomparable\\u201d\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your rebuttal\", \"comment\": [\"Dear Authors,\", \"Thank you for taking all the time, effort, and patience to reply to some of my many questions and concerns. The time taken to do this is certainly appreciated. Below, I outline a few general comments after carefully reading your responses:\", \"I entirely agree with the authors and understand the argument that competing methods may have more leakage and, therefore, better performance. However, my concern is that there must be some tolerance in performance drop for a model's utility to be worth it over competing baselines. For example, it is ok for a method to perform slightly worse than black box models if they can provide more things that one considers useful (e.g., interpretability, interventions, etc). However, even there, there is an implicit tolerance to how much one would be willing to sacrifice in performance before any benefits of the new approach are just simply not worth the drop. My argument here is that it seems that MCBM's drop in performance, even against very weak baselines to today's standards (e.g., Joint-CBMs are usually significantly outperformed by CEMs, Post-hoc CBMs, etc), would, in my opinion, be beyond that acceptable threshold for practical tasks.\", \"Related to the point above, if MCBM is sold as a predictive model, then I do not see a particularly strong case for why these more recent baselines are exempt from evaluation because they are more potentially \\\"leaky\\\". If that is the case, then the evidence will show that, and readers will also be able to place the proposed method with respect to more modern approaches that have gained significant momentum recently. Without an actual evaluation, it is hard to do this and to understand what MCBM brings to the table that other baselines do not.\", \"Moreover, even if MCBM controls for leakage much better than competing approaches, at the end of the day, what the average practitioners are potentially most interested in is whether the explanation is aligned with what the model predicts at the end (something that is much easier to evaluate via interventions than by measuring leakage). Without any intervention evidence, it is hard to fully understand what I get in practice from reducing leakage. The theoretical/meta argument for why reducing leakage is potentially good for interventions is reasonable (although, as pointed out in my review, there is more recent evidence that challenges this notion). Yet, the argument for that could be clearer/stronger if empirical evidence is there to support the claim. I was hoping that this evidence could come as part of this rebuttal (as it does not require re-training, it is just evaluation). However, I understand that the rebuttal window is tight, and this may be left for future work.\", \"I strongly suggest that my comments above on error bars and bolding be addressed in the next iteration of this manuscript, as they go against common good practices. 
Apologies if I missed a comment/change in your rebuttal that introduced these changes, though.\", \"Perhaps more importantly, **I still believe that by presenting MCBM as both a predictive model and an analysis tool, this work is attempting to cover perhaps too much without constructing a particularly strong case for either of these two directions**. I am happy to be convinced otherwise by my fellow reviewers and ACs. However, from the paper and rebuttal itself, I am still not entirely convinced that either direction is strongly supported by evidence suggesting MCBM should be adopted over existing alternatives (this particularly goes for the predictive model side of the argument).\", \"Because of these reasons, **I am willing to increase my score slightly to a borderline reject but will not raise my score further as several of my critical concerns were not addressed during the rebuttal**.\", \"Once more, I thank the authors for their rebuttal and their paper and wish them the best of luck with this submission.\"]}" ] }
Bi1083wNPb
Equivariant Graph Self-Attention Transformer for Learning Higher-Order Interactions in 3D Molecular Structures
[ "Asiri Wijesinghe", "Piotr Koniusz" ]
Despite their considerable success in multiple fields, studying 3D molecular structures of varying sizes presents a significant challenge in machine learning, particularly in drug discovery, as existing methods often struggle to accurately capture complex geometric relationships and tend to be less effective at generalizing across diverse molecular environments. To address these limitations, we propose a novel Equivariant Graph Self-Attention Transformer, namely EG-SAT, which effectively leverages both geometric and relational features of molecular data while maintaining equivariance under Euclidean transformations. This approach enables the model to capture molecular geometry through higher-order representations, enhancing its ability to understand intricate spatial relationships and atomic interactions. By effectively modeling the radial and angular distributions of neighboring atoms within a specified cutoff distance using Atom-Centered Symmetry Functions (ACSFs), EG-SAT leads to a more nuanced and comprehensive understanding of molecular interactions. We validate our model on the QM9 and MD17 datasets, demonstrating that EG-SAT achieves state-of-the-art performance in predicting most quantum mechanical properties, thus showcasing its effectiveness and robustness in this domain.
[ "Graph Self-Attention", "GNNs", "3D Molecular Structures" ]
https://openreview.net/pdf?id=Bi1083wNPb
https://openreview.net/forum?id=Bi1083wNPb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qRJZa4h0JQ", "oIW4oRm7ni", "jjkJKTM9Us", "YGxu4ocMM0", "D26STQGFmf", "3sx2V4lVwM" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730697068298, 1729785330676, 1729800854869, 1732253767962, 1730573227191, 1730699150644 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6746/Reviewer_7Sfw" ], [ "ICLR.cc/2025/Conference/Submission6746/Reviewer_JAQn" ], [ "ICLR.cc/2025/Conference/Submission6746/Reviewer_1Hn3" ], [ "ICLR.cc/2025/Conference/Submission6746/Authors" ], [ "ICLR.cc/2025/Conference/Submission6746/Reviewer_TByL" ], [ "ICLR.cc/2025/Conference/Submission6746/Reviewer_RKLt" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes EG-SAT (Equivariant Graph Self-Attention Transformer), a novel approach for learning 3D molecular structures. The key contribution is the introduction of Attention-based Atom-Centered Symmetry Functions (AACSFs) that integrate both radial and angular information while maintaining roto-translational invariance. The model improves upon traditional ACSFs by incorporating element-specific attention mechanisms and addresses scalability challenges through attention-based mechanisms. The authors validate their approach on QM9 and MD17 datasets, demonstrating competitive performance in predicting quantum mechanical properties and molecular forces.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper attempts to address the scalability limitations of traditional ACSFs through attention mechanisms, providing a potentially interesting direction for future research in this area.\\n\\n2. The mathematical formulation of the model's equivariance properties is presented in a structured manner with supporting proofs in the appendix, making the theoretical aspects accessible.\\n\\n3. The implementation details are documented clearly with hyperparameters and architectural specifications, which aids in potential reproduction of the results.\", \"weaknesses\": \"1. Incomplete baselines for all the datasets presented in the paper. For QM9 dataset, authors didn't include recent works such as Spherenet [1], Equiformer(V2) [2,3], LEFTNet [4], SaVeNet [5], and Geoformer [6] to name few. Although the authors cited Equiformer, they didn't compare the results on QM9.\\n\\n2. The baselines are on all small molecular tasks on QM9 and MD17 datasets. Therefore, limiting the applicability of the proposed methods.\\n\\n3. Complexity Analysis: While the authors claim linear complexity with respect to the number of edges, this seems inconsistent with the use of angular information ($\\\\beta_{ijk}$) which typically involves triplet interactions. The current analysis doesn't adequately justify how the model maintains linear complexity despite considering all possible triplets.\\n\\n4. Empirical Validation of Efficiency Claims: Despite emphasizing computational efficiency and suitability for high-throughput screening, the paper lacks empirical evidence comparing computational costs with baseline methods.\\n\\n\\n[1] Liu, Y.\\u00a0_et al._\\u00a0(2022) \\u2018Spherical Message Passing for 3D Molecular Graphs\\u2019, in\\u00a0_International Conference on Learning Representations_. Available at: https://openreview.net/forum?id=givsRXsOt9r.\\n\\n[2] Liao, Y.-L. and Smidt, T. (2022) \\u2018Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs\\u2019. 
Available at: https://openreview.net/forum?id=_efamP7PSjg.\\n\\n[3] Liao, Y.-L.\\u00a0et al.\\u00a0(2024) \\u2018EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations\\u2019, in\\u00a0The Twelfth International Conference on Learning Representations. Available at: https://openreview.net/forum?id=mCOBKZmrzD.\\n\\n[4] Du, W.\\u00a0et al.\\u00a0(2023) \\u2018A new perspective on building efficient and expressive 3D equivariant graph neural networks\\u2019, in\\u00a0Thirty-seventh Conference on Neural Information Processing Systems. Available at: https://openreview.net/forum?id=hWPNYWkYPN.\\n\\n[5] Aykent, S. and Xia, T. (2023) \\u2018SaVeNet: A Scalable Vector Network for Enhanced Molecular Representation Learning\\u2019, in\\u00a0Thirty-seventh Conference on Neural Information Processing Systems. Available at: https://openreview.net/forum?id=0OImBCFsdf.\\n\\n[6] Wang, Y.\\u00a0_et al._\\u00a0(2023) \\u2018Geometric Transformer with Interatomic Positional Encoding\\u2019, in\\u00a0_Thirty-seventh Conference on Neural Information Processing Systems_. Available at: https://openreview.net/forum?id=9o6KQrklrE.\", \"questions\": \"1. As mentioned in W1 authors didn't include recent baselines, some of which even cited in the work. Therefore, authors shown to be aware of those works but why are they decided not to include in the baselines?\\n\\n2. Authors discussed their computational complexity and mentioned \\\"high-throughput screening\\\" however there is no experiment to support authors claim on the proposed methods' computational complexity compared to the baseline methods.\\n\\n3. Given that the proposed method utilizing a angular information with $\\\\beta_{ijk}$, how does the complexity remains linear to the number of edges? The clarifications are needed for this since when we consider all possible triplets, the complexity is $n^3$ with respect to the number of nodes $n$.\\n\\n4. Could the authors provide additional experiments on larger molecular systems or different types of chemical structures to demonstrate the method's generalizability beyond small molecules?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Previous geometric graph neural networks generally exhibit poor scalability when dealing with large molecular structure data. To address this issue, this paper proposes improvements to the traditional atom-centered symmetry functions by incorporating self-attention mechanisms to integrate both angular and radial information. This approach enhances the scalability of the graph neural network while preserving rotational and translational invariance.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper introduces the related work and background knowledge of equivariant graph neural networks in a detailed and clear manner, allowing readers to quickly establish relevant domain knowledge.\", \"weaknesses\": \"1. The writing and formatting of the paper are somewhat rough. The size of Figure 1 and the font of Figure 2 both require adjustments. The caption of the table should be placed above the table. Additionally, Section 4 contains only a portion of the content, yet it is labeled with a subsection title '4.1,' which seems redundant. The first sentence of the abstract uses 'their' without a clear referent, among other issues. 
These problems make the article difficult to read and do not meet the standard of a top conference paper.\\n2. This paper resembles a review of equivariant graph neural networks, and its actual contributions are not aligned with what is claimed in the introduction. The article devotes a significant amount of space to background knowledge and related work, only presenting the proposed method towards the end of page 7. Given the structure, it would be more appropriate to submit this as a review paper.\\n3. The experimental performance of the proposed method is not promising, and the baseline methods used for comparison are somewhat outdated, mostly from before 2021. Several well-known methods in the field, such as Equiformer[1], are missing from the comparison. As a result, the experiments do not effectively demonstrate the validity of the proposed method.\\n4. There seems to be an error in Equation 5, where $d_{ij}$ should be $d_{jk}$. \\n5. The innovation of the proposed method is rather limited. Introducing the attention mechanism into graph neural networks is not particularly novel, and I am curious about the differences and connections between the proposed method and existing approaches like GAT[2].\\n\\n[1] Equiformer: Equivariant graph attention transformer for 3d atomistic graphs.\\n\\n[2] Graph attention networks.\", \"questions\": \"Please refer to Weaknesses 5.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript describes a novel machine learning model called the Equivariant Graph Self-Attention Transformer (EG-SAT), which is designed to overcome challenges in capturing geometric and relational structures of molecules. By using Atom-Centered Symmetry Functions (ACSFs), EG-SAT captures molecular geometry through higher-order representations, modeling both the radial and angular distributions of neighboring atoms within a certain cutoff distance. This allows the model to gain a nuanced understanding of molecular interactions, which benefits the geometric information preservation.\\n\\nIn the experiment, the model is validated on the QM9 and MD17 datasets. Compared with multiple methods, EG-SAT shows state-of-the-art performance, especially in predicting quantum mechanical properties.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper is well-organized with clear writting.\", \"weaknesses\": \"Lack of Novelty:\\n1. Equivariance Claim: The proposed method, Equivariant Graph Self-Attention Transformer (EG-SAT), is supposed to be equivariant, which the authors aim to achieve by encoding the geometric information of 3D graphs in Euclidean space. However, this is a commonly used approach seen in models such as SchNet, DimeNet, and SphereNet, which are not only equivariant but also invariant. Therefore, the claim of \\\"equivariance\\\" is insufficient as a primary contribution of this method. Given that the model is invariant, introducing the concept of irreducible representations (irreps) in Section 3 is unnecessary. The irreps of SO(3) are typically not used in invariant graph neural networks (GNNs).\\n\\n2. Graph Transformer: The model is based on the graph transformer architecture, which is also widely used in the field. This, again, is not sufficient to be considered a significant contribution of the work.\\n\\n3. 
Scalability of ACSFs: The authors claim to address scalability issues in Atom-Centered Symmetry Functions (ACSFs) using attention-based mechanisms through GRU blocks to approximate interactions between atoms. However, attention mechanisms for interaction have already been considered within the graph transformer framework. The authors should further explain the necessity of introducing this specific mechanism in their approach.\", \"insufficient_experiments\": \"1.Outdated Comparisons: The experiments compare the proposed method with multiple other approaches. However, many of these comparison methods are outdated. The authors should benchmark their model against more recent methods such as SphereNet and Molformer. Moreover, since the method is theoretically invariant, there is little need to compare it with numerous equivariant methods.\", \"questions\": \"Suggestions:\\n1. Improve the novelty of the method.\\n2. Make more comparison with newer invariant methods to demonstrate the effectiveness and robustness.\\n3. Adjust the framework of the manuscript, reduce the length of background and related work, and strengthen the correlation with the proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you for the comprehensive feedback and the time dedicated by the reviewers. Our submission contains several shortcomings and has prompted numerous questions. We recognise that resolving these concerns will need considerable time for revisions in the rebuttal process. Consequently, we have chosen to withdraw our submission for now to enhance the paper's quality.\"}", "{\"summary\": \"This paper aims to improve computational efficiency while preserving equivariance under Euclidean transformations for 3D molecules of varying sizes. By introducing the ACSFs, the authors propose the Equivariant Graph Self-Attention Transformer (EG-SAT), which leverages both geometric and relational features while maintaining roto-translational invariance. The theoretical analysis and time complexity are presented. Experimental results on the QM9 and MD17 datasets demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses a significant research question and proposes a method to solve it.\", \"The paper employs a self-attention Transformer to achieve equivariance while reducing computational costs.\", \"Comprehensive background information is provided, making the paper accessible and easy to follow.\", \"Experiments on the QM9 and MD17 datasets demonstrate the effectiveness of the proposed method.\"], \"weaknesses\": [\"The specific research problem addressed by the paper is unclear. While it provides comprehensive background information on molecular learning, symmetry, invariance, irreps, and ACSFs, the research problem is not clearly defined. In Sec. 4.1, the authors highlight the limitations of ACSFs but do so without detailed discussion or analysis to clarify the issue.\", \"The paper contains excessive information that may obscure its primary focus. 
For instance, the authors introduce irreps, but there is limited mention of its relevance within the method or analysis.\", \"Compared to ACSFs, the proposed EG-SAT still faces challenges in computational complexity. While the paper provides a complexity analysis for the proposed method, it lacks a direct comparison with ACSFs. Equations 7 and 8 do not offer computational savings relative to Equations 4 and 5.\", \"Overall, the content does not sufficiently support the contribution claims in Sec. 1. The paper lacks novelty and significant contributions.\"], \"questions\": [\"The work primarily addresses the limitations of ACSFs. However, why are ACSFs not included in the comparison?\", \"What is the computational complexity of ACSFs?\", \"Can visualizations or case studies be provided to demonstrate that the method achieves equivariance?\", \"What criteria are used to select baselines? SchNet and DimeNet are invariant networks, so why are they chosen? Why do the comparison methods differ for QM9 and MD17? More recently proposed methods may also warrant comparison.\", \"What is meant by the gating mechanism in the paper, and what improvement does it offer over ACSFs?\", \"What challenges are encountered when integrating the self-attention Transformer, and what improvements does this integration provide?\", \"What are the limitations of the current work, and what steps could improve efficiency?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes EG-SAT, an equivariant graph self-attention transformer for modeling 3D molecular structures, introducing Attention-based Atom-Centered Symmetry Functions (AACSFs) to capture higher-order geometric interactions. The authors showed the performance of proposed model on QM9 and MD17 datasets. While the paper presents some interesting ideas around combining attention mechanisms with ACSFs, there are several critical limitations that need to be addressed.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The integration of attention mechanisms with ACSFs is interesting.\\n2. the framework is applicable to multiple molecular property prediction tasks\\n3. the paper discusses the theoretical foundation of symmetry and group representation in detail\", \"weaknesses\": \"1. The paper omits several recent works in molecular property prediction, making the comparisons less relevant.\\n2. The authors didn't conduct ablation studies to evaluate the contribution of different components in the framework. \\n3. There is no computational efficiency analysis, what's more, the claims of improved scalability are unsupported by any experiments.\\n4. The motivation for incorporating angular information lacks clear examples where angular information provides benefits.\\n5. There's no proof that the attention mechanism preserves chemical validity\", \"questions\": \"1. Can the authors provide experiment for the claimed scalability improvements over traditional ACSFs?\\n2. How does the computational complexity scale with the number of atoms and chemical elements compared to existing methods?\\n3. What is the memory footprint of the attention mechanism for larger molecular systems?\\n4. Can the authors provide ablation studies showing the specific benefits of angular information integration?\\n5. 
How sensitive is the model to hyperparameter choices, particularly the attention and gating parameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BhECSDSkAE
Temporal-Aware Test-Time Training via Self-Distillation for One-Shot Image-to-Video Segmentation
[ "Zixuan Zheng", "Yilei Shi", "Jingliang Hu", "Xiao Xiang Zhu", "Lichao Mou" ]
This paper introduces a novel task and approach for one-shot medical video object segmentation using static image datasets. We address the critical challenge of limited annotated video data in medical imaging by proposing a framework that leverages readily available labeled static images to segment objects in medical videos with minimal annotation---specifically, a ground truth mask for only the first frame. Our method comprises training a one-shot segmentation model exclusively on images, followed by adapting it to medical videos through a test-time training strategy. This strategy incorporates a memory mechanism to utilize spatiotemporal context and employs self-distillation to maintain generalization capabilities. To facilitate research in this domain, we present OS-I2V-Seg, a comprehensive dataset comprising 28 categories in images and 4 categories in videos, totaling 68,416 image/frame-mask pairs. Extensive experiments demonstrate the efficacy of our approach in this extremely low-data regime for video object segmentation, establishing baseline performance on OS-I2V-Seg. The code and data will be made publicly available.
[ "medical video analysis", "one-shot video object segmentation", "test-time training", "self-distillation" ]
https://openreview.net/pdf?id=BhECSDSkAE
https://openreview.net/forum?id=BhECSDSkAE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "lswd7Gs9pp", "jG3J1R4pG3", "ZvpUz9Wf7k", "DzU0Flcj1z" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731642151153, 1729055876276, 1730654112737, 1730605211911 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13118/Authors" ], [ "ICLR.cc/2025/Conference/Submission13118/Reviewer_6Uad" ], [ "ICLR.cc/2025/Conference/Submission13118/Reviewer_L3oX" ], [ "ICLR.cc/2025/Conference/Submission13118/Reviewer_35mV" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a novel task and method for single-frame image to video segmentation, focusing on medical video object segmentation. Since annotated video data in medical images is very limited, the paper adopts a framework to achieve video object segmentation using only annotated static images, and proposes a method that uses self-distillation for training in the test phase, combined with memory mechanism to utilize spatiotemporal context, greatly reducing dependence on annotated data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a test-time training strategy combined with a memory mechanism, enabling the model to make adaptive adjustments using spatiotemporal context during the test phase, while ensuring generalization through self-distillation.\\n2. The authors present a novel dataset, OS-I2V-Seg, covering 28 categories of images and 4 categories of videos, providing a valuable resource for researchers in related fields and advancing the development of medical image segmentation tasks. \\n3. The proposed method is not only well-suited for the task of single-frame image-to-video segmentation but also maintains strong generalization in cross-domain scenarios through the self-distillation mechanism, demonstrating excellent task scalability.\", \"weaknesses\": \"The research motivation of this project is somewhat lacking. For example, one of the primary issues with ECHO images or videos is the insufficient availability of high-quality annotated data, which limits the ability to train robust segmentation models. Consequently, there are several shortcomings in the current approach:\\n\\n1. The proposed method requires training an image segmentation model as a prerequisite for video segmentation. However, for a novel test dataset with insufficient annotations, the image segmentation step is likely to fail, which would, in turn, lead to failure in video segmentation. Furthermore, ECHO does not require full video segmentation; only the ED and ES frames are necessary for clinical purposes. Datasets like CAMUS has ground-truth information like LVEF, authors can calculate the segmentation results of ED and ES to check the accuracy with Ground-truth LVEF.\\n\\n2. To address this issue, the paper should include more ECHO video segmentation experiments to validate the model's effectiveness. While the paper primarily focuses on ECHO in its experiments, the limited number of videos in the test datasets weakens the argument. Although the CAMUS dataset contains around 500 videos, its extensive preprocessing and high quality do not represent the general clinical setting, which diminishes the persuasiveness of the results. The EchoDynamic dataset could be considered as an alternative for testing.\\n\\n3. 
While the study aims to propose a novel image-to-video segmentation method, it is notable that widely recognized segmentation methods, such as SAM2 and BioMedSAM2, are not included in the comparison experiments for similar annotated segmentation tasks. This omission undermines the credibility of the results. The authors should explain more about why these exps not included.\\n\\n4. The proposed method requires further validation, particularly in terms of time inference. Without such experiments, the current approach risks being comparable to frame-by-frame image segmentation, which would nullify the significance of this method. Time inference experiments are crucial to demonstrate the advantage of the proposed video segmentation method. It is a good idea to compare the segmentation time cost for segmenting the whole video with traditional frame-to-frame video segmentation model or SAM2.\", \"questions\": \"1. add exp on EchoDynamic dataset.\\n2. try to compare with SAM2\\n3. make the significance of the paper more meaningful.\\n4. ECHO images really requires 3-types segmentation as shown in table1. Were these three segmentation trained separately or together?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"New published medical dataset needs extra privacy illustration.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method for one-shot video object segmentation where only the first frame of the video is annotated during testing. The main idea involves first pre-training an image segmentation backbone on the labelled static images, followed by adapting the segmentation model to medical videos through test time training strategy using a form of spatiotemporal consistency and self-distillation.\\n \\nFor enforcing spatiotemporal consistency, the authors proposed a FIFO memory mechanism (acting as support set) that stores the features from the segmentation backbone of support images as keys (K), and the output of segmentation head (enhanced features) as values (V). The FIFO memory always contains the first annotated frame of the video. To predict the final enhanced feature for the query frame q, authors use similarity of q\\u2019s spatial features with those of keys present in FIFO memory queue as weights for weighted average of values present in FIFO. This process can be thought of autoregressive prediction based on softmaxed similarity with past T frames observed before the current frame.\\n \\nThe self-distillation from the original pre-trained model helps to regularize the adapted model by preventing overfitting to the labelled frame of the video. Authors utilize Hinton\\u2019s KL divergence as teacher student distillation loss treating the softmaxed features over its dimension as the corresponding probability.\\n \\nMoreover, the paper also introduces a new Dataset OS-I2V-Seg for this task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper tackles an important problem of generalizing dense segmentation task over videos using sparsely annotated frame (in their case the first annotated frame).\\n2. The experiments on several medical videos show that their method outperform different one-shot, cross-domain one-shot and test-time training methods.\\n3. 
One major strength about this method is incorporation of 2D image segmentation model for 3D video frames during testing using temporal awareness though additional memory mechanism.\", \"weaknesses\": \"Major Weakness:\\n1. In general, though the main idea of the paper is conveyed clearly, the specific detail about their method is missing. For instance, details about one-shot segmentation model pretrained on the static images is missing in Section 3.2 and Figure 1. Specifically, only backbone network is shown in the figure, while no discussion on the prior mask generation module, multi-scale feature enhancement module and segmentation head is included. It\\u2019s unclear how the authors use (Peng et al. 2023)\\u2019 s correlation mechanism for prior mask generation module. If I am correct, Peng et al. 2023, use correlation between single support image\\u2019s features and the query image\\u2019s features to generate the binary mask. Do the authors use the same methodology?\\n \\n2. In the introduction section, the authors mention one of the major limitations of Few Shot methods is dealing with discrepancy between the domains of base and novel classes [line 083-084]. Could the authors clarify what base class means in this context? Does it mean that the support images are from different domain compared to the query image, or it means training and testing datasets are from different domain. In either case, it is unclear how the current methodology tackles the domain shift problem. In the first scenario, there is no domain shift as the support frame is from the same video, while in the second scenario, I am not sure how the current method can overcome domain shifts between the pre-training and testing domains.\", \"minor_weakness\": \"1. In eq. 3, what are the dimension of W, v^R, v^M. It may be unclear to the readers how the read-out value is obtained exactly and further utilized and how it represents weighted average of similar features in the past frames.\\n \\n2. The title of the paper should highlight medical video segmentation as the current approach is not tested for non-medical videos.\\n \\n3. Size of FIFO queue. Since the performance the method is correlated to the FIFO queue, a discussion on how to set the size of queue is lacking.\", \"questions\": \"1. Different self-distillation regularization? The authors employ KL divergence between the features obtained from pre-trained segmentation backbone and the current adapted model. This model of self-distillation assumes that the features are \\u201clogits\\u201d for the loss computation, which might give incorrect regularization as the softmax(logits) = softmax(logits + contant). Why is this model motivated? Does this help with the domain shift problem discussed by the authors earlier?\\n \\n2. Is the current method autoregressive? In other words, for predicting the segmentation mask of current query, does the method only look at the past frames? Can this be extended to bidirectional approach, where we look at all the frames for the prediction of the current query for eq 3?\\n \\n3. Can the current method be extended to foundational segmentation models like SAM (Segment Anything Model)? How does current method compare to SAM? 
It would be good to know if the current method is a useful alternative to these foundation models based on computation, segmentation accuracy, etc.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel task and approach for one-shot medical video object segmentation using static image datasets. The authors address the challenge of limited annotated video data in medical imaging by proposing a framework that leverages readily available labeled static images to segment objects in medical videos with minimal annotation. The proposed method involves a two-stage process, training a one-shot segmentation model on images and temporal-aware test-time training via self-distillation. Experimental results demonstrate that the proposed method outperforms existing approaches in this low-data regime for video object segmentation on OS-I2V-Seg.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper is well-motivated. It presents a novel problem formulation by introducing one-shot medical video object segmentation using only static image datasets.\\n\\nThe proposed framework combines a memory mechanism with self-distillation during test-time training. This allows the model to leverage temporal information in videos without requiring additional annotated video data during training.\\n\\nExtensive experiments, including comparisons with state-of-the-art methods and ablation studies, demonstrate the effectiveness of the proposed approach.\", \"weaknesses\": \"1. There are no details of the multi-scale feature enhancement module in the manuscript and in the figure.\\n\\n2. The process of usage of memory values is not revealed in Figure 1.\\n\\n3. Although ablation studies are presented, more detailed analysis on the impact of specific hyperparameters (e.g., memory bank size, selection of top-k affinities) could provide deeper insights into the method's performance and robustness.\\n\\n4. The results section is not well-organized or described. It would be better to provide a more detailed description and analysis and reorganize some content from the Appendix into the manuscript within the page limitation.\\n\\n5. It would be better to also collect other SOTA performance on HMC-QU, ASU-Mayo, and CAMUS to show the progress between the current work and the ultimate target based on temporal models. \\n\\n6. The possible reasons behind the dramatic performance decrease are not analyzed. For example, why does no TTT generally generate sub-optimal performance, but using some modules may slightly decrease the performance, and some make training collapse, yet using them all can generate the best performance? There is no deeper analysis of the performance gap and how and why some designs work.\", \"questions\": \"Please see the weakness part 1,2,3,4,5,6.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BhBVAC5i2T
Meta-Referential Games to Learn Compositional Learning Behaviours
[ "Kevin Yandoka Denamganai", "Sondess Missaoui", "James Alfred Walker" ]
Human beings use compositionality to generalise from past to novel experiences, assuming that past experiences can be decomposed into fundamental atomic components that can be recombined in novel ways. We frame this as the ability to learn to generalise compositionally, and refer to behaviours making use of this ability as compositional learning behaviours (CLBs). Learning CLBs requires the resolution of a binding problem (BP). While it is another feat of intelligence that human beings perform with ease, it is not the case for artificial agents. Thus, in order to build artificial agents able to collaborate with human beings, we develop a novel benchmark to investigate agents’ abilities to exhibit CLBs by solving a domain-agnostic version of the BP. Taking inspiration from the Emergent Communication, we propose a meta-learning extension of referential games, entitled Meta-Referential Games, to support our benchmark, the Symbolic Behaviour Benchmark (S2B). Baseline results and error analysis show that the S2B is a compelling challenge that we hope will spur the research community to develop more capable artificial agents.
[ "referential game", "language grounding", "compositionality", "systematicity", "few-shot learning", "meta-learning", "reinforcement learning", "language emergence", "symbolic behaviours", "benchmark" ]
Reject
https://openreview.net/pdf?id=BhBVAC5i2T
https://openreview.net/forum?id=BhBVAC5i2T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rvRFnsh7vy", "p4EPbTn5WU", "gL1gEo5IWF", "eX5Y6pNKTB", "eA2iAuxsFf", "W7BH8duTS2", "OXeuYMxKre", "KNGsO9DFux", "IwgE4YkPfv", "GjvHPUieHj", "Gbw5J57M0T", "1zJ7BE9ygZ", "0UrfABV9Pt" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_review" ], "note_created": [ 1730448151113, 1733259470812, 1733058273340, 1733058297606, 1733090375871, 1730108290255, 1733167087711, 1733090350230, 1737524143759, 1733259512207, 1734721168092, 1730589230192, 1730551703845 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11749/Reviewer_45zL" ], [ "ICLR.cc/2025/Conference/Submission11749/Authors" ], [ "ICLR.cc/2025/Conference/Submission11749/Authors" ], [ "ICLR.cc/2025/Conference/Submission11749/Authors" ], [ "ICLR.cc/2025/Conference/Submission11749/Authors" ], [ "ICLR.cc/2025/Conference/Submission11749/Reviewer_U8cn" ], [ "ICLR.cc/2025/Conference/Submission11749/Reviewer_kMvd" ], [ "ICLR.cc/2025/Conference/Submission11749/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11749/Authors" ], [ "ICLR.cc/2025/Conference/Submission11749/Area_Chair_NLdM" ], [ "ICLR.cc/2025/Conference/Submission11749/Reviewer_kMvd" ], [ "ICLR.cc/2025/Conference/Submission11749/Reviewer_FTQk" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a benchmark called S2B that can evaluate the ability of artificial intelligence agents in combinatorial learning behaviors (CLBs). S2B uses Meta-Referential Games as the basic framework and uses the SCS method to represent stimuli. This paper uses S2B to test the CLBs of multi-agent reinforcement learning models and LLM.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces the S2B benchmark, designed to evaluate the combinatorial learning behaviors (CLBs) of AI models.\", \"It proposes the SCS method for representing stimuli in a domain-independent manner, avoiding reliance on specific modalities like visual, verbal, or auditory information.\", \"Meta-Referential Games are presented as the primary framework within the S2B benchmark, aiming to assess agents' capabilities in symbolic learning and combinatorial learning behaviors (CLBs).\"], \"weaknesses\": [\"Insufficient validation of domain-agnostic BP. While the S2B benchmark and meta-referential game frameworks intend to construct domain-agnostic BP, there is a lack of sufficient experimental data to validate their applicability in various domains or applications. Whether this benchmark and framework can be extended to different fields such as vision and language still needs to be further verified.\", \"Terminology and lack of concrete examples: The paper contains a large number of terms (such as CB, CLB, BP, support stage, query stage, etc.), although their concepts are mentioned in the article, the concepts are relatively vague and lack simple examples to help readers understand. 
It may not be intuitive for readers who are new to these concepts.\"], \"questions\": [\"In Figure 2, it is not clear what Latent Stimuli is.\", \"In line 267, the standard deviation sampling interval of the Gaussian distribution is not explained.\", \"In 4.1 and 4.2 section, there are lacks of details and examples: the representation and meaning of the symbol combination, the specific process of the experiment are not explained in detail, which confuses some readers when understanding the experimental design. While the interaction between multiple agents is mentioned, no specific examples are provided to illustrate how the messaging and identification tasks are performed, which can lead to barriers to understanding.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply 2 (part 1/2)\", \"comment\": \"Thank you for your reply. We apologise for not being able to address the minor points in our revised PDF or in our previous response and for submitting a 10.6 pages revision, time and life unfortunately caught up with us.\", \"we_attempt_to_address_the_minor_comments_below\": \"1. \\\"generalise -> generalize\\\" : \\n\\n Throughout the paper, we try to abide by British English where the spelling 'generalise' is correct.\\n \\n\\n2. \\\"l391: axises -> axis\\\" : \\n\\n Thank you, we will make this correction in our next revision.\\n \\n3. \\\"l87: What is \\\"a semantic domain that can be probed and queried\\\"\\\" :\\n\\n We refer to a 'semantic domain' as a set of symbols and meanings. The expression contains two requirements about this set of meanings. We enforce these requirements for experimentation and measurement purposes. Indeed, we propose to investigate the ability of AI agents to manipulate symbols and meanings, therefore it is important that those manipulations can be probed/measured and that the agent's abilities be queried by giving them prompts to respond to. On line 87, we simply meant to convey that artificial languages that emerge over the course of RGs provides valuable affordances for experimentors to gain insights about agents' understanding and abilities. For instance, by building our rule-based speaker agent that communicates through the semantic domain (the artificial language) with the learning listener agent, we as experimentors can gain understanding of the listener agent's CLB capabilities.\\n \\n4. \\\"Figure 2 and Figure 4 are hard to read: the fontsize is too small and the bold text is hard to read.\\\" :\\n\\n Thank you, we will increase the fontsize and try to make the bold text more readable in our next revision.\\n \\n5. \\\"l256: partitionaing\\\" : \\n\\n Thank you, we will correct this in our next revision.\\n \\n6. \\\"l334: what does it mean do bridge the gap between two conditions \\\"Hill-RSC and Chaa-RSC\\\".\\\" :\\n\\n Both richness of the stimuli conditions aim to develop systematicity in agents but they appeared in different settings (e.g. embodied reinforcement learning for Hill-RSC versus unsupervised learning for Chaa-RSC) and therefore it is unclear how do each of the requirements relate to each other. This gap in context/setting of application is the gap that we mention in the mentioned sentence.\\n\\n And, in this work, we present a benchmark that instantiates hyperparameters that allow experimentors to control each of these requirements in a single, common setting. 
In doing so, we hope that our benchmark will allow shedding some lights on what kind of relationship each requirement share with each other, and therefore possibly reconcile them into a single overarching understanding of what is required (and possibly what is sufficient) to further systematicity in agents.\\n \\n7. \\\"l349: What is the core memory module ?\\\" :\\n\\n Thank you for your attention to details, we should have clarified this indeed. We propose to clarify it in our next revision in the Agent Architecture paragraph of Section 4:\\n The 'core memory module' is the expression we use to refer to the part of the agent architecture that aims to integrate information from one RL timestep to another. We leverage both simple 2-layers LSTM modules (as described in Appendix C, which we point the reader to in line 377 of the revised PDF) or Emergent Symbol Binding Network (ESBN) (Webb et al., 2020) and the Dual-Coding Episodic Memory (DCEM) (Hill et al., 2020) (Section 4.2.1).\\n \\n8. \\\"The beginning of 4.2 about meta-RG is very clear and could probably go to Section 3.\\\" :\\n\\n Thank you, we were thinking of it as describing an experimental setup, but we do appreciate that it can indeed be considered part of the method as well, thus we agree to move it to Section 3 in our next revision. \\n \\n9. \\\"l285-286 : It is confusing as the described set of stimuli used is misaligned with what is described in Section 2. The speaker does not receive the same set of stimuli in both.\\\" :\\n\\n Thank you for catching this possible source of confusion. We presented in Section 2 some variants of RGs but it would indeed be valuable to present on top of or solely the variant that we are actually using in the experiments in the paper. Our initial intent was to provide a bigger picture in Section 2 towards later focus on a specific variant that we use in our experiments but we can see how this can lead to confusions of the reader expecting the variant presented in Section 2 as being also the variant we will later use in the paper. Thus, in our next revision, we propose to focus our explanations in Section 2 on the variant that we will actually use in the remainder of the paper.\"}", "{\"title\": \"Reply 1 (part 1/N)\", \"comment\": \"We thank the reviewer for their time and thorough review and constructive comments. We reply to main comments below.\\n\\n## Comment 1: \\\"a previous version of the benchmark (published ?)\\\"\", \"we_assume_that_you_refer_to_the_first_line_of_paragraph_1_in_section_3\": \"\\\"The version of the S2B that we present in this paper is focused on evaluating receptive and constructive behaviour traits [...]\\\"\\n\\nWe mean to clarify that there is no other publication on S2B (the footnote link hidden for review refers to the github link of the codebase), it is the main novelty of the paper here.\\nOur sentence was meant to convey the fact that S2B contains more than just the ability to evaluate receptive and constructive behaviour traits, but we focus here only on this part and therefore present a version of S2B that only focuses on this.\\n\\nWould the following sentence disambiguate effectively?\\n\\\"While S2B contains tasks towards evaluating different aspects of symbolic behaviours, we only present in this paper the part that concerned with evaluating receptive and constructive aspects of symbolic behaviours.\\\"\\n\\n## Comment 2: \\\"The paper is hard-to-follow and often confusing\\\"\\n\\nWe assume that Section 3 was a main reason for this comment. 
We have pushed a revision PDF that reframes Section 3 in a top-down narrative, with extra focus on the introductory paragraph, and then rewriting the subsections towards introduction Meta-RG first, before going into the details of the stimuli and the SCS representations.\\n\\nWe hope that this top-down narration is helpful in increasing the clarity of the paper.\\nPlease let us know if you see any further changes that could help improve the clarity of the paper. \\n## Comment 3 + 4: \\\"Human-agent collaboration is only used at a high-level motivation\\\" + \\\"Emergent communication [...] is it a framework?\\\"\\n\\nEmergent Communication (EmeCom) is a subfield of Natural Language Processing (NLP) and Representation Learning, which has had multiple workshops in NeurIPS (2017: https://sites.google.com/site/emecom2017/ 2018: https://sites.google.com/site/emecom2018/ ; 2019: https://sites.google.com/view/emecom2019 ; 2020: https://sites.google.com/view/emecom2020/home) and ICLR 2022 ( https://sites.google.com/view/emecom2022/ / https://iclr.cc/virtual/2022/workshop/4562). While NLP approaches used to learn from fixed datasets, ignoring the how and why of language use, EmeCom proposed to focus on trying to capture the functional and interactive aspects of communication with natural and artificial languages. EmeCom is focused with cooperation between agents through language use.\\n\\nWhen considering natural language use, EmeCom would focus on cooperation between agents and human beings [1,2]. That is where the human-agent collaboration aspect in our paper stems from, and it is emphasised when considering symbolic behaviours which are typically exhibited by human beings. Given those legacy aspects, and the motivational aspect that you mentioned, it seems difficult to not mention it in the paper.\\n\\nThus, we propose to rephrase the abstract sentence starting with \\\"Taking inspiration from the Emergent Communication, [...]\\\" with the following sentence to disambiguate and make sure that readers do not get a wrong idea:\\n\\\"We leverage referential games from the Emergent Communication (EmeCom) context rather than a human-in-the-loop context for it has a long history of simulating aspects of symbolic behaviours without human involvement. We propose a meta-learning extension of referential games, entitled Meta-Referential Games, to support our benchmark, the Symbolic Behaviour Benchmark (S2B).\\\"\\n\\nPlease let us know if this modification addresses your concerns, and whether you would see this sentence being emphasised again in any specific part of the main paper (e.g. maybe in the Language Grounding and Emergence paragraph of the introduction to clarify that we could have employed a human-in-the-loop approach but rather resort to a referential game as stated?).\\n\\n### References:\\n[1] : Lazaridou, Angeliki, Anna Potapenko, and Olivier Tieleman. \\\"Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning.\\\"\\u00a0_Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_. 2020.\\n\\n[2] : Lazaridou, Angeliki, and Marco Baroni. \\\"Emergent multi-agent communication in the deep learning era.\\\"\\u00a0_arXiv preprint arXiv:2006.02419_\\u00a0(2020).\"}", "{\"title\": \"Reply 1(part 2/N)\", \"comment\": \"## Comment 5: \\\"\\u00a0What does it mean to \\\"instantiate a BP\\\" ? Do any representations of any latent factors instantiate a BP ? 
I overall don't understand why this paper needs to talk about BP. This paper is about meta-learning behaviors that leverage the compositional nature of their inputs.\\\"\\n\\nBefore delving into this, we mean to emphasise that we tried to simplify our narrative around the binding problem in the paper, which probably is the reason why our statements around it feels unclear. \\nIn this reply, we attempt to provide our whole perspective on the binding problem and how it relates to our work:\\n\\nA binding problem (BP) refers to the inability of a given model/architecture/agent with distributed representations to efficiently segregate and (compositionally) re-use/-bind information that is spread throughout the architecture towards solving a specific task. Thus, to instantiate a BP **for a given model** means to instantiate a task that requires the given model to efficiently segregate and re-use information that is spread through the model's distributed representations. \\n\\nIn that sense, \\\"any latent factors\\\" on its own does not constitute a task specification, so we can't weigh in on whether it would instantiate a BP for a given model. That being said, if we want to build a task from any latent factors as observations of the agent, we can consider that of asking a given agent to act, from said observation, according to a specific goal. Then, from that setting, how can we verify that a BP is instantiated in this task for this model/agent? Unfortunately, [Greff et al., 2020] does not really weigh in on this, but we meant to propose to evaluate the degree to which a BP is instantiated in a given task for connectionist models by evaluating the performance of a basic connectionist architecture (i.e. MLP or MLP+RNN) on said task. Thus, any latent factors representations, such as OHE or SCS, could instantiate some amount of BP for a connectionist agent.\\nThat being said, for the kind of tasks we consider, we argued in Appendix E.1 that the OHE representation does not instantiate a BP whereas the SCS representation does. Our narrative in appendix E.1 was tentatively simplified by making it binary rather than frame it as the degree to which a BP may be instantiated...\\n\\nAs far as what the paper is about, and why do we think it is important to talk about BPs, it is found in the notion of CLBs. As you acknowledge, this paper is about meta-learning behaviours that leverage the compositional nature of the inputs, that is to say about CLBs. We think that it is important to talk about BP as it is one of the main difficulties that needs addressing when it comes to getting agents to learn CLBs. We were aiming to provide the reader with more perspective about why CLBs are difficult. \\nThen, realising the gap in the literature for a benchmark that poses a clear BP, we weave this contribution into our narrative.\\n\\nAll that being said, we appreciate your advice about cutting BP concerns out from the paper. We propose to move those concerns to the appendix, in effect:\\n- moving paragraph Binding Problem & Meta-Learning of the introduction into Appendix E.1 ;\\n- removing mentions of BP in Section 3.2 mainly\\n\\nPlease let us know if this reply and proposed modification address your concerns, and/or whether you have any further advice on the matter.\"}", "{\"title\": \"Reply 1 (part 4/4)\", \"comment\": \"## Comment 8: \\\"I do not understand how the object-centric variant of the representation (for the listener) is built. 
That should be clarified in Section 3.\\\"\\n\\nWe appreciate your advice and propose to clarify adding the following lines at the end of Section 3.2 on the SCS representation:\\n\\n\\\"As a continuous representation for symbolic spaces, and on the contrary to discrete representations like OHE/MHE, the SCS representation enables us to consider object-centric stimuli, that is to say stimuli that represent the same meaning, the same symbol or object, while being superficially different. Indeed, once symbolic values are fixed on each factor dimension, by sampling latent values $l(i)\\\\in[1; d(i)]$, infinitely many values can be used on each dimension to populate the SCS vector by sampling from the Gaussian distribution associated with $l(i)$ and parameterised as $\\\\mathcal{N}(\\\\mu_{l(i)} , \\\\sigma_{l(i)} )$. In other words, object-centric stimuli are i.i.d samples from the set of Gaussian distributions attached to each instantiated $l(i)$ latent value of each dimension $i$. They can be thought of as similar pictures of a same object under different viewpoints, for instance.\\\"\\n\\nPlease let us know if this addresses your concerns on the matter, and/or whether you can identifies ways to further improve on the issue.\"}", "{\"summary\": \"This paper introduces the Symbolic Behaviour Benchmark (S2B), a meta-learning benchmark designed to test agents\\u2019 abilities to generalize compositionally within single episodes through Compositional Learning Behaviours (CLBs). S2B utilizes Meta-Referential Games, an extension of referential games embedding a Binding Problem (BP), challenging agents to dynamically bind and recombine information from limited examples. Baseline results on multi-agent reinforcement learning (MARL) and large language models (LLMs) suggest S2B presents a challenging benchmark for current AI capabilities.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Originality: The benchmark provides a unique approach to evaluating compositionality in a few-shot learning context, moving beyond static generalization and introducing a dynamic, episode-based test.\", \"Quality: The benchmark and supporting experiments are rigorously defined, leveraging established methods like referential games and meta-learning frameworks to create a novel testing ground for CLBs.\", \"Significance: S2B addresses an important challenge in AI\\u2014evaluating compositional learning in dynamic environments\\u2014which could spur new research into agent architectures and learning strategies.\"], \"weaknesses\": [\"Presentation: The structure of the meta-RG could be clarified by using a single detailed example to illustrate an episode from beginning to end, making the compositional requirements more apparent.\", \"Experiments: Evaluating LLMs within this benchmark may not be entirely fair, as the task setup deviates from natural language processing, which LLMs are primarily designed for. The benchmark\\u2019s symbolic structure lacks the natural language context that LLMs are optimized to process, raising concerns about using LLMs without modifications to better align with symbolic input.\", \"Human Baseline: The lack of a human experiment or baseline raises questions about the difficulty of the benchmark for agents versus human performance. 
Introducing human testing on S2B could provide additional insights, highlighting any inherent complexity in CLB learning and enabling better evaluation of AI performance relative to human compositional understanding.\"], \"questions\": [\"Could the authors clarify how the vocabulary permutation scheme is implemented and whether it impacts the agents\\u2019 learning or communication strategies in any unintended ways?\", \"How sensitive are the experimental results to variations in the number of symbolic dimensions (Ndim) or other hyperparameters of the SCS representation? Additional experiments on this could offer more insight into the scalability of the benchmark.\", \"Were any human experiments conducted to provide a baseline for performance on S2B? Including human data could offer a valuable perspective on the complexity of the benchmark.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their clarifications and suggestions that I overall approve.\\n\\nIf systematicity is a synonym of compositional generalization etc... I would suggest to use only one term and stick to it through the paper.\\n\\nHowever, the authors did not address my minor points, I don't see all clarifications (even removals) in the revised version and the paper goes beyond 10 pages (10.6 pages).\"}", "{\"title\": \"Reply 1 (part 3/4)\", \"comment\": \"## Comment 6: \\\"Some critical concepts are unclearly defined: systematicity, ZSCT (l 174)\\\"\\n\\nWe thank the reviewer for their feedback and acknowledge that our definitions for these terms (systematicity being a synonymous for compositional behaviours - cf. l. 156-158) and their relationship is slightly loose. Indeed, zero-shot compositionality tests (ZSCTs) consist of a quantitative way to measure systematicity. We propose to clarify this by adding the following sentence at the end of the relevant sentence on l. 156-158:\\n\\n\\\"Systematicity (or also referred to as compositional or algebraic generalisation) is commonly measured using zero-shot compositional tests (ZSCTs - [1,2,3]) where a set of stimuli made up of novel combinations of familiar attributes are presented to the tested agents.\\\"\\n\\nWe hope that this is sufficiently addressing your concerns, please let us know if not or if you would seek any other additions of that kind.\\n\\n### References:\\n[1] : Choi, Edward, Angeliki Lazaridou, and Nando de Freitas. \\\"Compositional Obverter Communication Learning from Raw Visual Input.\\\"\\u00a0_International Conference on Learning Representations_. 2018.\\n\\n[2] : Lake, Brenden, and Marco Baroni. \\\"Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks.\\\"\\u00a0_International conference on machine learning_. PMLR, 2018.\\n\\n[3] : Denamgana\\u00ef, Kevin, Sondess Missaoui, and James Alfred Walker. \\\"Visual Referential Games Further the Emergence of Disentangled Representations.\\\"\\u00a0_arXiv preprint arXiv:2304.14511_\\u00a0(2023).\\n\\n## Comment 7: \\\"symbolic space (l195), EoA (l379), posdis (l163) and bosdis (l163). It would help if compositionality measures were briefly explained (posdis, bosdis)\\\"\\n\\nWe thank the reviewer for flagging our lack of explicitness in defining what we mean by symbolic space. We had a definition previously in l. 242-245, now in lines 335-340, but we can understand that it is not sufficiently emphasised. 
Thus, we can propose to emphasise it again at the beginning of the new Section 3.1 (Meta-Referential Games), after the second sentence ending in \\\"[...] symbolic space.\\\" . We propose to add the following:\\n\\n\\\"We define a $N_\\\\text{dim}$-dimensioned symbolic space as a finite set of vectors with $N_\\\\text{dim}$ dimensions that is entirely characterized by parameter integers $( d(i) ) \\\\in \\\\mathbb{N}^{N_\\\\text{dim}} $, that we refer to as its semantic structure, such that each i-th dimension can take symbolic values within the integer range $[1,d(i)]$.\\\"\\n\\nNext, with respect to Ease-of-Acquisition (EoA), we were missing a clear definition of the acronym even though we defined its meaning, previously in lines 378-382 and now in the revised PDF in lines410-414. We propose to emphasise it further in the revision by adding the following sentence, providing credit where it is due:\\n\\n\\\"Ease-of-Acquisition metric is inspired by Ease-of-teaching [1], which trains new listener agents with frozen speakers on the same task than the frozen speaker agent was previously trained on. The critical differences are that (i) the new listener are trained on a different task (which would make this metric part of the category of metric proposed as Ease of Transfer Learning[2]), that of a referential game rather than a meta-referential game, and (ii) that in our context the frozen speaker agent has not been trained on the current task at all, indeed its weights/parameters have not been changed since the beginning of the current Meta-RG where it faces a novel symbolic space, but it has tried to adapt to it using its core memory mechanism, in a meta-learning fashion. \\\"\\n\\nFinally, with respect to compositionality measures and your advice to provide brief explanations, we propose to include in the appendix a few paragraphs that summarises their motivations and the algorithm to compute them. We will point to those paragraphs from l.163.\\n\\nWe hope that this reply addresses your concerns. Please let us know if you see some further ways to improve our work.\\n\\n\\n### References:\\n[1] : Li, Fushan, and Michael Bowling. \\\"Ease-of-teaching and language structure from emergent communication.\\\"\\u00a0_Advances in neural information processing systems_\\u00a032 (2019).\\n\\n[2] : |Chaabouni, Rahma, et al. \\\"Emergent communication at scale.\\\"\\u00a0_International conference on learning representations_. 2022.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply 2 (part 2/2)\", \"comment\": \"10. \\\"Section 4.2.1: Is it using the rule-based speaker agent ? Is it normal there is no 4.2.2 ? Then 4.3. is using the Posdis-speaker agent ?\\\" :\", \"thank_you_for_catching_this_mistake\": \"4.3 is actually 4.2.2 and as it is indeed using the posdis-speaker agent and therefore takes place in the single-agent listener-focused RL context. We will correct this issue in our next revision.\\n\\nPlease let us know whether those replies fully address your concerns and/or whether you could see further ways to improve the paper.\"}", "{\"metareview\": \"This paper presents a benchmark to assess the ability of artificial agents to meta-learn behaviors that leverage the compositional nature of their sensory inputs. In this benchmark, two collaborative agent strive to meta-learn to solve referential games. In each episode, the two agents first execute a series of referential games that take in samples endowed with a specific compositional distribution. 
Then, they test their ability to generalize to novel samples from the same distribution. The global objective is to meta-learn this task for any compositional distribution of inputs. Experiments support that current methods largely fail at this task.\\n\\nThe paper's strengths are an important benchmark, illustrating how approaches fail. However, the paper is a bit complex and is over the page limit and reviewers questioned what were the main takeaways from the paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not fully address reviewers concerns and were also over the page limit.\"}", "{\"summary\": \"This paper introduce a novel benchmark aiming to assess the ability of artificial agents to meta-learn behaviors that leverage the compositional nature of their sensory inputs. In this benchmark, two collaborative agent strive to meta-learn to solve referential games. In each episode, the two agents first execute a series of referential games that take in samples endowed with a specific compositional distribution. Then, they test their ability to generalize to novel samples from the same distribution. The global objective is to meta-learn this task for any compositional distribution of inputs. Experiments support that current methods largely fail at this task.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This benchmark introduces a challenging and important problem for current methods. Experiments support the claim of the paper.\", \"weaknesses\": \"It is hard to assess the novelty as it is stated in the paper that there is a previous version of the benchmark (published ?). The paper is hard-to-follow and often confusing.\", \"questions\": \"The paper could likely be simplified:\\n\\nThe paper introduces a lot of concepts, the abstract refers to compositionality, binding problem, emergent communication, human-agent collaboration and meta-referential game. Also few-shot learning in the introduction.\\n- Human-agent collaboration is only used as a high-level motivation of the work, so its reference in the middle of the abstract and line 52 mostly confuse the reader with works including humans in the loop.\\n- Emergent Communication: In the abstract, it is unclear what \\\"Emergent Communication\\\" refers to. Is it a framework ?\\n- The binding problem (BP): The relation between CLB and binding problem is unclear in the abstract, not very clear in the introduction, and get clearer in Greff et al. (2020). Most of the statements related to BP are unclear: that an \\\"inherent BP\\\" must be solved be agents to exhibit CLB. \\\"Solving the BP instantiated in such a context, i.e. re-using previously-acquired information in ways that serve the current situation\\\" is done by all learning artificial agents. What does it mean to \\\"instantiate a BP\\\" ? Do any representations of any latent factors instantiate a BP ?\\nI overall don't understand why this paper needs to talk about BP. This paper is about meta-learning behaviors that leverage the compositional nature of their inputs.\", \"some_critical_concepts_are_unclearly_defined\": \"systematicity, ZSCT (l 174), symbolic space (l195), EoA (l379), posdis (l163) and bosdis (l163). It would help if compositionality measures were briefly explained (posdis, bosdis).\\n\\nI do not understand how the object-centric variant of the representation (for the listener) is built. 
That should be clarified in Section 3.\", \"minor_points\": [\"generalise -> generalize\", \"l391: axises -> axis\", \"l87: What is \\\"a semantic domain that can be probed and queried\\\"\", \"Figure 2 and Figure 4 are hard to read: the fontsize is too small and the bold text is hard to read.\", \"l256: partitionaing\", \"l334: what does it mean do bridge the gap between two conditions \\\"Hill-RSC and Chaa-RSC\\\".\", \"l349: What is the core memory module ?\", \"The beginning of 4.2 about meta-RG is very clear and could probably go to Section 3.\", \"l285-286 : It is confusing as the described set of stimuli used is misaligned with what is described in Section 2. The speaker does not receive the same set of stimuli in both.\", \"Section 4.2.1: Is it using the rule-based speaker agent ? Is it normal there is no 4.2.2 ? Then 4.3. is using the Posdis-speaker agent ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel benchmark, the \\\"Symbolic Behaviour Benchmark\\\", for evaluating compositional learning behaviors. The study introduces Meta-Referential Games (Meta-RGs), a meta-learning extension of referential games, to test agents' ability to solve a binding problem that is crucial for learning CLBs. This benchmark emphasizes symbolic receptivity and constructivity, encouraging agents to develop compositional generalization skills while interacting with each other.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Relevance: I think this paper tries to address an important problem in that it proposes a benchmark in which succesful behavior means that agents learned to generalize compositionally, instead of only having learned to generalize compositionally. The problem of compositionality and compositional generalization is of general interest to the community. Thus, this benchmark might be generally useful.\", \"novelty\": \"The introduction of S2B and Meta-RGs adds depth to the compositionality field by pushing beyond mere combinatorial generalization (CG) to a meta-learning context where agents adapt to unseen symbolic structures. The Symbolic Continuous Stimulus (SCS) representation is an innovative method to instantiate a BP, ensuring agents must infer structures over multiple observations, aligning with the real-world learning constraints in open-ended contexts.\", \"analyses\": \"The experiments establish state-of-the-art limitations effectively, showing that both MARL agents and LLMs struggle with this benchmark, thus illustrating its difficulty and relevance. The experiments are clear and I like that they always come with a hypothesis followed by the results.\", \"introduction\": \"The first few sections, i.e. overview of the problems of systematicity/compositionality, lingustic compositionality, and compositionality are helpful.\", \"weaknesses\": \"Accessibility: The meta-learning setup, combined with the specialized SCS representation, might limit accessibility and reproducibility. The SCS's construction, particularly the Gaussian kernel setup, could be further detailed, I didn't quite get what was going on there. The writing is generally quite verbose and I had really some difficulties in following along. That there are many abbreviations throughout doesn't really help here either. Some of the figures are very small an complicated to read. 
This makes it again hard to follow.\", \"scope\": \"The focus on RL and MARL agents is suitable, but extending evaluations to to other models could have been fun. I would have really liked to see something on further multi-modal models here. Many of the evaluations are currently in the simplest form, i.e. the basic form of the proposed game, the most common agents playing them, as well as LLMs without any modifications to the standard prompts. This makes it a little unclear what exactly makes the difference in agents' inabilities to do these games well.\", \"clb_definition\": \"While the paper defines CLBs distinct from CBs, it would benefit from clearer operationalization criteria to guide comparisons. Are there any performance metrics beyond linguistic compositionality and RG accuracy?\", \"llm_behavior\": \"The below-chance LLM performance is interesting but could be further analyzed.\", \"fit\": \"It felt a little like this paper would fit better to a more targeted conference but I can be convinced.\", \"questions\": \"What do we learn from this? Essentially most of the results just point to an inability of different agents to learn to generalize compositionally. Is the idea that the community should now focus on getting these agents to be better on the benchmark? How would the authors imagine such progress?\\nI'm not quite sure but what is meant by domain-agnostic BP?\\nPerhaps adding short summaries to every section about what the main message is could help?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
Bgz3okeZ7H
AoPS Dataset: Leveraging Online Olympiad-Level Math Problems for LLMs Training and Contamination-Resistant Evaluation
[ "Sadegh Mahdavi", "Muchen Li", "Kaiwen Liu", "Christos Thrampoulidis", "Leonid Sigal", "Renjie Liao" ]
Advances in Large Language Models (LLMs) have sparked interest in their ability to solve Olympiad-level math problems. However, the training and evaluation of these models are constrained by the limited size and quality of available datasets, as creating large-scale data for such advanced problems requires extensive effort from human experts. In addition, current benchmarks are prone to contamination, leading to unreliable evaluations. In this paper, we present an automated pipeline that leverages the rich resources of the Art of Problem Solving (AoPS) forum, which predominantly features Olympiad-level problems and community-driven solutions. Using open-source LLMs, we develop a method to extract question-answer pairs from the forum, resulting in **AoPS-Instruct**, a dataset of more than 650,000 high-quality QA pairs. Our experiments demonstrate that fine-tuning LLMs on AoPS-Instruct improves their reasoning abilities across various benchmarks. Moreover, we build an automatic pipeline that introduces **LiveAoPSBench**, an evolving evaluation set with timestamps, derived from the latest forum data, providing a contamination-resistant benchmark for assessing LLM performance. Notably, we observe a significant decline in LLM performance over time, suggesting their success on older examples may stem from pre-training exposure rather than true reasoning ability. Our work presents a scalable approach to creating and maintaining large-scale, high-quality datasets for advanced math reasoning, offering valuable insights into the capabilities and limitations of LLMs in this domain. Our benchmark is available at [livemathbench.github.io/leaderboard](https://livemathbench.github.io/leaderboard).
[ "Mathematical Reasoning", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=Bgz3okeZ7H
https://openreview.net/forum?id=Bgz3okeZ7H
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z2gldwt4sY", "kFZkEmDbQX", "X9V6hvIZwl", "WQp1ceXq0L", "UzPI66pScA", "3gRuevsMHP" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "decision", "official_review" ], "note_created": [ 1730195005595, 1730712790944, 1734899373087, 1730733877527, 1737523606615, 1730671374495 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3915/Reviewer_vxhJ" ], [ "ICLR.cc/2025/Conference/Submission3915/Reviewer_Qqyj" ], [ "ICLR.cc/2025/Conference/Submission3915/Area_Chair_Gmqg" ], [ "ICLR.cc/2025/Conference/Submission3915/Reviewer_sDbk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3915/Reviewer_pSEP" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors prepared an instruct tuning dataset and a benchmark by collecting the math questions from a forum. The key steps of the data curation procedure are mostly clearly described. They also conducted experiments for finetuning a few open-source LLMs.\\n\\nOverall, it is a reasonable work and might be able to provide the community with a useful resource. The technical contribution is not significant because it is more like a software system for data curation.\\n\\nMoreover, the authors did not confirm what data they would release. As the major contribution of the paper, if the prepared datasets (instruct tuning and benchmark) will not be released, their contribution to the community will be significantly undermined.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"In this paper, the authors prepared an instruct tuning dataset and a benchmark by collecting the math questions from a forum. The key steps of the data curation procedure are mostly clearly described. They also conducted experiments for finetuning a few open-source LLMs.\", \"weaknesses\": \"Overall, it is a reasonable work and might be able to provide the community with a useful resource. The technical contribution is not significant because it is more like a software system for data curation.\\n\\nMoreover, the authors did not confirm what data they would release. As the major contribution of the paper, if the prepared datasets (instruct tuning and benchmark) will not be released, their contribution to the community will be significantly undermined.\", \"some_other_points\": [\"There is no discussion regarding the answer correctness in the instruct tuning data.\", \"For decontamination, I think 10-gram or 8-gram is sort of not enough. In Zhuo et al., 2024, 10-gram is used on coding data, but here is math data. They are quite different.\"], \"questions\": \"1. In both training and benchmark, Qwen LLMs are used for rewriting, will this bring in bias?\\n\\n2. When preparing answers for the testing questions, what\\u2019s the breakdown of each step? the boxed answer, and the agreed answers by the rewriting LLMs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Owing to the structured nature of math problem solving that requires not just recall of facts but also, understanding, abstraction and reasoning, it is one of the most challenging tasks for LLMs. While there are existing math datasets like, GSM8K and MATH, they have reached a level of saturation with SOTA models and are now susceptible to contamination. 
Newer datasets, like, OlympiadBench and OmniMath temporarily mitigate the above problems but remain susceptible to these issues as LLMs continue to evolve. Moreover, creation of these datasets, especially complex Olympiad-level problems, at scale is time and cost intensive. Motivated by these challenges, the authors argue for the need for scalable and automated methods to collect high-quality data for Olympiad-level problems to facilitate further advancements in this field. It is also crucial that these evaluation benchmarks be evolving and contain abundant and up-to-date test samples.\\nTowards that the authors leverage the raw AoPS forum data and propose a pipeline to extract questions and solutions leading to AoPS-Instruct, a novel large-scale dataset with 666.1K Olympiad-level math QA pairs. Using the most recent QA pairs, they develop an automatic pipeline that introduces LiveAoPSBench, a contamination-resistant evaluation set. Their experiments on LiveAoPSBench show a declining performance trend over time for various LLMs that improves after SFT on AoPS-Instruct.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"As LLMs continue to progressively evolve, there is a growing need for more complex evaluation benchmarks that challenge the capabilities of these models. The evaluation also needs to be trustworthy and the onus lies equally on those training models as well as those publishing evaluation benchmarks. I think this paper does justice to both these requirements and in that it attempts to address an important aspect of the LLM research and development.\\nIn doing so, the authors take into account the effort and cost involved in building such benchmarks and propose an automated pipeline that is able to produce complex Olympiad-level QA pairs at scale. To the best of my knowledge, the ideas presented here are original. Although the approach leverages community QA data from AoPS forums, the steps involved in curating QA pairs from the raw data are non-trivial.\\nThe writing of the paper is clear and the ideas and the methodology are well presented. The authors also conduct thorough evaluation to justify the complexity and trustworthiness of their benchmark datasets.\", \"weaknesses\": \"Their evaluation dataset currently focuses on boxed answers and excludes proof questions. Although the authors highlight this as their current limitation, I think proof questions form a significant part of the Olympiad-level questions. Excluding them might considerably limit the scope of evaluation of LLMs for their reasoning abilities.\\nExpanding the scope to proof questions would require a thought around evaluation as well. While boxed answers are more amenable to objective evaluation, proof questions might require a more subjective evaluation, potentially leveraging LLMs as a judge.\\nI also found no references to LLM as judge as striking. While it might not be a necessary tool considering the current scope, I think my meta point is that the authors should have touched upon these aspects and shown some evidence of early work / experiments. That would further strengthen the contributions of this work and its application to further the state-of-the-art in LLM research.\", \"questions\": \"Results in Table 2 for AIME24 and AMC23 are inconsistent. 
Why are certain models able to perform better on one vs the other?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces two contributions to mathematical reasoning for LLMs:\\n*AoPS-Instruct* is a dataset of 650,000 Olympiad-level math question-answer pairs derived from the Art of Problem Solving (AoPS) forum, designed for instruction tuning. \\n*LiveAoPSBench*: An evaluation benchmark that updates over time, highlighting declining LLM performance on newer problems. \\nThe paper shows that fine-tuning LLMs on AoPS-Instruct improves reasoning benchmarks and provides insights into pretraining contamination effects.\", \"strengths\": [\"Practical relevance: Several reviewers (sDbk, Qqyj, pSEP) highlight the importance of scalable, high-quality datasets and benchmarks for mathematical reasoning, emphasizing the utility of LiveAoPSBench in reducing contamination and providing up-to-date evaluation.\", \"Focus on contamination: Reviewers acknowledged the importance of benchmarks in addressing contamination, which LiveAoPSBench aims to do by dynamically updating the benchmark\", \"Extensive Experiments: Reviewers (sDbk, pSEP) commend the extensive evaluation and performance improvements shown with AoPS-Instruct.\"], \"weaknesses\": [\"Methodological novelty: The dataset creation process primarily uses existing LLMs for extracting and rewriting forum content (vxhJ).\", \"Data Quality Issues: Reviewers (vxhJ, pSEP) raise concerns about the reliability of the datasets, citing a notable error rate in benchmark annotations and the dependence on rewriting models for training data.\", \"Limited Benchmark Scope: Reviewer Qqyj notes that LiveAoPSBench excludes proof-based questions, which are crucial for fully evaluating mathematical reasoning capabilities.\", \"Overall, the practical contributions and insights into dataset contamination and model degradation are acknowledged by multiple reviewers. However, while during discussions some of the concerns were resolved, issues regarding quality of the dataset and the methodological rigor and novelty remain unresolved (even after internal discussion).\"], \"additional_comments_on_reviewer_discussion\": \"Data Quality: The authors addressed concerns raised by pSEP and vxhJ by refining prompts and analyzing errors, which improved human agreement from 88% to 91%. However, reviewers (pSEP) suggest further efforts to improve accuracy and perform detailed error analyses for reliability in mathematical tasks.\", \"exclusion_of_proof_questions\": \"The authors acknowledged this limitation raised by Qqyj but justified it as out of scope for the current work while expressing interest in future extensions.\", \"perceived_contribution\": \"Reviewer vxhJ maintained that the paper\\u2019s primary contribution\\u2014dataset creation\\u2014lacked sufficient technical novelty. 
The authors argued that similar works, like Llemma, have been accepted to ICLR, and such works also target finetuning models and releasing math-specific datasets.\\nHowever, it could be argued that Llemma had a significantly larger scope, by pretraining large-scale models, releasing large-scale training data that was a collective effort of curation and synthetic generation, incorporating tool usage, as well as matching the performance of the proprietary math model (Minerva), while providing many additional analyses.\"}", "{\"summary\": \"This paper created a dataset in one of the important areas of mathematical reasoning, IMO problem solving, where LLMs currently struggle. Such an open-source dataset with 650K samples and a benchmark can be a very important contribution to the community.\\n\\n1. They presented the dataset creation pipeline, which comes with quality filtering and solution rewriting.\\n2. They also presented the automated pipeline for LiveAoPSBench for evaluating recent LLMs while avoiding contamination.\\n3. They conduct experiments with SFT over open-source models to demonstrate the presented dataset.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The dataset can be a very important contribution to the community; both the training set and LiveBench are ready.\\n2. The pipeline can be used for other domains/scenarios to create something similar and essential for current LLMs.\\n3. The instruction-tuning performance clearly shows that the dataset is useful for improving performance on IMO.\", \"weaknesses\": \"1. It\\u2019s again very difficult to guarantee the quality of the data. How are we going to update the data performance time by time? Otherwise, the bad data will still affect the performance anyway.\", \"questions\": \"As mentioned in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work presents a new olympiad-level math problem dataset with two main features: (1) a training set for instruction fine-tuning in math problem solving (AoPS-Instruct), and (2) an evaluation dataset creation pipeline that can periodically update test examples (LiveAoPSBench). The manuscript provides a clear and detailed description of the dataset creation process, including steps taken for data contamination prevention and quality control. The experiments conducted demonstrate the effectiveness of AoPS-Instruct in fine-tuning relatively small LLMs and highlight the characteristics of LiveAoPSBench.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The evolving nature of the proposed automatic pipeline, LiveAoPSBench, is important as it reduces the risk of data contamination.\\n\\n2. The dataset creation process is clearly documented, with specific steps taken for quality control. Experimental results also demonstrate the effectiveness of the training set (in terms of its success in fine-tuning small LLMs) and the quality of the test set (as indicated by a high correlation with existing benchmarks).\\n\\n3. The finding that benchmarked LLM performance declines on more recent test examples highlights the need for periodically updated benchmarks.\", \"weaknesses\": \"The dataset creation process relies heavily on LLMs, which may undermine the reliability and usefulness of the proposed datasets and evaluation pipeline, given that LLMs are not always dependable. More specifically,\\n\\n1. 
The QA pairs are extracted by Llama3.1-70B-Instruct. As discussed in Section 4.4, Evaluation Quality Assessment, human annotators found 8% of the annotations to be incorrect and 4% to fall under the no-answer category, resulting in a combined error rate of 12%. Such noise can be problematic, especially when evaluating state-of-the-art models whose error rates may not be significantly higher. Additionally, a 91% agreement rate between human annotators is reported. While it is understandable that olympiad-level problems are challenging, as explained in the manuscript, this still indicates a degree of unreliability in human evaluation itself, as mathematical problems should ideally have objective correctness. This suggests that the actual error rate of the dataset construction pipeline might be higher, considering the noise in human annotation.\\n\\n2. The step-by-step solutions in the training set are rewritten by Qwen 2.5 72B, so it is likely that the performance of models fine-tuned on this training set will be limited by the performance of Qwen 2.5 72B. This makes the training set more suitable for use in a distillation setting to fine-tune smaller models. However, it might be less effective for training larger models. Notably, the training experiments conducted are also on much smaller models, which aligns with a distillation approach.\", \"questions\": \"Regarding the \\\"Weaknesses\\\" section, what caused disagreement among human annotators when verifying answer correctness? Was it due to (1) mistakes by some annotators, (2) uncertainty among annotators, or (3) inherent subjectivity in assessing correctness? (These factors might also have contributed together).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BgxsmpVoOX
Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance
[ "Dongmin Park", "Sebin Kim", "Taehong Moon", "Minkyu Kim", "Kangwook Lee", "Jaewoong Cho" ]
State-of-the-art text-to-image (T2I) diffusion models often struggle to generate rare compositions of concepts, e.g., objects with unusual attributes. In this paper, we show that the compositional generation power of diffusion models on such rare concepts can be significantly enhanced by the Large Language Model (LLM) guidance. We start with empirical and theoretical analysis, demonstrating that exposing frequent concepts relevant to the target rare concepts during the diffusion sampling process yields more accurate concept composition. Based on this, we propose a training-free approach, R2F, that plans and executes the overall rare-to-frequent concept guidance throughout the diffusion inference by leveraging the abundant semantic knowledge in LLMs. Our framework is flexible across any pre-trained diffusion models and LLMs, and can be seamlessly integrated with the region-guided diffusion approaches. Extensive experiments on three datasets, including our newly proposed benchmark, RareBench, containing various prompts with rare compositions of concepts, R2F significantly surpasses existing models including SD3.0 and FLUX by up to 28.1%p in T2I alignment. Code is available at https://github.com/krafton-ai/Rare-to-Frequent.
[ "Text-to-image", "Diffusion", "Large Language Models" ]
Accept (Spotlight)
https://openreview.net/pdf?id=BgxsmpVoOX
https://openreview.net/forum?id=BgxsmpVoOX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzuXWPfp5z", "x2D2KPzmTQ", "t8NDm1BoVz", "rWA45EEeit", "lMlJW8F9BH", "iBmZi3nEtC", "cBA8wJrxK3", "Zsq23kHjR1", "W0VqfqgqE2", "HTKAxVgb8L", "FqZWk86rZA", "CNN8CcaOJH", "CHbRBIgzc4", "AsHJJVXUIl", "8LPQRItSjS", "7VTwt1sklx", "4PPKOdCV0q", "4JO2JRLt5W", "2u2UC0MT7E" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730630675443, 1732416535434, 1730103371019, 1732423407340, 1732040667294, 1732472029082, 1732030067127, 1732029551030, 1732030239784, 1732029752286, 1734606374389, 1730284385362, 1737523465414, 1730349792504, 1732030478313, 1732102495559, 1732030371942, 1732030592922, 1732068710750 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_p5vc" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_7ZtE" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_zTAb" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_p5vc" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_zqku" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Area_Chair_WDCX" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_zqku" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_zTAb" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Reviewer_7ZtE" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ], [ "ICLR.cc/2025/Conference/Submission1711/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies how to perform rare concept image generation with current pre-trained diffusion models. The authors leverage the LLMs to extract the rare concept and rewrite the prompt, and then perform rare-to-frequent guidance with the rewritten prompts across the multi-step denoising generating process. Abudent theoretical and empirical analyses are provided to validate the effectiveness of the method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The proposed rare-to-frequent prompt rewrite is novel and effective in terms of generating rare-concept-images.\\n2) The empirical results looks promising.\\n3) Solid empirical results are provided to validate the effectiveness of the method.\\n4) A new benchmark, RareBench, is provided to facilitate research in the task of rare-concept-image-generation.\\n5) Code and detailed implementation is provided to ensure the reproducibility of the method.\", \"weaknesses\": \"(1) The method requires alternating among a set of prompts during denoising process, which makes multiple step inference inevitable. Therefore, this design might not work well with current state-of-the-art acceleration methods, which reduce the number of denoising steps to 4 steps or even less.\\n\\n(2) There is a small gap between the theoretical analysis and the empirical method. 
For the theoretical analysis, the author study the scenarios of linearly interpolation of scores produced by different prompts. While for empirical results, the author performs alternating prompts across different denoising steps.\", \"questions\": \"Please see weakness (1). The reviewer is curious about how can we apply the proposed method on accelerated version of diffusion models such as consistency model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nWe would appreciate your letting us know if our response has addressed your concerns.\\nThank you for your time and effort in the rebuttal period. \\n\\nBest regards\"}", "{\"summary\": \"The paper introduces an innovative method for compositional generation of rare concepts. It demonstrates both theoretically and empirically that incorporating frequent concepts related to the target rare concepts leads to more accurate compositions. Building on this analysis, the Rare2Frequent (R2F) approach is presented, which strategically guides the transition from rare to frequent concepts during diffusion inference by utilizing the extensive semantic knowledge available in large language models (LLMs). R2F undergoes comprehensive evaluation, both qualitatively and quantitatively, achieving state-of-the-art results on multiple benchmarks, along with the introduction of a new benchmark for rare compositions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is clearly written and well-organized.\", \"The results, both qualitative and quantitative, are impressive.\", \"Although the concept of transferring knowledge from frequent to rare concepts has been explored in the context of domain adaptation and long-tail learning [1,2,3], its application in diffusion models for image generation is novel.\", \"A significant new benchmark, RareBench, is introduced to assess the generation of rare concept compositions.\", \"The proposed approach is applied to various diffusion models (SD3.0, Flux, RPG, and region-guided diffusion), demonstrating its effectiveness.\", \"[1] Parisot., et al. (2022) Long-tail Recognition via Compositional Knowledge Transfer.\", \"[2] Samuel., et al. (2020) From Generalized zero-shot learning to long-tail with class descriptors.\", \"[3] Jing., et al. (2021) Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation.\"], \"weaknesses\": \"My primary concern is that what is deemed rare for the diffusion model may not be considered rare for the LLM. Since the LLM lacks access to the training distribution of concepts used by the diffusion model, it may substitute rare concepts with other rare ones. Providing the LLM with the concept distribution from LION could enhance the results. This distribution has been published by [1].\\n\\n[1] Samuel., et al. (2024) Generating images of rare concepts using pre-trained diffusion models. (https://github.com/dvirsamuel/SeedSelect/tree/main)\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your effort and feedback\", \"comment\": \"Thanks for your effort in the rebuttal and clarification. I will keep my score.\"}", "{\"comment\": \"Thanks for the added experiments and clarification, which have resolved my concerns. 
I have increased the score accordingly.\"}", "{\"title\": \"Official Comment by Reviewer zqku\", \"comment\": \"Thank you for your detailed response. After reading all the replies, I plan to keep my score for acceptance. I appreciate this thoughtful discussion.\"}", "{\"title\": \"Author's Response to Reviewer zTAb\", \"comment\": \"We sincerely appreciate the reviewers' constructive comments and positive feedback on our manuscript.\\n\\n`W1. The current design for the scheduling of the selection of frequent and rare compositions of concepts is a bit ad-hoc. You always use frequent composition at the beginning and then start randomly selecting of composition after a fixed point. Based on your theoretical analysis, any additional guidance can be included or used to determine the selection of composition of concepts?`\\n\\n\\nWe thank the reviewer for your constructive comments. We acknowledge your concerns but respectfully argue that our scheduling approach is not ad-hoc but carefully designed for two reasons.\\n**(1) Leveraging abundant knowledge in LLMs**. To ensure *careful* scheduling, we leverage the strong zero-shot ability of LLMs to extract \\\"visual detail level\\\" required to draw each concept and use it for concept guidance, based on prior observations that diffusion models determine rough visual features during the early sampling steps and detailed visual features in the later steps (as explained in lines 237-240). This LLM-guided scheduling approach provides an appropriate concept guidance schedule across prompts with various semantics.\\n**(2) Consistent performance improvement across diverse concept categories.** With our careful scheduling, R2F significantly improves the composition performance across diverse concept categories, including property, shape, texture, etc, outperforming fixed scheduling as shown in Figure 9.\\n\\n\\n`W2. From your example, each rare composition has only two concepts. How do you generalize your approach to more complicated and rare composition (3 or more concepts, such as adj. + adj. + noun, e.g., an agent rabbit with a gun in a casual suit )`\\n\\n\\nThanks for your careful comments. RareBench already includes the complicated rare composition cases (as the '*complex*' case), consisting of three or more concepts, and R2F still exhibits superior performance on such complex cases as shown in Table 6. Specifically, looking at Figure 6, there is an example \\\"A horned bearded spotted raccoon smiling\\\" from the complex case, and R2F successfully generates the image that accurately follows the prompt. Technically, given examples such as \\\"adj1 + adj2 + noun\\\", R2F finds a noun that more frequently appears in the context of \\\"adj1 + adj2\\\", and uses it for frequent concept guidance.\\n\\n\\n`W3. Have you tried to use rare components at the beginning and then use frequent instead? The intuitive explanation for using frequent one first is needed. It will be good if you have relevant experimental results.`\\n\\n\\nAs per your suggestion, we generated images using rare concepts at the beginning and then used frequent concepts at the last steps instead. The generated images can be found in Figure 17 of Appendix L. Overall, the generated image tends to align more closely with the frequent prompt rather than the original rare prompt. This is because diffusion is a step-wise denoising process where the generated image depends more on the lately used prompt. 
Therefore, it is advisable to guide the process by using frequent concepts first and rare concepts last.\\n\\n- **We included these results in Section L of the revised manuscript.**\\n\\n`W4. In the real world, both rare and frequent composition of concepts are considered in generation. Then, the method that improves the quality of rare compositions should not hurt the quality of frequent composition. Without manual determination, how can your approach still maintain a high generation of frequent composition. In other words, it will be good if you can provide discussion on how your method could be adapted to automatically handle both rare and frequent compositions without manual intervention.`\\n\\n\\nThanks for your constructive comments. Our R2F framework **leverages the LLM** to determine whether each decomposed sub-prompt has a rare concept and needs frequent concept guidance **without requiring manual intervention**. For example, as shown in Figure 4, for the sub-prompt \\\"an awful snake\\\" (denoted as $c^2$), the LLM determines it has no rare concepts, and thus, R2F guidance is not applied for this sub-prompt. As a result, on the **T2I-compbench**, containing a high proportion of frequent concepts, R2F **still shows superior composition performance** indicating that it can effectively control the quality of frequent concept composition. Furthermore, with the region-guided R2F+, we can ensure that addressing rare compositions does not compromise the quality of frequent compositions.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely appreciate all the reviewers' positive feedback and valuable comments. Most reviewers agreed that (1) **the observation and methodology are novel** (All reviewers), (2) **empirical results are solid and promising** (All reviewers), (3) **the presentation is clear and reproducible** (Reviewer p5vc, zqku, and 7ZtE), and (4) **the proposed benchmark looks significant** (Reviewer p5vc and 7ZtE). During the rebuttal, we addressed the reviewers' remaining concerns by providing clarifications with additional experimental results (including two additional latest diffusion backbones, three image quality metrics, GPU efficiency analysis, and more visualizations; see the revised PDF file). We hope that the remaining concerns are successfully addressed by the rebuttal and are happy to answer more questions during the discussion period.\\n\\n---\\n\\nWe thank all reviewers for their constructive comments and insightful suggestions. We have uploaded the final revised manuscript, which includes the following modifications and improvements:\\n\\nMajor (enhanced practical applicability)\\n- Integration with two more recent backbones: IterComp (in Section 4.3) and FLUX (in Appendix K)\\n- Faster inference; supporting 4-step inference with FLUX integration (in Appendix K)\\n- GPU time and memory analysis (in Appendix O)\\n- Discussion for applications (in Appendix P)\\n\\nBesides\\n- Clarification of the connection between theory and method (in lines 213-215 of Section 3.2)\\n- Three image quality scores (in Appendix N)\\n- LLM prompting study using LAION dataset (in Appendix M)\\n\\nChanges are highlighted in blue. \\nThanks again for the constructive efforts in the comments and reviews.\\n\\nAuthors\"}", "{\"title\": \"Author's Response to Reviewer zqku (1/3)\", \"comment\": \"`W1. For applications, rare concept composition generation is still a relatively niche area, although I acknowledge that it is indeed a novel task within compositional generation. 
Have you considered exploring a broader range of application scenarios?`\\n\\n\\nThis is an excellent question. Rare concept composition is essential in various applications that require the creation of **creative content**, such as designing characters and posters for comics, movies, and games. Creators in these domains should often produce content that has never existed, such as characters with elemental faces (e.g., fire or water), pirate ship-shaped spaceships, or trumpet-like guns. Therefore, rare concept composition could be considered a mainstream area for these creators.\\n\\nFurthermore, our idea of frequent concept guidance can potentially be **extended to other modalities**, such as text-to-speech (TTS) [1,2,3] and text-to-music [4]. For instance, TTS models have concept categories including speaker, intonation, and emotion. When generating speech such as \\\"an angry Trump speaking Catalan\\\", we might expose frequent concepts such as \\\"an angry Spanish speaking in Catalan\\\" to improve the composition performance of diffusion-based TTS models.\\n\\nThus, we believe that rare concept composition has a broader range of application scenarios.\\n\\n- **We included this discussion in Section P of the revised manuscript.**\\n\\n---\\n\\n[1] AudioLDM: Text-to-Audio Generation with Latent Diffusion Models, ICML, 2023\\n\\n[2] Natural language guidance of high-fidelity text-to-speech with synthetic annotations, ArXiv, 2024\\n\\n[3] DiTTo-TTS: Efficient and Scalable Zero-Shot Text-to-Speech with Diffusion Transformer, ArXiv, 2024\\n\\n[4] Simple and Controllable Music Generation, NeurIPS, 2023\\n\\n\\n\\n`W2. For the computational cost, this paper adopts an approach similar to LMD to enhance R2F, resulting in R2F+, which involves substantial latent and gradient computations. A detailed comparison of computational and memory overhead with other methods is essential to assess the feasibility of the proposed approach.`\\n\\nWe thank the reviewer for helping us improve our paper. The table below compares the A100 GPU time and memory required to generate a prompt \\\"*a horned lion and a wigged elephant*\\\", which consists of *two* rare concepts \\\"*horned lion*\\\" and \\\"*wigged elephant*\\\". \\n\\n| Models | SD3 | R2F | R2F+ |\\n| ---------------- | ----- | ----- | ----- |\\n| Peak Memory (GB) | 31.52 | 31.76 | 35.08 |\\n| GPU Time (sec) | 20.04 | 20.60 | 72.37 |\\n\\n\\n**Peak memory.** While R2F+ involves several latent and gradient computations, there is no significant difference in peak memory compared to R2F since it follows a sequential process. R2F requires approximately 31GB of peak memory, and R2F+ requires approximately 35GB of peak memory where an additional 4GB mostly comes from the gradient computations in cross-attention control.\\n\\n**GPU Time.** The time taken for R2F+ is approximately 72 sec, which can be decomposed as 1) masked latent generation via object-wise R2F takes around 42 sec, and 2) region-controlled concept guidance takes around 30 sec. Specifically, for the process of 1), the generation of each object takes around 20 sec (same as R2F) with an additional 1 sec for masking, resulting in a total of 42 sec for two objects. For the process of 2), the majority of the increased computation time compared to R2F is attributed to attention control, which adds around 10 sec. Consequently, for a prompt with N objects, the time complexity of R2F+ is expected as N*(T+1)+(T+10), where T is the inference time of R2F or SD3. 
\\n\\nThus, in our experiments, we generated each image **within tens of seconds to a few minutes on a single 40GB A100 GPU, which is feasible.** \\n\\n- **We added this efficiency study in Section O of the revised manuscript.**\"}", "{\"title\": \"Author's Response to Reviewer p5vc\", \"comment\": \"`W1. The method requires alternating among a set of prompts during the denoising process, which makes multiple-step inference inevitable. Therefore, this design might not work well with current state-of-the-art acceleration methods, which reduce the number of denoising steps to 4 steps or even less.`\\n\\n\\nWe thank the reviewer for helping us improve our paper. To address your concern, we conducted additional experiments using FLUX-schnell, one of the state-of-the-art diffusion models that can generate high quality images in 4 steps, as the backbone for R2F with 4 steps of denoising process. \\n\\nThe core idea of R2F is that by **exposing frequent concepts** to the diffusion sampling process its rare concept composition performance can be significantly enhanced. Prompt alternating is one design choice, which is highly effective (in terms of T2I alignment) and efficient (as only one prompt, either rare or frequent, is used for each step) for long guidance length. However, as you pointed out, the efficacy of the prompt alternating may diminish for short guidance length. In this case, we recommend using the Composable approach (detailed in Section 4.4) which blends rare and frequent concepts within the text embedding as an alternative approach.\\n\\n**Configuration.** For the short guidance length of 4 steps, we adjusted the interpolation configuration of the Composable method. For concepts with a visual detail level from 1 to 3, we applied the composable method only for the first step, while for those with a visual detail level from 4 to 5, we applied it to both the first and second steps. In the final third and fourth steps, only the original rare prompt was exposed. We set the blending factor of $\\\\alpha$ to 0.3.\\n\\n**Result.** The image generation results are presented in the table below, and the **generated images are added in Figure 16 of Appendix K in the revised manuscript**. Similar to the original results with a longer guidance length, the T2I alignment performance of FLUX-schnell improved in most cases when R2F was applied. Therefore, the frequent concept exposure idea of R2F can be generalized to the latest acceleration methods with 4 steps, which can have a broader impact on many real-world applications where fast inference time is crucial.\\n\\n| RareBench | Property | Shape | Texture | Action | Complex |\\n| -------------------------------- | ---- | ---- | ---- | ---- | ---- |\\n| FLUX.1.schnell (4 steps) | 72.5 | 68.1 | 49.3 | 61.2 | 73.7 |\\n| R2F_flux.1.schnell (4 steps) | 78.7 | 75.0 | 56.8 | 67.5 | 68.7 |\\n\\n\\n- **We included these results in Section K of the revised manuscript**.\\n\\n\\n\\n`W2. There is a small gap between the theoretical analysis and the empirical method. For the theoretical analysis, the author studies the scenarios of linear interpolation of scores produced by different prompts. For empirical results, the author performs alternating prompts across different denoising steps.`\\n\\n\\nThanks for your careful comments. Our theory assumes a score estimator for image sampling, which does not include multi-step denoising. 
For diffusion models with multi-step denoising, both the linear interpolation (of latents or text embeddings) and alternating prompts can be regarded as a way of interpolation. Table 6 empirically compares the effectiveness of these design choices, and the alternating approach is the most effective in multi-step denoising. Additionally, the alternating approach is the most efficient because it only requires to use either the rare or frequent prompt at each step. Nevertheless, both the linear interpolation and the alternating approach are more effective than the vanilla SD3.0. \\n\\n- **We clarified this explanation in lines 213-215 in Section 3.2 of the revised manuscript.**\"}", "{\"metareview\": \"This paper proposes an interesting approach for compositional image generation with rare concepts. Given the text prompt with rare concepts, the LLM first decomposes the prompt into regions and then maps rare concepts into frequent ones, which further guides the diffusion sampling process. This paper receives four positive reviews. Reviewers clearly acknowledge the significant contribution, the interesting and inspiring observations, the novel idea, and the solid experiments. Reviewers questions on design choices, experiments, and others are adequately addressed during the rebuttal. All reviewers responded to the rebuttal and either increased the scores or kept the original positive scores. Therefore, AC would recommend acceptance (spotlight).\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised questions on design choices, experiments, etc. Most of them are adequately addressed in the rebuttal, and all reviewers responded that they are satisfied with the rebuttal.\"}", "{\"summary\": \"This paper deals with generating rare compositions of concepts, which is challenging for existing compositional generation methods. The authors propose Rare-to-Frequent (R2F), which utilizes LLMs to plan and execute the overall rare-to-frequent concept guidance throughout the diffusion inference. The paper improves R2F with the layout guidance to achieve more precise spatial-aware generation. Moreover, a new benchmark RareBench is proposed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and easy to follow.\", \"The method is training-free. Experimental results show that R2F outperforms previous models on various metrics.\", \"It brings a new task to compositional generation or text-to-image generation.\"], \"weaknesses\": [\"For applications, rare concept composition generation is still a relatively niche area, although I acknowledge that it is indeed a novel task within compositional generation. Have you considered exploring a broader range of application scenarios?\", \"For the computational cost, this paper adopts an approach similar to LMD to enhance R2F, resulting in R2F+, which involves substantial latent and gradient computations. A detailed comparison of computational and memory overhead with other methods is essential to assess the feasibility of the proposed approach.\"], \"questions\": [\"I\\u2019m not entirely clear on the specific rules LLMs use to determine the \\u201cvisual detail level.\\u201d In your writing, this measure is used in alternating concept guidance to set the guidance length for rare and frequent prompts, with more challenging rare concepts requiring extended guidance. 
However, LLMs lack knowledge of diffusion priors, which would inform the difficulty associated with generating certain objects or attributes.\", \"The example you give in Figure 4, where \\\"plants made of glass\\\", I don't think it is a frequent concept. Furthermore, in the initial stages of denoising, diffusion models primarily focus on generating rough visual features (e.g., shape, location). Consider the concept of \\u201cfurry\\u201d; both \\u201cfurry bird\\u201d and \\u201cfurry tiger\\u201d are frequent concepts LLMs may output, yet there is a significant difference in the size and shape of these objects, which has a notable impact on the generated result. Thus, I question whether LLMs can reliably provide suitable frequent concepts.\", \"Is the design of R2F+ necessary\\uff1f In fact, layout-based methods have outstanding spatial awareness, however, the trade-off is increased computational cost and a decline in image quality (in terms of detail, aesthetics, etc.). First, you need to conduct a comparative evaluation of R2F+ in terms of image quality. Additionally, as noted in Table 3\\u2019s T2I-CompBench, R2F achieves higher spatial metrics than both the layout-based method LMD and the LLM-based method RPG. Thus, expanding R2F to a layout-based approach may be unnecessary, as it would only improve spatial performance while significantly compromising image quality.\", \"You can consider using the IterComp[1], which is a backbone specifically designed for compositional generation and may lead to a more significant performance improvement.\", \"I will revise my rating according to the author's feedback and the reviewer's discussion.\", \"[1] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"This paper has no ethical concerns.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"summary\": \"This paper examines diffusion-based image generation for objects with unusual attributes, which is termed as rare composition of concepts and pretty common in art design. Current methods are struggle to accurately generate images from rare and complex prompts. To solve this question, this approach effectively utilizes the correlation between frequent and common composition. Specifically, in the early stage of the reverse process, the frequent composition is used to guide noise prediction where the rare one is used. In this way, the frequent one is used to provide good initialization for the final generation. This method is training free with both theoretical analysis and experimental validation provided. An advanced version of the region-based generator is also proposed.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1.\\tThe observation in alternating prompts in diffusion-based models are important.\\n2.\\tBoth global and region-based generation are proposed.\\n3.\\tDetailed visualization are provided.\", \"weaknesses\": \"1.\\tThe current design for the scheduling of the selection of frequent and rare composition of concepts is a bit ad-hoc. You always use frequent composition at the begining and then start randomly selection of composition after a fixed point. 
Based on your theoretical analysis, any additional guidance can be included or used to determine the selection of composition of concepts?\\n2.\\tFrom your example, each rare composition has only two concept. How do you generalize your approach to more complicated and rare composition (3 or more concepts, such as adj. + adj. + noun, e.g., an agent rabbit with a gun in a casual suit ).\\n3.\\tHave you tried to use rare components at the beginning and then use frequent instead? The intuitive explanation for using frequent one first is needed. It will be good if you have relevant experimental results. \\n4.\\tIn real world, both rare and frequent composition of concepts are considered in generation. Then, the method that improves quality of rare composition should not hurt the quality of frequent composition. Without manual determination, how can your approach still maintain high generation of frequent composition. In other words, it will be good if you can provide discussion on how your method could be adapted to automatically handle both rare and frequent compositions without manual intervention.\", \"questions\": \"Please address my questions in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author's Response to Reviewer zqku (3/3)\", \"comment\": \"`Q3. Is the design of R2F+ necessary\\uff1f In fact, layout-based methods have outstanding spatial awareness, however, the trade-off is increased computational cost and a decline in image quality (in terms of detail, aesthetics, etc.). First, you need to conduct a comparative evaluation of R2F+ in terms of image quality. Additionally, as noted in Table 3\\u2019s T2I-CompBench, R2F achieves higher spatial metrics than both the layout-based method LMD and the LLM-based method RPG. Thus, expanding R2F to a layout-based approach may be unnecessary, as it would only improve spatial performance while significantly compromising image quality.`\\n\\n**Image Quality.** Per your suggestion, we additionally measured three popular image quality scores (e.g., LAION-aesthetic [5], PickScore [6], and ImageReward [7]) of R2F+ for multi-object cases in RareBench and compared it to R2F. As shown in the table below, there was no significant difference in image quality scores.\\n\\n| Image Quality Scores | LAION-aesthetic | PickScore | ImageReward |\\n| -------- | -------------- | -------------- | -------------- |\\n| R2F | 3.980 +- 0.361 | 0.226 +- 0.009 | 0.626 +- 0.029 |\\n| R2F+ | 3.887 +- 0.353 | 0.222 +- 0.009 | 0.609 +- 0.033 |\\n\\n- **We added this quality analysis in Section N of the revised manuscript.**\\n\\n**Necessity of R2F+.** R2F+ is useful in many applications where the layout-aware composition is very important. **(1) Layout-aware image/poster design.** When a user creator wants to place an object in a specific position within an image for poster design, R2F is insufficient because it cannot adjust absolute positions. In such cases, R2F+ becomes essential. **(2) Data synthesis for enhancing spatial understanding of foundation models.** Recent multi-modal LLMs (e.g., LLaVA [8]) and pre-trained VLMs (e.g., CLIP [9]) are known to exhibit weaknesses in spatial understanding [10], so several studies have attempted to enhance their performance with spatiality-aware image synthesis (e.g., generating images that accurately captures spatial information in text prompts). 
R2F+ has the potential to enhance the performance of these foundation models by serving as a data synthesis method, as spatial composition is more critical than image quality in this case.\\n\\n- **We added this discussion in Section P of the revised manuscript.**\\n\\n---\\n[5] Laion Aesthetic Predictor. https://github.com/LAION-AI/aesthetic-predictor, 2022\\n\\n[6] Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation, NeurIPS, 2023\\n\\n[7] Imagereward: Learning and Evaluating Human Preferences for Text-to-image Generation, NeurIPS, 2024\\n\\n[8] Visual Instruction Tuning, NeurIPS, 2024\\n\\n[9] Learning Transferable Visual Models From Natural Language Supervision, ICML, 2021\\n\\n[10] SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities, CVPR, 2024\\n\\n\\n`Q4. You can consider using the IterComp[1], which is a backbone specifically designed for compositional generation and may lead to a more significant performance improvement.`\\n\\nThanks for introducing an important relevant work. We conducted additional experiments using IterComp as the backbone for R2F, and the results are in the below table.\\n\\n\\n| Models | Property | Shape | Texture | Action | Complex | Concat | Relation | Complex |\\n| ------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\\n| SDXL | 60.0 | 56.9 | 71.3 | 47.5 | 58.1 | 39.4 | 35.0 | 47.5 |\\n| R2F_sdxl | **71.3** | **71.9** | **73.8** | **54.4** | **70.6** | **50.6** | **36.0** | **52.8** |\\n| **IterComp** | 63.8 | 66.9 | 61.3 | 65.6 | 61.9 | 41.3 | 29.4 | 53.1 |\\n| **R2F_itercomp** | **78.1** | **77.5** | **79.4** | **66.9** | **63.9** | **41.5** | **36.6** | **53.4** |\\n| SD3.0 | 49.4 | 76.3 | 53.1 | 71.9 | 65.0 | 55.0 | 51.2 | 70.0 |\\n| R2F_sd3.0 | **89.4** | **79.4** | **81.9** | **80.0** | **72.5** | **70.0** | **58.8** | **73.8** |\\n\\nOverall, R2F_itercomp **consistently improves** the compositional generation performance of IterComp on Rarebench dataset (i.e., better T2I alignment scores by GPT-4o). These results further demonstrate the flexibility of R2F across the diffusion backbones. R2F_itercomp was generally better than R2F_sdxl but worse than R2F_sd3.0. This is likely because IterComp enhances the SDXL backbone by compositional-aware preference optimization [11], so it might not yet match the generative performance of the more recent SD3.0 backbone. We will include these results in Table 4 of the main paper.\\n\\n- **We included this result in Table 4 of the revised manuscript.**\\n\\n---\\n[11] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation, ArXiv, 2024\"}", "{\"comment\": \"Thanks for the clarification. I have read all the responses. I would like to keep my original score.\"}", "{\"title\": \"Author's Response to Reviewer zqku (2/3)\", \"comment\": \"`Q1. I\\u2019m not entirely clear on the specific rules LLMs use to determine the \\u201cvisual detail level.\\u201d In your writing, this measure is used in alternating concept guidance to set the guidance length for rare and frequent prompts, with more challenging rare concepts requiring extended guidance. However, LLMs lack knowledge of diffusion priors, which would inform the difficulty associated with generating certain objects or attributes.`\\n\\n\\nIt is known that the diffusion denoising process determines rough global features (e.g., shape) in the initial steps and decides detailed local features (e.g., texture) in the later steps. 
Therefore, we used **the degree of locality required to draw each visual concept** as a specific rule for determining the visual detail level. As shown in Table 9 of Appendix B, this rule is carefully reflected in the full LLM instruction as follows:\\n\\n> d. Additionally, please provide how much local visual detail is needed to draw the rare concept on a scale of 1 (minimal detail needed) to 5(local detail essential), and explain why. Please give the score according to the degree of locality used to draw the visual concept.\\n\\nIn addition, we further provide the LLM with in-context examples for each visual detail level from 1 to 5, enabling more precise scoring (See Table 10).\\n\\n\\nAs a result, R2F with a visual detail level-aware stop point consistently outperformed R2F with a fixed stop point across various concept categories (See Figure 9). This demonstrates that the **zero-shot ability of the LLM** to assess the visual detail level of a concept enables more accurate concept guidance even **without the need for diffusion priors**. Nevertheless, as you suggested, reflecting diffusion priors more accurately to the LLM could further enhance compositional generation performance, and we would like to leave this as future work.\\n\\n\\n`Q2. The example you give in Figure 4, where \\\"plants made of glass\\\", I don't think it is a frequent concept. Furthermore, in the initial stages of denoising, diffusion models primarily focus on generating rough visual features (e.g., shape, location). Consider the concept of \\u201cfurry\\u201d; both \\u201cfurry bird\\u201d and \\u201cfurry tiger\\u201d are frequent concepts LLMs may output, yet there is a significant difference in the size and shape of these objects, which has a notable impact on the generated result. Thus, I question whether LLMs can reliably provide suitable frequent concepts.`\\n\\nThanks for your careful review. The goal of R2F is not to obtain concepts that are absolutely frequent, but rather to obtain **relatively** frequent concepts that can yield benefits in concept composition. From this perspective, \\\"plants made of glass\\\" is relatively more frequent than \\\"cactuses made of glass\\\", which can lead to performance improvements. Also, in the full LLM instruction (See Table 9), we prompted LLM to identify frequent concepts that **should be relevant** to the original rare concept as follows:\\n> ...when a rare concept is identified in the input text, you **should** replace it with **relevant yet more frequent** concepts.\\n\\nWith this careful instruction, LLMs can reliably provide suitable frequent concepts. Usually, the generated frequent concepts often contain general terms easier for composition such as \\\"animal\\\" or \\\"object\\\" (See Table 14 of Appendix H for more detailed examples of the generated frequent concepts). As a result, unreliable mappings, such as the substitution of \\\"bird\\\" or \\\"tiger\\\" when the prompt context is unrelated, are unlikely to occur.\\n\\nTherefore, we believe that our approach can **reliably obtain suitable frequent concepts from LLMs with careful instructions**.\"}", "{\"title\": \"Author's Response to Reviewer 7ZtE\", \"comment\": \"We sincerely appreciate the reviewers' constructive comments and positive feedback on our manuscript.\\n\\n`W1. My primary concern is that what is deemed rare for the diffusion model may not be considered rare for the LLM. 
Since the LLM lacks access to the training distribution of concepts used by the diffusion model, it may substitute rare concepts with other rare ones. Providing the LLM with the concept distribution from LION could enhance the results. This distribution has been published by [1].`\\n\\nThis is an excellent question. Because recent diffusion models have been trained on **billion-scale text-to-image datasets**, we could naturally expect that their distributions are closely aligned with LLMs. This may be the reason why our zero-shot LLM guidance performs well in our experiments. Meanwhile, following your suggestion, we provided captions from LAION-400M to assist LLMs in generating a rare-to-frequent concept mapping. \\n\\n**Setup.** Given a rare attribute word (\\\"bearded\\\") in a prompt (\\\"a bearded apple\\\"), we measured the frequency of all the subsequent words in the LAION dataset. For example, if there are 100 captions containing \\\"bearded man\\\" in LAION, the frequency of \\\"man\\\" for the attribute \\\"bearded\\\" is calculated as 100. We then integrated these next-word frequencies into the R2F process in two ways:\\n**(1) R2F with the most frequent subsequent word in LAION.** For each rare concept attribute word, we extract the most frequent subsequent noun from LAION and directly use it for the noun of the frequent concept. For example, if \\\"man\\\" appears most frequently after \\\"bearded\\\", we use \\\"bearded man\\\" as the frequent concept for the original rare concept such as \\\"bearded giraffe\\\".\\n**(2) R2F using Top20 subsequent word frequency in LLM prompt.** In this case, we extract the top 20 frequent subsequent words from LAION, and add them to the LLM instruction for identifying rare-to-frequent concept mapping as follows:\\n\\n> ...When finding frequent concepts for extracted rare concepts, please consider the words that appeared most frequently after the attribute word of the rare concept in the LAION image caption dataset. The list of the top 20 words is as follows and is in the format of ('next word', 'count'). \\\\n EXAMPLES...\\n\\n\\n**Result.** The results for these R2F variants with LAION information are in the table below.\\n\\n| Models | SD3 | R2F | R2F with (1) | R2F with (2)| \\n| ------------------ | ---- | ---- | ---- | ---- |\\n| RareBench_property | 49.4 | 89.4 | 81.3 | 85.9 |\\n\\nWhile we expose the LAION information, the performance of these variants is not higher than our original R2F. This may be due to the low quality of LAION captions (e.g., captions are mostly alt texts that are crawled from the web), and because recent models such as SD3.0 are trained on more high-quality image-caption datasets [1], potentially diverging from the distribution of LAION captions. Indeed, the top 5 subsequent words following \\\"bearded\\\" in LAION were ('man', 8772), ('dragon', 5996), ('collie', 3573), ('iris', 2153), and ('dragons', 1087), showing a discrepancy with common sense knowledge.\\n\\nHowever, we believe that with access to high-quality diffusion training sets, the performance of LLM guidance can be further enhanced, and we will include this finding in the revised manuscript. Again, we greatly appreciate your insightful feedback.\\n\\n- **We added this analysis in Section M of the revised manuscript.**\\n\\n---\\n[1] Improving Image Generation with Better Captions, ArXiv, 2023\"}", "{\"title\": \"Thanks for your positive feedback\", \"comment\": \"We are glad to hear that you are satisfied with our response. 
Again, thank you very much for your insightful comments.\"}" ] }
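The R2F entry above repeatedly describes an alternating rare-to-frequent prompt schedule whose stop point is set by an LLM-assigned "visual detail level" (1–5). The sketch below only illustrates that scheduling idea under stated assumptions: the function name, the linear mapping from detail level to stop step, and the strict alternation pattern are hypothetical and are not taken from the authors' released code.

```python
# Illustrative sketch (not the authors' implementation): build a per-step prompt
# schedule for one rare concept, assuming a linear mapping from the LLM-assigned
# visual detail level (1-5) to the step at which frequent-concept guidance stops.

def r2f_prompt_schedule(total_steps: int, detail_level: int) -> list[str]:
    """Return which prompt ('frequent' or 'rare') to expose at each denoising step."""
    if not 1 <= detail_level <= 5:
        raise ValueError("visual detail level is assumed to be in 1..5")
    # Higher detail level -> frequent guidance is kept for more of the early steps.
    stop_step = round(total_steps * detail_level / 5 * 0.5)  # assumed mapping
    schedule = []
    for step in range(total_steps):
        if step < stop_step:
            # Alternate between the frequent and the original rare prompt early on,
            # so only one prompt is used per step (the "alternating" variant).
            schedule.append("frequent" if step % 2 == 0 else "rare")
        else:
            # After the stop point, only the original rare prompt is exposed.
            schedule.append("rare")
    return schedule


if __name__ == "__main__":
    # e.g. a 50-step sampler and a concept judged to need visual detail level 4
    print(r2f_prompt_schedule(50, 4))
```

For very short samplers (the 4-step FLUX-schnell setting discussed in the responses above), the authors instead recommend blending rare and frequent prompts in the text embedding rather than alternating them step by step.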
BgvAzuCfHc
Self-Supervised Feature Re-Representation via Lennard-Jones Potential Loss
[ "Jianlong Kwan", "Jiayu Xiong", "Dilong Li", "Jing Wang" ]
The Lennard-Jones potential, initially developed to model molecular interactions, is characterized by a repulsive force at short distances to prevent over-clustering and an attractive force at longer distances to maintain balanced proximity, resembling the equilibrium-seeking behavior of particles in natural systems. This offers a potential pathway for more orderly entropy reduction in higher-order features. This paper introduces a self-supervised approach for feature re-representation, utilizing a Lennard-Jones potential loss to constrain the gradient directions between positive and negative features in computer vision tasks. Unlike supervised learning directly driven by downstream tasks or contrastive learning with multi-label data pairs and multi-feature extractors, the proposed loss term integrates with existing task-specific losses by directly constraining gradient directions, thereby enhancing the feature learning process. Extensive theoretical analysis and experimental results demonstrate that, across various domains, datasets, network architectures, and tasks, models incorporating the Lennard-Jones potential loss significantly outperform baseline models without this auxiliary loss in both accuracy and robustness. This approach highlights the potential of physics-inspired loss functions to improve deep learning optimization.
[ "Physics-Inspired Optimization", "Pluggable Self-Supervised Loss", "Lennard-Jones Potential" ]
https://openreview.net/pdf?id=BgvAzuCfHc
https://openreview.net/forum?id=BgvAzuCfHc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ywcKHnbxZQ", "cy4FC1QgTV", "OfSQjrZ8nL", "IV1pvwI2sb", "AoJSL6uI6f" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730223929896, 1730537185912, 1731293468966, 1730668203709, 1732023812127 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1046/Reviewer_qF5S" ], [ "ICLR.cc/2025/Conference/Submission1046/Reviewer_A8HJ" ], [ "ICLR.cc/2025/Conference/Submission1046/Reviewer_a6Wg" ], [ "ICLR.cc/2025/Conference/Submission1046/Reviewer_h8v8" ], [ "ICLR.cc/2025/Conference/Submission1046/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a self-supervised feature re-representation technique using the Lennard-Jones potential inspired by molecular interactions as a loss function, aiming to balance intra-class compactness and inter-class separation in feature space. The method enhances feature clustering without predefined positive-negative pairs. Empirical results on various vision tasks show that the proposed Loss improves performance and robustness in models like ViT and ResNets across classification and segmentation tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The proposed LJ potential seems to provide a novel approach to feature clustering. Empirical validation shows enhanced performance across several architectures and datasets.\", \"weaknesses\": \"1. Writing: Personably, I think that this submission is tedious and verbose. In the main body, many parts, such as Sec. 3.1 and 3.2, can be removed to save space for the content in the appendix, such as more experiments. Also, the logic in the appendix is unclear to me where often I do not understand why some descriptions should appear. I strongly suggest the authors to improve the writing.\\n\\n2. Terms: I do not understand why the proposed loss is \\\"self-supervised\\\". Also, what is \\\"RE-representation\\\"? This term appears 3 times only, including one in the title, one in the abstract, and the one in the conclusion. What exactly does it mean????\\n\\n3. Experiments: In current format (main body + appendix), I am not convinced by the results. Ablation study is missing. The improvements are marginal. There is no state-of-the-art comparison, or comparison with other additive losses, or simply regularizations. I do not see the value of adding such a loss.\", \"questions\": \"See my comments in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces the Lennard-Jones (LJ) potential from thermodynamics to the field of self-supervised re-representation. In particular, a Lennard-Jones is proposed to regularize the downstream task training.\\nThe proposed loss is motivated from physics with repulsive force and attractive force interaction in the design. \\nTheory from physics gives some explanations to overfitting issue. \\n\\nExperiments are conducted on 2D image recognition, and 3D point cloud classification and segmentation tasks, with ViT as the backbone. \\nFrom the results, introducing LJ loss increases the performance of various tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"### A novel method from thermodynamics\", \"This method is well-motivated from physics and thermodynamics. 
The replusive and attractive forces align with the self-supervised pair interactions.\", \"Discussion with contrastive learning method is presented.\", \"Theory of physics try to explain the overfitting issue from a new perspective.\", \"### LJLoss outperforms No LJLoss in most settings\", \"Comparing the introduction of LJLoss and No LJLoss, the performance gains are consistent and large.\", \"Both 2D and 3D tasks (classificaiton and segmentation) benefit from the LJLoss.\"], \"weaknesses\": [\"### What is the connection between LJLoss and the concept of Global and local features?\", \"Sec. 3.2 desribes global and local features in detail. But it is unclear or there is no detailed explanation about the relationship with LJLoss\", \"### Experimental verification is only LJLoss and NoLJLoss\", \"The experiments verify the effectiveness of LJLoss as an effective regularizer on CE loss of downstream finetuning tasks.\", \"However, how about the Contrastive loss? The paper claims the benefits of LJ over contrastive.\", \"As a regularization loss, regularization term weight $\\\\lambda$ is an important hyper-parameter. I did not find ablation on this term.\", \"Same to other hyper-parameters such as $\\\\sigma$ and $\\\\epsilon$.\", \"### Other minor issues\", \"What does SE Loss in Figure 1 mean?\", \"Typos such as left and right quotations, e.g. line 081, line 125, and line 126.\"], \"questions\": [\"What is the relationship betweem Global/Local features and the LJLoss?\", \"Is there any numerical comparison between the contrastive loss and LJLoss?\", \"Hyper-parameter effect of $\\\\lambda$, $\\\\sigma$, and $\\\\epsilon$.\", \"What does SE Loss in Figure 1 mean?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Inspired by the Lennard-Jones potential, this work proposed the corresponding loss function to help computer vision tasks. By setting hyperparameters appropriately, the loss aims to balance intra-class and inter-class distributions. Experiments are conducted on multiple tasks for evaluation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Exploring theoretical results from physics for deep learning is an interesting direction.\\n\\n2. This work considers multiple computer vison tasks, e.g., classification, segmentation, etc.\\n\\n3. Both ViT and ResNet are adopted in evaluation to demonstrate the effectiveness of the proposed loss function.\", \"weaknesses\": \"1. The motivation is unconvincing. In L247, it shows that the system is with zero net force when $r=\\\\sigma$. However, the gradient does not equal to zero in that case and the value of the loss function can be smaller than 0 by minimizing it. Moreover, the analysis below Enq.7 is inconsistent with that below Enq.5. For example, when $r<\\\\sigma$, which means the pair of examples are similar, the analysis for Eqn.5 shows that the repulsive term dominates. On the contrary, that for Eqn.7 states that the attractive forces dominate. According to my understanding, the loss function in Eqn.6 just pushes all pairs to a pre-defined similarity $\\\\sigma$, which is more close to the statement after Eqn. 5 and it cannot help representation learning significantly.\\n\\n2. The data sets for experiments are quite limited. Note that only small data sets, e.g., CIFAR and TinyImageNet, are applied for classification. 
Lager benchmark data sets, e.g., ImageNet, should be included for comparison.\\n\\n3. The effectiveness of the proposed method is not well justified. The performance of baseline with the proposed loss function is even worse than the original baseline as illustrated in Fig. 5. \\n\\n4. The parameter $\\\\sigma$ is crucial for the success of the loss function and can be sensitive for different tasks. Including an ablation study on parameters can be better for demonstration the proposed method.\", \"questions\": \"Please kindly check the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new loss function called LJ loss for self-supervised learning. LJ loss can automatically constrain similar sample features to attract each other and dissimilar samples to stay away from each other. Experimental results show the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"If I have to say the highlight of this article, it is that it proposes a new loss function based on the lennard-jones potential, which automatically constrains similar samples to be close to each other and dissimilar samples to be separated from each other.\", \"weaknesses\": \"1. The motivation for this article can be summarized as the search for a means by which similar samples can be automatically constrained to attract each other and dissimilar samples to stay away from each other. Similar solutions have been proposed by researchers long ago, e.g., in the literature [1]. However, this article does not analyze it comparatively.\\n\\n[1] Song, Zeen, et al. \\\"On the Discriminability of Self-Supervised Representation Learning.\\\" arXiv preprint arXiv:2407.13541.\\n\\n2. The related work section is pretty weak. First, the authors ignore very recent hot network architectures such as transformer and KAN. Second, the authors have a one-sided understanding of self-supervised learning. There are many self-supervised methods that do not require positive and negative samples, such as BYOL, Barlow Twins, and MAE. Finally, this understanding of physically-guided deep learning is also inadequate. For example, the authors do not mention ReduNet, a learning framework based on the Principle of Maximizing Rate Reduction.\\n \\n3. Section THEORY is over-claimed. This chapter deals with the general form and characterization of the Lennard-Jones potential and contains no new concepts, definitions, and theorems. At the same time, the authors do not give the logical relationship between Section 3.2 and Section 3.3, and it seems that Section 3.2 is redundant.\\n\\n4. As shown in SimCLR, MoCo, BYOL, and Barlow Twins, we can see that self-supervised learning not only performs well on tasks similar to the training data, but also on transfer tasks. There are often significant differences between different tasks, and more mining of valid information in the training task may lead to overfitting of the training task, thus affecting the transferability of the self-supervised learning method. Does the LJ loss proposed in this paper reduce feature mobility? If not, please conduct the corresponding theoretical analysis.\", \"questions\": \"1. In Lines 288-293 is confusing. 
It is always known that for classification tasks, the larger the distance between classes, the more helpful it is for classification, and this idea is also the core idea of SVM. But the authors go on to emphasize that classification problems should focus only on intra-class distances.\\n\\n2. The authors repeatedly emphasize the problems in self-supervised learning, but the experimental design does not include a comparison with self-supervised learning methods, which is puzzling.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you for the reviewers' professional comments. The theoretical foundation of this work is not solid, and the experiments are insufficient. Therefore, we need to withdraw the manuscript. We apologize for any inconvenience caused.\"}" ] }
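Several of the reviews in the entry above (e.g., the objection about zero net force at r = σ) hinge on the form of the Lennard-Jones potential. For reference only, the standard textbook 12-6 form and its stationary point are given below; the paper's actual loss (its Eqns. 5–7, as cited by the reviewers) may be parameterized differently.

```latex
% Standard 12-6 Lennard-Jones potential (textbook form, not necessarily the paper's loss)
V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad V_{\mathrm{LJ}}(\sigma) = 0,
\qquad \left.\frac{dV_{\mathrm{LJ}}}{dr}\right|_{r=2^{1/6}\sigma} = 0,
\qquad V_{\mathrm{LJ}}\!\left(2^{1/6}\sigma\right) = -\varepsilon .
```

In this form the force vanishes at r = 2^{1/6}σ rather than at r = σ (where the potential is zero but the force is still repulsive), which is the distinction underlying the first weakness raised by reviewer a6Wg above.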
BgcapX9ers
Hierarchical Object-Oriented POMDP Planning for Object Rearrangement
[ "Rajesh Devaraddi Mangannavar", "Alan Fern", "Prasad Tadepalli" ]
We present an online planning framework for solving multi-object rearrangement problems in partially observable, multi-room environments. Current object rearrangement solutions, primarily based on Reinforcement Learning or hand-coded planning methods, often lack adaptability to diverse challenges. To address this limitation, we introduce a novel Hierarchical Object-Oriented Partially Observed Markov Decision Process (HOO-POMDP) planning approach. This approach comprises (a) an object-oriented POMDP planner generating sub-goals, (b) a set of low-level policies for sub-goal achievement, and (c) an abstraction system converting the continuous low-level world into a representation suitable for abstract planning. We evaluate our system on varying numbers of objects, rooms, and problem types in AI2-THOR simulated environments with promising results.
[ "rearrangement", "POMDP", "planning", "reinforcement learning", "object search" ]
Reject
https://openreview.net/pdf?id=BgcapX9ers
https://openreview.net/forum?id=BgcapX9ers
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wicTpuWn0h", "wVOvnyHWgQ", "mhsthJu0Sd", "jkn9Rd4CyT", "jYZx9r5h3p", "hxxhPhwjBo", "etipbkESpj", "dZK5sbU1xq", "cRZEvAmw8I", "cDXWTS8zS3", "abqz8Q3Ejl", "aOV9EDpQC3", "XmyOaXXs3L", "Vvvk5bjIu3", "RlaAEOeS10", "RYSaOmmkdB", "QalVjOjzpZ", "HaGKg4JQmf", "Fsl0BPZFmK", "DhEI14ERUv", "BWa78kx2gy", "552dDg50DI", "4zfPZUIh2W", "0ebIB4r2wi" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732694618842, 1732592783019, 1732642583175, 1732788174058, 1730705077007, 1730690029143, 1732641814528, 1737524264166, 1731845193908, 1731845405262, 1731845047616, 1731845455774, 1730679606171, 1731845157217, 1731845344783, 1732642138605, 1731844940753, 1732562169596, 1732687297164, 1731845313631, 1730679768573, 1734855859673, 1732788155879, 1732527865834 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_Jcan" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_fzvQ" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_fzvQ" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_BB3F" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_Jcan" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_9vs1" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_Jcan" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_9vs1" ], [ "ICLR.cc/2025/Conference/Submission13503/Area_Chair_i5p7" ], [ "ICLR.cc/2025/Conference/Submission13503/Authors" ], [ "ICLR.cc/2025/Conference/Submission13503/Reviewer_BB3F" ] ], "structured_content_str": [ "{\"comment\": \"Ablation on Planning TSP solvers are limited to fully observable domains. The OR methods either assume full observability or (methods like POMCP [7]) are too inefficient for our purpose. The current planner being used (modified PO-UCT) can be turned into a greedy planner by reducing the look-ahead depth (currently 12) to 1 step. We can run and present results for different depths (from 1-4) to show the importance of look-ahead for planning.\\n\\nI understand that TSP and OR methods are fully observable in nature. But can they leverage the belief and abstraction module to plan even under partial observability. I mean can the authors decouple the aforementioned modules from their pipeline and make it work with any downstream planner. Also if partial observability is the limitation, how does the proposed method compare to TSP and OR under full observability? 
I would also like to see the impact of the look-ahead depth on the planning performance.\", \"motivation_for_multiroomr\": \"The main motivation for the MultiRoomR dataset is to have more rooms and more objects than existing datasets (RoomR and ProcThorRerrangement), and most importantly, blocked path scenes that do not exist in any dataset. For details on the generation of the Novel dataset and room-object distribution, please refer to the common response.\\n\\nNearly 70% of the dataset is composed of only 2 room scenarios, similar to ProcThorRearrangement. Only new addition, the dataset brings to the community is the blocked path scenario.\"}", "{\"title\": \"Official Comment by Reviewer fzvQ\", \"comment\": \"Thank you for your clarifications and comments.\\n\\nAfter reading the rest of the reviews and comments, \\n* I agree with Reviewer BB3F about adding the details of the actual setup, it will enhance the paper's readability.\\n* I agree with Reviewer Jcan about the clarity of motivation for MultiRoomR.\\n\\nPlease revise the submission with what you provided in the responses.\\n\\n> We provide results for the flat object-oriented POMDP here(ablation on hierarchy).....\\n\\nThank you for providing these results. Now, it is clear that the proposed hierarchical planning is essential. Thus, I raise my score to 6 instead of 3.\\n \\n**Why not higher?** \\n\\nI still believe that the novelty is limited. \\n> Extend OO-POMDP originally designed for object search to rearrangement tasks.\\n\\nDespite explaining the extended problem formulation, I think it is not enough for a higher score. I have raised the score due to the shown increase in performance using hierarchical planning, which aligns with the second novelty mentioned.\\n\\nFurthermore, it is hard to evaluate the performance of this work without providing any direct comparison to other baselines (also commented by other reviewers). I fully understand the constraints mentioned in the responses (unavailability of codes and different problem settings), so I do not consider this drawback. However, I believe revising the submission with the reasons provided in the responses is important.\"}", "{\"comment\": \"We thank the reviewer for these constructive comments.\\n\\n> \\u201cDiscussion of limitations. The authors address the example I provided, which helps flesh out the limitations sections, but that is not sufficient for addressing the critique. \\u201cThis assumption is fairly strong, and presents a stumbling block in environments where object classes might not be fully known [...]\\u201d I would like the authors to expand upon their analysis here. What could one do if the state wasn't factored? How would they imagine future work (I note that Section 7 is titled 'Conclusion and Future Work' but does not mention any future work).\\u201d\\n\\n**Limitations:**\\n\\nYes, currently, we cannot handle an unknown class of objects. We could potentially handle them by categorizing all of the known object types into a single \\u2018unknown\\u2019 class. The difficult part, however, is to plan to find an empty space to move the unknown object to. In the worst case, this could lead to complicated packing problems which are NP-hard, but assuming that the space is relatively free, it can be handled with a little more additional search. \\n\\n**Future Work**: \\n\\nOne of the ways to expand the scope is to relax the assumption of object independence partially. 
We can allow objects to be dependent on a small number of objects (e.g., objects in their close vicinity). Belief updates can now consider a small set of objects at any time. This relaxation helps maintain the efficient belief update while accounting for more real-world situations such as object-object interaction. Another potential future work is to handle stacking of objects and more cramped spaces, where more careful reasoning about object interactions is needed to plan the actions and order them appropriately. \\n\\nWe will add a summary of the above in the paper and fix the nits. We will upload an updated PDF with all of the changes discussed in the rebuttal period soon.\"}", "{\"title\": \"Citations for response\", \"comment\": \"[1] Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, and Katerina Fragkiadaki. 2022. TIDEE: Tidying Up Novel Rooms Using Visuo-Semantic Commonsense Priors. In Computer Vision \\u2013 ECCV 2022.\\n\\n[2] Kant, Y.; Ramachandran, A.; Yenamandra, S.; Gilitschenski, I.; Batra, D.; Szot, A.; and Agrawal, H., Housekeep: Tidying Virtual Households using Commonsense Reasoning. In European Conference on Computer Vision, 2022. \\n\\n[3] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick. Task Planning for Visual Room Rearrangement under Partial Observability. In The Twelfth International Conference on Learning Representations, 2024. \\n\\n[4] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick.Task Planning for Object Rearrangement in Multi-Room Environments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp.10350\\u201310357, 2024.\"}", "{\"summary\": \"This paper proposes a framework for the multi-object rearrangement problem within a Partially Observable Markov Decision Process (POMDP) setting. The authors introduce a hierarchical, object-oriented POMDP framework that utilizes a high-level planner to generate sub-goals and deploys low-level policies to accomplish these sub-goals effectively. To benchmark their approach, the authors present a new dataset, \\u201cMulti RoomR,\\u201d designed to address more complex scenarios. This dataset includes a larger number of objects (10 objects) and more extensive environments (2-4 rooms), providing a more challenging testbed. The authors evaluate different variants of their method on this new dataset and two existing benchmarks, demonstrating that their framework achieves performance comparable to the method with perfect knowledge, even under partial observability constraints.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors demonstrated that their framework in a partial observability setting achieves results comparable to those obtained with perfect knowledge.\\n2. The new dataset introduces more complex scenarios, which is evident from the performance gap. However, additional details about these scenarios would strengthen their contribution, and I suggest they provide further explanation, perhaps in the appendix, to clarify the dataset\\u2019s design and the specific challenges it presents.\", \"weaknesses\": \"1. I find the work is incremental with limited novelty \\u201cZheng et al. (2023) and Zheng et al. (2022) extend this formulation to perform object search in 3D environments. However, they are all limited to the task of object search and do not include any tasks that require rearrangement. 
In our work, we build on their formulation of object-oriented POMDP and extend it to include rearrangement actions and their corresponding belief updates.\\u201d (lines 128-132). Despite, I find you have added a hierarchical POMDP planning as well, but I find it already in the literature, for example [1]. Adding a more distinct methodological advancement or exploring further applications beyond rearrangement might strengthen the impact of this work.\\n\\n2. The paper lacks details about the newly introduced dataset, which is a key part of the stated contributions in the introduction. For instance, specifics on the types of objects included and the rationale behind their selection are missing. Additionally, there\\u2019s little information on how the scenarios were designed\\u2014such as the criteria for object placement, room configuration, or how these factors contribute to the complexity of the rearrangement tasks. Providing this information, perhaps in the appendix or a dedicated section, would give readers better insight into the dataset's structure and its intended challenges, thus strengthening the contribution.\\n\\n3. Their method is evaluated only against variants of itself, lacking comparisons with other baseline approaches. This makes it difficult to assess the true advantages of their approach. I would expect, at a minimum, an ablation study that removes the object-oriented hierarchical planning component to demonstrate its effectiveness compared to flat planning methods.\", \"questions\": \"**Minor Improvements (Not considered in the score)**\\n\\n1. BeliefUpdate instead of UpdateBelief function \\u2192 line 10 in Algorithm 1\\n\\n**Questions:**\\n\\n1. Could you clarify your statement at the end of Section 1, where you describe the system as \\\"an end-to-end planning system\\\"? My understanding is that the detection model and low-level policies are trained independently, suggesting a modular rather than fully end-to-end approach.\\n2. What is the last row of Table1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses a variation of a challenging POMDP setting, an object rearrangement task over multiple rooms, with imperfect object detection. The authors introduce a hierarchical approach to a solver, with planning over a computed abstract state, and trained low-level policies to execute high-level plans. The work tests this method on existing object rearrangement tasks, and introduces a dataset of additional, harder tasks, based on the AI2Thor simulator. The authors also provide experimental evaluations of their solver on these domains.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper has several strengths, with a new method proposed for a relatively novel setting, with experimental results and a contribution to the field in the form of a dataset of tasks. On originality, the paper tackles a more challenging extension of the multi-room rearrangement problem, i.e. adding imperfect detection and integrated decision making. They additionally do not assume perfect navigation, motion planning or manipulation, instead having trained or computed low-level policies. 
The work contributed is mostly of high quality (with some reservations described below), with experimental results that demonstrate their method works in harder domains (that were also contributed and represent a meaningful improvement over the current domains). The lack of assuming perfect object detection and perfect low-level control renders this contribution significant to the field.\", \"weaknesses\": \"However, there are some concerns with the paper. Firstly, the paper's presentation and clarity desperately needs to improve. Specifically around sections 3, 4.1, and 4.2, there are many open questions, missing details and unclear statements around the task definition and setup. For instance, how the agent works with the 2D and 3D maps, how does it learn information about the receptacles at the beginning, or how the observations are discretized before being passed to the belief update is unclear. These can impact the evaluation of the results, making the environment and task easier than initially understood, and could potentially rely on unrealistic assumptions, for which is there is also no discussion on. I have listed a lot of questions I had around this section later on, and I can only be confident in the results provided, if the authors can improve the clarity of the paper here.\\n\\nAdditionally, there are other concerns with the results. The authors claim in Section 6 that existing baselines differ from other work in key aspects. I think a detailed comparison between your domain and method, versus other selected variants of the task and accompanying methods would significantly improve the quality of the paper's results. At the moment, all the authors say is:\\n\\n> The primary distinction lies in the prior knowledge available to our system: we are given information\\nabout the classes of objects to be moved, whereas other systems operate without this advantage.\\nbut do not cite other systems or prior work that studies the other settings. \\n\\nFurther, they claim that:\\n\\n> In particular, while existing systems report initial visibility of approximately 60% of target objects\\nat the outset of their tasks, our scenarios present a more demanding exploration challenge. Only\\nabout 20% of the objects are initially visible in our problem settings, necessitating more extensive\\nand strategic exploration. \\n\\nbut again, lack a reference for this information. Additionally, from a brief overview, it appears that [Mirakhor et al. 2024](https://ojs.aaai.org/index.php/AAAI/article/view/28902) (note, it was published before the period ICLR allows for concurrent work) is a relevant comparison, as they operate in similar conditions (i.e. multiple rooms, rearrangement task, with similar setups such as swap and blocked goal cases) and provide similar contributions (i.e. novel planner and new dataset). I want to see how the proposed method and provided datasets compare, to contextualize the effectiveness of this approach. \\n\\nLastly, the authors mention this (lines 64, 508), but do not elaborate on this in the Limitations section. The proposed method requires a factored object-oriented state representation, and the proposed belief update and abstraction method rely on this fact as well (see enumerations over objects in Algorithm 2 and \\\"Generating Abstract State\\\"). This assumption is fairly strong, and presents a stumbling block in environments where object classes might not be fully known, i.e. imagine you see an unknown object that's blocking the goal location for a known object. 
I'm not suggesting that the proposed method needs to be able to handle such cases, but a fuller discussion of limitations needs to go beyond just the independence assumption and include ideas of how this might be relaxed in the future or how other methods in literature handle such cases. \\n\\nMy suggestion is to combine and extend the limitations and comparison to existing baselines section and address both weaknesses together. I would be willing to increase my score if these concerns around the clarity of the paper, the discussion around baselines and the discussion of limitations of the method were improved.\", \"questions\": [\"Questions:\", \"it is not clear how the agent generates the 2D map. The paper says it discretizes the world into grids of size 0.25m, but how is that done? Does the environment provide it? If not, how is it computed from the sequences of observations during the exploration phase? It's even less clear how the agent generates the 3D map mentioned in line 154.\", \"The setup of the task is unclear. Is the first phase of exploration (where the agent \\\"traverses the world\\\" and \\\"gains location information about the receptacles\\\") something that the agent has to to plan how to do and output a sequence of actions? Or is this information provided by the environment? How does the agent know what the type of the object is during this phase? From the previous paragraph, the environment simply outputs the RGBD image of the current view from the agent POV, the location and if the action was successful. If the agent does have to do this, you must include more detail about how this works, and how it interacts with the planning system provided.\", \"The definition of the abstract POMDP is not clear. What change does the 'object independence' assumption make to the mathematical formulation of the OO-POMDP provided above? You provide some detail in the observation model bullet point but you reference \\\"conditional independence\\\" here and \\\"object independence\\\" above, so it's not clear to me what the relationship is between this and the above's \\\"abstract\\\" nature. My suggestion is to describe the overall system first (like the initial paragraph of section 4.3), since that provides much needed context to understand your formalism. Alternatively, I would try and make the abstraction system clearer when you define the abstract POMDP. The current presentation is very confusing.\", \"How are the object locations in the ground truth image observation discretized to the 2D map? is this done by the environment or the perception system? The understanding is the perception system just runs object detection and grabs the 3D location via the depth map.\", \"Nits (not affecting score):\", \"typo on line 87 in caption (sawp -> swap)\", \"typo on line 354 (the OO-POMDP planner,uses Partially)\", \"typo in line 162, the title of section 4 should have the full form of the acronym, HIERARCHICAL OBJECT ORIENTED POMDP (HOO-POMDP), and should have a space between oriented and the parenthesis.\", \"formatting of line 230 is wrong (should be in latex math mode, something like: [$cost = -1 \\\\times N_{a}$] where $N_{a}$ is the number of required actions).\", \"typo in line 248, space should be there between z and This\", \"the section in lines 196 to 201 are really difficult to read because of how compressed the math definitions are. 
it would be good to expand them to be easier to read.\", \"typo in line 212 (null is malformed)\", \"no need to redefine the acronym in line 316\", \"space around hypen in line 323\", \"randomly repeated twice in line 369\", \"A* is formatted incorrectly in line 380.\", \"missing period on line 452\", \"minor, but please bold the best result in the results Table 1, or otherwise easily indicate which performed the best.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for these constructive comments.\\n\\n>Q1: It is still unclear what the high-level actions are and what makes something a subgoal.\\n\\nThe high-level actions are (action space of the Abstract POMDP Planner): \\n1. **MoveAB** - move action that moves the agent from location A to location B\\n2. **Rotate_angle** - The rotate action rotates the agent to a given angle\\n3. **PickPlace** - The PickPlace action picks Object_i from the current position of the robot and places it at the given goalloc\\n\\nA **subgoal is an instantiated high-level action**. For example, **MoveAB((5,5), (10,10))** is a subgoal instantiated from the high-level action MoveAB. (An instantiated action sequence is what the abstract planner outputs after planning). The low-level policies are initialized using this information - A* is initialized with the starting position (5,5) and goal position (10,10) to find a sequence of low-level actions to move between these locations. \\n\\n> Q2: \\u201cThe authors seem to have missed the point. The question is: At the execution time, given the state is unknown and only access to observation is available, how are actions determined to have failed? A failure can only be known if the actual state of the system is known - making it an MDP and not a POMDP\\u201d\\n\\nBy definition, in a POMDP, we do not know the full state of the world and hence maintain a belief over all possible states. In our case, the state is represented by (s_r, s_objects) - the state of the robot (s_r), and state of the objects (s_objects)\\n\\nAction success/failure information is part of s_r (along with robot position (x,y, pitch, yaw)). This part of the state is fully known at any given time from the observation z_r, which is deterministic (the simulator gives agent position along with success/failure information at each step). **What is NOT known is the s_objects - the locations of the objects in the world**. For this, we get only PARTIAL information about the world - in the form of RGB and depth images (which is a first-person view, so only a small part of the environment is visible at any given time). \\nOur perception system converts this to an observation based on detection (which can also fail), and we update our belief based on this. \\n\\n>Q3: \\u201cThe third point raises a few more questions (and sorry for raising them now): How does A* work for a POMDP? How is RL model trained in a POMDP?\\u201d\\n\\nBoth A* and RL are policies that work at a low level and are not affected by the partial knowledge of the world. \\n\\nWe train 2 RL policies - Pick RL and Place RL. They take the RGB and Depth images as input along with the object information (object name for Pick policy and goal location for Place policy) \\nBoth of these are meant to interact with a single object at any given time and do not use the full state (which is unknown due to partial observability). 
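To make the dispatch concrete, here is a minimal illustrative sketch of how an instantiated PickPlace subgoal could be routed to the three low-level controllers (Pick RL, A*, Place RL). It is pseudocode only: the function and attribute names are hypothetical and not our actual implementation, but it follows the control flow described above.

```python
# Illustrative sketch only; names are hypothetical, not the actual implementation.
def execute_pickplace(subgoal, agent, obstacle_map, nav_planner, pick_rl, place_rl):
    """Route an abstract PickPlace(object_class, goal_loc) subgoal to low-level policies."""
    # 1. Pick: the Pick RL policy sees only RGB-D + the object class, never the full state.
    if not pick_rl.act(agent.rgbd(), subgoal.object_class):
        return False  # pick failed; the abstract planner replans, e.g. from another pick location

    # 2. Move: A* runs on the 2D obstacle map of stationary objects to the closest
    #    traversable cell next to the goal location.
    nav_goal = obstacle_map.nearest_free_cell(subgoal.goal_loc)
    for low_level_action in nav_planner.search(obstacle_map, agent.position(), nav_goal):
        agent.execute(low_level_action)

    # 3. Place: the Place RL policy places the held object at the goal location.
    return place_rl.act(agent.rgbd(), subgoal.goal_loc)
```

Any failure along the way shows up in the next observation, and the abstract planner replans accordingly.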
\\n\\nFor example, **PickPlace (book, (11,11) )** and the agent\\u2019s position is at (5,5) currently. This means we must pick the object book from the current location and place it at (11,11).\", \"how_this_gets_broken_down_and_solved_is_as_follows\": \"1. First, the Pick RL policy is called to Pick(Book) in its vicinity only (note that if there is no book nearby, then the pick will simply fail). In case of failure, the abstract POMDP planner is expected to instantiate pick from a different location to be able to pick this object. \\n2. Once pick has happened -> Recall that we have a 2D obstacle map of the world (of only stationary objects). We compute the nearest position to (11,11), that is free to move to based on this obstacle map. Let\\u2019s say this is (10,10). A* is initialized - to move from (5,5) to (10,10). A* finds a sequence of actions to move from (5,5) to (10,10) while avoiding these obstacles in the 2D map.\\n( (It plans a path as if there is nothing on the way - if our agent finds something on the way later (in a blocked goal setting), the abstract POMDP planner is expected to output a different high-level goal during re-planning - which happens after each low-level action has been taken))\\n3. Place RL: Once we are at (10,10), the place policy takes over and tries to place the book at (11,11). \\n\\nNote that, in all cases, we only used known information for the low-level policies\\u2014the location of the obstacles for A*. The RL policies also only care about the object of interest and not the state of any other objects.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Responses to weaknesses\", \"comment\": \"Response continued\\n\\n>W2: \\\"Additionally, there are other concerns with the results. The authors claim in Section 6 that existing baselines differ from other work in key aspects. I think a detailed comparison between your domain and method, versus other selected variants of the task and accompanying methods would significantly improve the quality of the paper's results. \\u2026 \\u201d\\n\\n**Missed citations:** Thank you for pointing it out. We will add citations for both of these. For the claim about other papers needing extra information, it is every other system that does rearrangement, we will cite the most relevant ones in the paper (Sarch et al.[1], Kant et al.[2] and Mirakhor et al. [3,4]). **For the claim of 60%, it is taken from MiraKhor et al.[4], table 1** (where #V/#O gives the initially visible percentage, which is 60% for their settings). \\n\\n>W3: \\u201cI want to see how the proposed method and provided datasets compare, to contextualize the effectiveness of this approach.\\u201d\\n\\nWe were unable to get the code or their dataset from the authors (after multiple requests) of Mirakhor et al [4] and hence unable to make a direct comparison.\\n\\n>W4: \\u201cLastly, the authors mention this (lines 64, 508), but do not elaborate on this in the Limitations section. The proposed method requires a factored object-oriented state representation, and the proposed belief update and abstraction method rely on this fact as well. \\u2026 a fuller discussion of limitations needs to go beyond just the independence assumption and include ideas of how this might be relaxed in the future or how other methods in literature handle such cases.\\u201d\\n\\nIndeed, we cannot currently handle cases where an unknown object is in the way. 
A simple way to handle this is to group all unknown objects into one class, and **whenever an unknown class object blocks a path, we place it into an empty receptacle** (similar to how we handle swap/blocked goal cases). We will add this in the limitations section. \\n\\n[1] Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, and Katerina Fragkiadaki. 2022. TIDEE: Tidying Up Novel Rooms Using Visuo-Semantic Commonsense Priors. In Computer Vision \\u2013 ECCV 2022.\\n\\n[2] Kant, Y.; Ramachandran, A.; Yenamandra, S.; Gilitschenski, I.; Batra, D.; Szot, A.; and Agrawal, H., Housekeep: Tidying Virtual Households using Commonsense Reasoning. In European Conference on Computer Vision, 2022.\\n\\n[3] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick. Task Planning for Visual Room Rearrangement under Partial Observability. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n[4] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick.Task Planning for Object Rearrangement in Multi-Room Environments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp.10350\\u201310357, 2024.\"}", "{\"title\": \"Response part 1\", \"comment\": \"We thank the reviewer for these constructive comments.\\n\\n>W1: \\u201cDon\\u2019t you think it would be better to use some commonsense knowledge about the object-receptacle-room relationships as a bias for belief initialization and update? Because in indoor scenarios, multiple methods have shown that the application of commonsense knowledge for indoor environments aids in planning - Sarch et al.[1], Kant et al.[2] and Mirakhor et al. [3,4].\\u201d\\n\\n**Common-sense priors:**\\nYes, for an unseen object, the planner starts execution based on a randomly sampled location. While common-sense priors can help, a common application of rearrangement is cleaning homes, and untidy homes may only sometimes follow common-sense priors. Our system can handle all cases where objects can be anywhere with similar efficiency. It is also **easy to add common-sense priors** to our framework (by changing initial belief from random to initialized based on common-sense priors) if we have more information about the domain we operate in (for ex: warehouse).\\n\\n>W2: \\u201c\\u200b\\u200b This paper shows no comparison study with the existing state-of-the art (SOTA) method. \\u2026. What is the reference or study for the claim on the existing methods initial visibility being approximately 60%? Please show empirically how difficult the problem becomes with variations in the percentage of initial object visibility.\\u201d\\n\\n>W6: \\u201cMetrics and problem configurations : To show the efficacy of planning, will it now be more beneficial to show the time and distance the agent took to solve the entire task? Moreover, to highlight the difficulty of partial observability, can you specify how many or what percent of objects are initially visible and how many actions or time does the agent take to find them?\\u201d\\n\\n**Comparison to SOTA methods**\\nIt is important to note that our system addresses a **variant of the multi-object rearrangement problem** that differs in key aspects from those tackled by prior works (Gadre et al.[5], Sarch et al.[1], Trabucco et al.[6], and Mirakhor et al.[4]). While our approach leverages prior knowledge of target object classes, this design choice enables broader generalization capabilities. 
In existing approaches, **where such information is not provided, agents must perform a walkthrough phase for each new goal configuration to identify movable objects**. In contrast, our formulation **requires only a single initial walkthrough** to map stationary objects in the environment. Subsequently, our system can **efficiently handle multiple goal configurations** without additional walkthroughs, significantly enhancing its adaptability to diverse scenarios. This fundamental difference in problem formulation makes direct performance comparisons potentially misleading despite operating in similar environments.\\n\\nWe acknowledge this limitation in direct comparability but believe our results demonstrate the effectiveness of our approach in solving a practically relevant variant of the rearrangement problem. We will revise Section 6 to make these distinctions clearer and better contextualize our contributions relative to prior work.\\n\\n**Clarification on Object Classes**: - We do not restrict which objects can be moved - Rather, for each specific rearrangement task, the system needs to know which object classes are targets - This allows flexibility while maintaining efficient exploration and planning.\\n\\nFor the claim about 60%, **it refers to MiraKhor et al., table 1**, where #V / #O gives the initially visible percentage, which is 60% for all settings in their settings. We will add this information in the paper along with the visibility percentage of objects in our datasets. \\n\\n**New Metrics**: We will also add a separate table in the appendix with results on how much time was taken to complete the tasks.The distance the agent took to solve the problem is directly proportional to the total actions taken and, hence, not very beneficial in providing any new insight into the method's effectiveness. \\n\\n>W3: \\u201cTo understand whether this method is scalable to an increasing number of objects and rooms, more results need to be shown with the number of objects varying from 5, 10, 15 say up to 20 on the same dataset\\u2026. This makes it difficult to establish a trend for results with an increasing number of objects and rooms.\\u201d\\n\\n**Scalability**: We currently have results for 2,3,4 rooms with the same number of objects (10) in the MultiRoomR dataset. RoomR and ProcThor dataset results also show this - both datasets have 5 objects but are different in the number of rooms (1-2). This shows how the method behaves with different numbers of rooms with the same number of objects.\\n\\nWe will provide results for larger numbers of objects and add them in the paper (up to 20)\"}", "{\"comment\": \"We thank the reviewer for these constructive comments.\\n\\n>Q1: Could you clarify your statement at the end of Section 1, where you describe the system as \\\"an end-to-end planning system\\\"? My understanding is that the detection model and low-level policies are trained independently, suggesting a modular rather than fully end-to-end approach.\\n\\nThis is a valid point that it is not an end-to-end trained system. We will use the terminology of modular system in the paper. \\n\\n>Q2: \\u201cWhat is the last row of Table 1?\\u201d\\n\\n The last row of the table is our system run in the Multi-room setting with **10 objects for 3-4 room** settings (the previous two rows are for 1-2 room settings). We will remove that horizontal line to avoid any confusion.\\n\\n>W1: \\u201cI find the work is incremental with limited novelty\\u2026 . 
Despite, I find you have added a hierarchical POMDP planning as well, but I find it already in the literature, for example [1]. Adding a more distinct methodological advancement or exploring further applications beyond rearrangement might strengthen the impact of this work.\\u201d\\n\\n**Novelty 1:** Extend OO-POMDP originally designed for object search to rearrangement tasks. (state abstraction by factoring state based on objects)\\\\\\n**Novelty 2:** We further extend this Rearrangement OO-POMDP to a hierarchical planning setting (through action abstraction). \\nWe will add this clarification about the exact novelty in the paper. \\n\\n>W2: \\u201cThe paper lacks details about the newly introduced dataset, which is a key part of the stated contributions in the introduction.\\u2026 Providing this information, perhaps in the appendix or a dedicated section, would give readers better insight into the dataset's structure and its intended challenges, thus strengthening the contribution.\\u201d\\n\\nPlease refer to the **common response** for details about the dataset. \\n\\n>W3: \\u201cTheir method is evaluated only against variants of itself. I would expect, at a minimum, an ablation study that removes the object-oriented hierarchical planning component\\u201d\\n\\nWe provide results for the flat object-oriented POMDP here(ablation on hierarchy). **OURS represents our method, OURS-HP represents the approach without the hierarchical planning** in the table below which clearly shows the importance of hierarchy and abstraction (The planner directly outputs low-level actions). Unfortunately object-oriented representation and the independent belief update are too critical to the method in that without them even the simplest of problems are not going to be solved in a reasonable time. We will add this in the appendix. \\n\\n| Dataset | Objs | #BG | #Swap | #BP | #RM | OURS (SS) | OURS (OSR) | OURS (TA) | OURS-HP (SS) | OURS-HP (OSR) | OURS-HP (TA) |\\n|------------------|------|------|--------|------|------|-------------|-------------|-------------|----------------|----------------|----------------|\\n| **RoomR** | 5 | 1 | 0 | 0 | 1 | **49** | **71** | **211** | 13 | 33 | 302 |\\n| | 2 | 2 | 2 | 1 | 1 | **39** | **61** | **289** | 8 | 27 | 392 |\\n| **Proc** | 5 | 1 | 0 | 0 | 2 | **46** | **68** | **352** | 9 | 29 | 410 |\\n| | 2 | 2 | 1 | 2 | 1 | **31** | **53** | **398** | 4 | 19 | 565 |\\n| **Multi RoomR** | 10 | 1 | 0 | 0 | 2 | **32** | **65** | **710** | 5 | 25 | 1029 |\\n| | 10 | 2 | 1 | 1 | 2 | **21** | **49** | **789** | 2 | 19 | 1092 |\\n| | 10 | 2 | 1 | 1 | 3-4 | **18** | **44** | **1321** | 1 | 7 | 1549 |\"}", "{\"title\": \"Response Part 2\", \"comment\": \"Response continued\\n\\n>W4: \\u201cThe two baselines - PK and PD used in the paper study only the perception efficacy, what about the planning efficacy? Can you replace your planner with some alternatives such as a classical traveling salesman problem (TSP) solver, an optimizer based OR-Tools[7] planner, a greedy planner etc. This will give an insight of how close to the optimal is this planner and how much of an improvement is this method over the heuristic strategies.\\u201d\\n\\n**Ablation on Planning**\\nTSP solvers are limited to fully observable domains. The OR methods either assume full observability or (methods like POMCP [7]) are too inefficient for our purpose. The current planner being used (modified PO-UCT) can be turned into a greedy planner by reducing the look-ahead depth (currently 12) to 1 step. 
We can run and present results for **different depths (from 1-4)** to show the importance of look-ahead for planning. \\n\\n>W5: \\u201cAs far as I know, ProcThor has multi-room scenarios with up to 5 rooms and about 20 objects. But, the authors have stated in Line 418-420 about ProcThor having - \\u201c2 rooms, 5 objects\\u201d. Are you sure about this? This begs the question regarding the motivation of the new dataset - Multi RoomR? What was missing in ProcThor? What is the object, receptacle, room type distribution in the new dataset? How do we gauge the complexity of this new dataset, if there are no comparison results with SOTA methods?\\u201d\\n\\nYes, you are right, the ProcThor dataset itself has multi-room scenarios with more rooms (up to 10). We meant the **ProcThor Rearrangement dataset** (<https://github.com/allenai/procthor-10k/tree/rearrangement-2022>), which contains multi-room settings with 2 rooms and 5 objects. We will correct this in the paper. \\n\\n**Motivation for MultiRoomR:** The main motivation for the MultiRoomR dataset is to have more rooms and more objects than existing datasets (RoomR and ProcThorRerrangement), and most importantly, blocked path scenes that do not exist in any dataset. \\nFor details on the generation of the Novel dataset and room-object distribution, please refer to the **common response**. \\n\\n[1] Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, and Katerina Fragkiadaki. 2022. TIDEE: Tidying Up Novel Rooms Using Visuo-Semantic Commonsense Priors. In Computer Vision \\u2013 ECCV 2022. \\n[2] Kant, Y.; Ramachandran, A.; Yenamandra, S.; Gilitschenski, I.; Batra, D.; Szot, A.; and Agrawal, H., Housekeep: Tidying Virtual Households using Commonsense Reasoning. In European Conference on Computer Vision, 2022. \\n[3] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick. Task Planning for Visual Room Rearrangement under Partial Observability. In The Twelfth International Conference on Learning Representations, 2024. \\n[4] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick.Task Planning for Object Rearrangement in Multi-Room Environments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp.10350\\u201310357, 2024. \\n[5] Gadre, S.Y., Ehsani, K., Song, S., & Mottaghi, R. (2022). Continuous Scene Representations for Embodied AI. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14829-14839. \\n[6] Brandon Trabucco and Gunnar A Sigurdsson and Robinson Piramuthu and Gaurav S. Sukhatme and Ruslan Salakhutdinov, A Simple Approach for Visual Room Rearrangement: 3D Mapping and Semantic Search, The Eleventh International Conference on Learning Representations, 2023. \\n[7] Silver D, Veness J. Monte-Carlo planning in large POMDPs. Advances in neural information processing systems. 2010;23\"}", "{\"summary\": \"The paper deals with the problem of Multi-room rearrangement using a Hierarchical POMDP approach. The problem is difficult as it involves a number of difficulties including combinatorial expansion in complexity with increasing number of objects, partial observability due to limited field of view, scalability etc. The paper tackles the partial observability by maintaining an object oriented belief state to account for the possible locations of the objects. From this belief, the object state is abstracted which indicates whether the object is picked, placed, is held etc. 
Based on this state space for all the objects, the POMDP planner generates a high-level action to be executed such as the PickPlace, Move, Rotate etc. These high-level actions are then executed by the low-level policies which are heuristic as well as RL based. The paper claims to achieve a scalable and efficient rearrangement plan in Multi-room rearrangement scenarios. The paper also introduces a novel dataset MultiRoom R.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles a very complex and understudied problem of Multi-room rearrangement.\", \"Usage of POMDP based planners to effectively address uncertainities in a large multi-room space is very interesting.\", \"The supplementary video was good.\", \"The paper also presents a new dataset Multi RoomR which includes \\\"blocked path scenarios\\\" as an additional complexity in rearrangement.\"], \"weaknesses\": \"- **Uncertainty over objects start locations :** As stated in the paper line 338-339 : \\u201cThe location $pick_i$ is sampled based on the belief distribution of where the object could be\\u2026\\u201d So for an unseen object, is the initial belief distribution a uniform probability over the entire 2D map? If yes, doesn\\u2019t it imply that your planner starts execution based on a randomly sampled location? Doesn\\u2019t it make planning less effective? Moreover, for predicting $loc_i$ of each object, wouldn\\u2019t the use of any common sense priors make the $loc_i$ prediction better and lead to faster convergence?\\n - Don\\u2019t you think it would be better to use some commonsense knowledge about the object-receptacle-room relationships as a bias for belief initialization and update? Because in indoor scenarios, multiple methods have shown that the application of commonsense knowledge for indoor environments aids in planning - Sarch et al.[1], Kant et al.[2] and Mirakhor et al. [3,4].\\n- **Comparison Results :** This paper shows no comparison study with the existing state-of-the art (SOTA) methods. As stated in Sec 2 : Related Work - Mirakhor et al. [4] addresses the multi-room rearrangement problem. Even other rearrangement methods that show results in single room rearrangement such as Gadre et al.[5], Sarch et al.[1] and Trabucco et al.[6] should be compared with to ground the claims of this paper. In fact, all these methods have shown their results in Ai2Thor. As stated in Line 507-508 the author mentions that they have an advantage over the SOTA method because \\u201cclasses of objects to be moved are known\\u201d - Is it known for every rearrangement scenario, if so why do you need it? Or Is this method limited to only these classes of objects? Please clarify. Also, as you stated in Line 509-513 that your problem initialization is much more difficult due to initial object visibility. What is the reference or study for the claim on the existing methods initial visibility being approximately 60%? Please show empirically how difficult the problem becomes with variations in the percentage of initial object visibility.\\n- **Scalability :** To understand whether this method is scalable to an increasing number of objects and rooms, more results need to be shown with the number of objects varying from 5, 10, 15 say up to 20 on the same dataset. Similar results can be shown with the number of rooms varying from 2,3,4 & 5, keeping the number of objects constant. 
Presently, the paper shows results for only 5 objects on RoomR and Procthor, whereas it shows results for 10 objects on Multi-RoomR. This makes it difficult to establish a trend for results with an increasing number of objects and rooms.\\n- **Ablation study :** The two baselines - PK and PD used in the paper study only the perception efficacy, what about the planning efficacy? Can you replace your planner with some alternatives such as a classical traveling salesman problem (TSP) solver, an optimizer based OR-Tools[7] planner, a greedy planner etc. This will give an insight of how close to the optimal is this planner and how much of an improvement is this method over the heuristic strategies.\\n- **Novel Dataset (Multi RoomR) :** As far as I know, ProcThor has multi-room scenarios with up to 5 rooms and about 20 objects. But, the authors have stated in Line 418-420 about ProcThor having - \\u201c2 rooms, 5 objects\\u201d. Are you sure about this? This begs the question regarding the motivation of the new dataset - Multi RoomR? What was missing in ProcThor? What is the object, receptacle, room type distribution in the new dataset? How do we gauge the complexity of this new dataset, if there are no comparison results with SOTA methods?\\n- **Metrics and problem configurations :** To show the efficacy of planning, will it now be more beneficial to show the time and distance the agent took to solve the entire task? Moreover, to highlight the difficulty of partial observability, can you specify how many or what percent of objects are initially visible and how many actions or time does the agent take to find them?\\n\\n[1] Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, and Katerina Fragkiadaki. 2022. TIDEE: Tidying Up Novel Rooms Using Visuo-Semantic Commonsense Priors. In Computer Vision \\u2013 ECCV 2022.\\\\\\n[2] Kant, Y.; Ramachandran, A.; Yenamandra, S.; Gilitschenski, I.; Batra, D.; Szot, A.; and Agrawal, H., Housekeep: Tidying Virtual Households using Commonsense Reasoning. In European Conference on Computer Vision, 2022.\\\\\\n[3] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick. Task Planning for Visual Room Rearrangement under Partial Observability. In The Twelfth International Conference on Learning Representations, 2024.\\\\\\n[4] Karan Mirakhor, Sourav Ghosh, Dipanjan Das, and Brojeshwar Bhowmick.Task Planning for Object Rearrangement in Multi-Room Environments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp.10350\\u201310357, 2024.\\\\\\n[5] Gadre, S.Y., Ehsani, K., Song, S., & Mottaghi, R. (2022). Continuous Scene Representations for Embodied AI. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14829-14839.\\\\\\n[6] Brandon Trabucco and Gunnar A Sigurdsson and Robinson Piramuthu and Gaurav S. 
Sukhatme and Ruslan Salakhutdinov, A Simple Approach for Visual Room Rearrangement: 3D Mapping and Semantic Search, The Eleventh International Conference on Learning Representations, 2023.\\\\\\n[7] Laurent Perron and Fr\\u00e9d\\u00e9ric Didier, Google, https://developers.google.com/optimization/cp/cp_solver/\", \"questions\": \"Same as the Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer questions\", \"comment\": \"We thank the reviewer for these constructive comments.\\n\\n\\n>\\u201cQ1: it is not clear how the agent generates the 2D map. The paper says it discretizes the world into grids of size 0.25m, but how is that done? Does the environment provide it? If not, how is it computed from the sequences of observations during the exploration phase? It's even less clear how the agent generates the 3D map mentioned in line 154.\\n\\n>Q2: The setup of the task is unclear. Is the first phase of exploration (where the agent \\\"traverses the world\\\" and \\\"gains location information about the receptacles\\\") something that the agent has to to plan how to do and output a sequence of actions?... If the agent does have to do this, you must include more detail about how this works, and how it interacts with the planning system provided.\\u201d\\n\\n\\n**Overall Task setup and map building:**\\nRearrangement is done in **2 phases**. **Walkthrough** phase and **rearrange** phase. The walkthrough phase is meant to get information about stationary objects. The 2D occupancy map is generated in this phase, as well as the corresponding 3D Map. First, we get the size of the house(width and length) information from the environment. Then, we uniformly sample points in the environment then take steps to reach these locations (if possible - some might be blocked). This simple algorithm ensures we go all around the house and see every part of it. At each of the steps involved in reaching these locations, we receive the RGB and Depth image from the environment. Using this, **we create a 3D point cloud at each step and combine them all together to get the overall 3D point cloud** of the house with stationary objects. We then **discretize this point cloud into 3D map voxels of size 0.25m, we further flatten this 3D map into a 2D map** (location in the 2D map is occupied if there exists a point at that 2D location at any height in the 3D map - after flattening, voxels becomes grid blocks of size 0.25m a side). While doing this traversal, we also get information about the receptacles by detector on the RGB images we receive during this traversal. This ends the walkthrough phase. (this walkthrough process is similar to other works solving the rearrangement problem [1][2], except that it needs to be done only once for any house configuration of stationary objects - walls, doors, tables, etc.). Then, objects are placed at random locations (done using AI2Thor environment reinitialization). This is when the rearrangement phase begins, with the planner taking the following as **input - the map generated in the walkthrough phase, the set of object classes to move, and their goal locations**.\\n\\nThis is not explicitly mentioned in the paper because it is standard practice for rearrangement tasks (Mirakhor et al [4]), but we will add a summary for completeness in the task setup section. \\n\\n> Q3: \\u201cThe definition of the abstract POMDP is not clear. 
What change does the 'object independence' assumption make to the mathematical formulation of the OO-POMDP provided above? \\u2026 . My suggestion is to describe the overall system first (like the initial paragraph of section 4.3), since that provides much needed context to understand your formalism. Alternatively, I would try and make the abstraction system clearer when you define the abstract POMDP. The current presentation is very confusing.\\u201d\\n\\n**The object independence assumption** : defined as \\u201cthe observation and belief of any object do not depend on any other object\\u201d, can be formally stated as **(P(z_i| s_j,z_j, s_i ) = P(z_i | s_i) for all j != i)**. Observation z_i is independent of the states and observations of other objects, given its own state s_i. Similarly, we also assume **P(s\\u2019_i|s_i,s_j,a) = P(s\\u2019_i|s_i,a)** when j != i, i.e., the next state of object i only depends on its own previous state and the action. **These two assumptions help us go from equation 1 to 3, as well as perform belief updates independently for each object (algorithm 2)**. We will add this clarification in the paper to make it clearer. \\n\\nThank you for the suggestion about moving sections. To improve clarity, we will move the initial paragraph of section 4.3 to the beginning of section 4 and then describe the rest of the system.\\n\\n> Q4:\\u201cHow are the object locations in the ground truth image observation discretized to the 2D map? is this done by the environment or the perception system? The understanding is the perception system just runs object detection and grabs the 3D location via the depth map.\\u201d\\n\\nYes, we get the local 3D coordinates (x,y,z) from the depth map w.r.t. Agent. We convert this to global coordinates using the agent\\u2019s global position. We then drop the third dimension to map it to a grid in the 2D map. \\n\\n>Typos: Thank you for pointing out the typos, we will fix them\"}", "{\"title\": \"Response part 2\", \"comment\": \"Response contd.\\n\\n>W2: \\u201cThe paper claims that this the first approach that does object rearrangement in multiple rooms formulating it as a POMDP and the existing methods assume object locations. However, most of the current approaches such as TAMP (Curtis et al. 2022, Shah et al. 2020) operate in continuous state and action settings. The presented approach operates in discrete state and action spaces making the problem much simples. It is unclear how the approach is any different from any existing POMDP solvers?\\u201d\\n\\n**Comparison to other works in the area:**\\nAmong the mentioned papers (both [1] and [2]) operate in a fully observable world - they have no uncertainty over object locations and do not need to account for that in their planning, which is major part of the problem we are solving - planning to reduce this uncertainty and achieve the goal efficiently. Yes, our action space is discrete, but the state space is not. The input to our system is RGBD image and goal locations which are in R^3 (continuous 3D coordinates). We discretize the world using our perception system to make the problem solvable by the planner. \\n\\nExisting POMDP Planners are **limited to solving object search** (Zheng et al, 2022[3]; Zheng et al. [4], 2023) or **object rearrangement in a small region** (Caelen et al [5]; Pajarinen et al [6]). Our work is the first to apply **POMDP to rearrangement in multi-room environments**. 
It is made possible by extending the object-oriented formulation defined in Zheng et al, 2022 to rearrangement tasks and abstracting this formulation to make it efficient in solving rearrangement tasks in multi-room environments. \\n\\n\\n[1] Shah, Naman, et al. \\\"Anytime integrated task and motion policies for stochastic environments.\\\" 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.\\n\\n[2] Curtis, Aidan, et al. \\\"Long-horizon manipulation of unknown objects via task and motion planning with estimated affordances.\\\" 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.\\n\\n[3] Kaiyu Zheng, Rohan Chitnis, Yoonchang Sung, George Konidaris, and Stefanie Tellex. Towards optimal correlational object search. In 2022 International Conference on Robotics and Automation (ICRA), pp. 7313\\u20137319. IEEE, 2022.\\n\\n[4] Kaiyu Zheng, Anirudha Paul, and Stefanie Tellex. A System for generalized 3d multi-object search.In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 1638\\u20131644.IEEE, 2023\\n\\n[5] Caelan Reed Garrett, Chris Paxton, Tomas Lozano-Perez, Leslie Pack Kaelbling, and Dieter Fox. Online replanning in belief space for partially observable task and motion problems. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 5678\\u20135684. IEEE, 2020b\\n\\n[6] Joni Pajarinen, Jens Lundell, and Ville Kyrki. Pomdp manipulation planning under object composition uncertainty. arXiv preprint arXiv:2010.13565, 2020.\"}", "{\"title\": \"Response Part 2\", \"comment\": \"> Q4. Thanks for clarifying the moving of different objects. However, it is a strong assumption. Especially for an object-centric observation space where operating on one object would change the state of another object, and we would know in a POMDP setting. Mathematically speaking, in reality, $pick_i$ is still there. But only it is not accessible. TAMP papers make this clear. I would suggest that authors refer to them.\\n\\nIt is true that in the real world, an object may be impacted while another one is moved. However, our high-level planner still makes this optimistic independence assumption and produces a plan. If, in fact, there are unintended interactions during the execution, e.g., another object falls down while the first object is picked up, the detector is (hopefully) going to detect this, and the planner replans. This approach reflects the optimistic planning people typically engage in rather than getting bogged down in modeling and considering all possible interactions at the planning time. \\n\\n>Q5: \\u201cCan you clarify how a single-room setting is different than a multi-room setting in POMDPs? Here, My original point refers to older POMDP solvers such as POMCP. Especially given a discrete state and action spaces\\u201d\\n\\nTheoretically, a multi-room setting is no different from a single-room setting for a POMDP (assuming you had infinite compute resources, you could solve them both with the basic POMDP formulation). \\n\\nPractically, there are two issues - \\n\\n1. **State space increase**: Let\\u2019s consider a room of size 5*5. If we have 10 objects, the state space, where any object can be in any location, is 25^10 ~ 10^14. But if we have 4 rooms - then we have a house of size 10*10, and state space size goes up to 100^10 ~ 10^20. 
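These counts are a simple back-of-the-envelope calculation; the snippet below just re-derives them (it counts joint object-location assignments only and ignores the robot's own state):

```python
# Back-of-the-envelope check of the state-space sizes quoted above.
num_objects = 10
single_room_cells = 5 * 5    # one 5x5 room
four_room_cells = 10 * 10    # four rooms forming a 10x10 house

print(f"single room: {single_room_cells ** num_objects:.1e} joint object placements")  # ~9.5e13
print(f"four rooms:  {four_room_cells ** num_objects:.1e} joint object placements")    # 1.0e20
```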
As we can see, the state space of a multi-room problem is orders of magnitude larger than a single-room setting.\\n\\nAs we further increase the size of the room and the number of objects, the state space increases exponentially. Algorithms like POMCP[1] can handle this size of state space but need a very large number of MCTS simulations (10^4 to 10^5 (figure 2, POMCP [1])), whereas we use only 500 simulations. This is possible due to our object-oriented belief update (POMCP[1] uses a particle-based approximate belief update, which leads to inaccuracies and needs more simulations, whereas we can do a full Bayes update for each object independently). Hence, our first contribution of extension of OOPOMDP to rearrangement POMDP makes it possible to handle multi-room scenarios. \\n\\n2. Second, **The percentage of the environment visible becomes much smaller** - we can see only 10% of the environment in any given multi-room setting, whereas we can potentially see 50% of the world in single-room settings. This implies we need to perform a lot more actions to interact in the full world (we need to go to each room and explore it).\\n\\nHence, the depth of search required is very high in a rearrangement task - upto 1000 steps of low-level actions (with 4 rooms and 10 objects) to solve a single problem, making each simulation very expensive and also unlikely to find a solution. That is where our second contribution of abstract OO-POMDP becomes extremely useful - with the provided abstraction, the depth of the plan scales linearly with the number of objects and does not depend on the size of the room - thereby speeding up the planning process considerably (planning depth is about 3-4x the number of objects - hence a depth of ~40 for the abstract planner). \\n\\nHence, our contribution of the Abstract OO-POMDP for rearrangement is a vital factor in making a POMDP solution viable in a multi-room setting. \\n\\n[1] Silver D, Veness J. Monte-Carlo planning in large POMDPs. Advances in neural information processing systems. 2010;23\"}", "{\"title\": \"Common response\", \"comment\": \"Common Response:\\n\\nWe thank the reviewers for their thoughtful feedback. We are encouraged they found our work to be novel in addressing the complex multi-room rearrangement problem (R2, R4), with significant contributions through our HOO-POMDP framework, achieving comparable results to perfect knowledge baselines (R1), and effectively handling uncertainties in large multi-room spaces (R4). We appreciate the recognition of our technical contributions in handling imperfect detection and low-level control (R2), and addressing the challenging blocked path scenarios (R3). We are pleased that reviewers found our new Multi RoomR dataset to be valuable (R1, R2, R4), and our supplementary video to be informative (R4). We are glad they recognized our work's importance for long-term generalist robots (R3) and its significance to the field (R2). We address reviewer comments below and will incorporate all feedback.\\n\\n**Proposed MultiRoomR Dataset details**\\n\\n1. Size of dataset: 300 room configurations, ten rearrangements each\\n2. Types of objects selected: Present in the appendix section A.1.2 (table 2)\\n3. The rationale behind selecting them: Almost all object types in AI2Thor are selected. Objects that are too small are removed, as they are undetectable even from a close distance. \\n4. Room size information:\\\\\\n a. 200 room configurations of 2 rooms. (50% contain blocked path) \\n b. 50 room configurations of 3 rooms. 
(100% contain blocked paths) \\n c. 50 room configurations of 4 rooms. (100% contain blocked paths). \\n\\n5. Criteria for object placement:\\\\\\n a. Criteria 1: At least one object needs to be moved in every room This ensures that the agent must explore all rooms to complete the task. \\n b. Criteria 2: For blocked goal and swap cases: We generate scenes where one object blocks the goal location of another object or two objects block each other\\u2019s goal (swap). \\n c. Criteria 3: For blocked path scenes, the location of the object blocking the path is chosen to maximize the area of the house that is inaccessible. \\n\\nWe will add all these dataset details in the appendix.\"}", "{\"comment\": \"Thank you, authors, for your response.\", \"from_the_response\": [\"It is still unclear what the high-level actions are and what makes something a subgoal.\", \"The authors seem to have missed the point. The question is: At the execution time, given the state is unknown and only access to observation is available, how are actions determined to have failed? A failure can only be known if the actual state of the system is known - making it an MDP and not a POMDP.\", \"The third point raises a few more questions (and sorry for raising them now): How does A* work for a POMDP? How is RL model trained in a POMDP?\", \"Thanks for clarifying the moving of different objects. However, it is a strong assumption. Especially for an object-centric observation space where operating on one object would change the state of another object, and we would know in a POMDP setting. Mathematically speaking, in reality, $pick_i$ is still there. But only it is not accessible. TAMP papers make this clear. I would suggest that authors refer to them.\", \"(From the second part) Can you clarify how a single-room setting is different than a multi-room setting in POMDPs? Here, My original point refers to older POMDP solvers such as POMCP. Especially given a discrete state and action spaces.\", \"Because of all these issues, I would like to keep my current score as it is. Looking forward to response from the authors.\"]}", "{\"comment\": \"I thank the authors for their responses and appreciate their effort.\", \"common_sense_priors\": \"Yes, for an unseen object, the planner starts execution based on a randomly sampled location. While common-sense priors can help, a common application of rearrangement is cleaning homes, and untidy homes may only sometimes follow common-sense priors. Our system can handle all cases where objects can be anywhere with similar efficiency. It is also easy to add common-sense priors to our framework (by changing initial belief from random to initialized based on common-sense priors) if we have more information about the domain we operate in (for ex: warehouse).\\n\\nIf it is easy to integrate the belief model into the pipeline, I request the authors to kindly show some results for the method with commonsense prior based initial belief v/s random initial belief.\\n\\n Comparison to SOTA methods It is important to note that our system addresses a variant of the multi-object rearrangement problem that differs in key aspects from those tackled by prior works (Gadre et al.[5], Sarch et al.[1], Trabucco et al.[6], and Mirakhor et al.[4]). While our approach leverages prior knowledge of target object classes, this design choice enables broader generalization capabilities. 
In existing approaches, where such information is not provided, agents must perform a walkthrough phase for each new goal configuration to identify movable objects. In contrast, our formulation requires only a single initial walkthrough to map stationary objects in the environment. Subsequently, our system can efficiently handle multiple goal configurations without additional walkthroughs, significantly enhancing its adaptability to diverse scenarios. This fundamental difference in problem formulation makes direct performance comparisons potentially misleading despite operating in similar environments.\\n \\n We acknowledge this limitation in direct comparability but believe our results demonstrate the effectiveness of our approach in solving a practically relevant variant of the rearrangement problem. We will revise Section 6 to make these distinctions clearer and better contextualize our contributions relative to prior work\\n\\nI understand that the method proposed can perform multiple goal configuration. However, from my understanding it is clear that the method can perform single goal configuration as well and that is what the existing methods do. Making the experimental setup for single goal configuration, the comparison study will tell us how the proposed method compares to the existing methods in a traditional rearrangement setting with goal stage walkthrough and shuffle phase rearrangement.\", \"clarification_on_object_classes\": \"- We do not restrict which objects can be moved - Rather, for each specific rearrangement task, the system needs to know which object classes are targets - This allows flexibility while maintaining efficient exploration and planning.\\n \\n For the claim about 60%, it refers to MiraKhor et al., table 1, where #V / #O gives the initially visible percentage, which is 60% for all settings in their settings. We will add this information in the paper along with the visibility percentage of objects in our datasets.\\n\\nSo the tradeoff for not performing each goal configuration walkthrough is the prior knowledge of target object classes? Please add the information for 60% visibility setting in the paper.\", \"new_metrics\": \"We will also add a separate table in the appendix with results on how much time was taken to complete the tasks.The distance the agent took to solve the problem is directly proportional to the total actions taken and, hence, not very beneficial in providing any new insight into the method's effectiveness.\\n\\nI did not find the results in the Appendix. Is the paper PDF updated? If no, kindly update the PDF.\", \"scalability\": \"We currently have results for 2,3,4 rooms with the same number of objects (10) in the MultiRoomR dataset. RoomR and ProcThor dataset results also show this - both datasets have 5 objects but are different in the number of rooms (1-2). This shows how the method behaves with different numbers of rooms with the same number of objects.\\n\\n We will provide results for larger numbers of objects and add them in the paper (up to 20)\\n\\nI could not find the results in the paper, kindly update the PDF.\"}", "{\"title\": \"Response part 1\", \"comment\": \"We thank the reviewer for these constructive comments.\\n\\n\\n>W1.1: \\u201cIt is unclear what kind of hierarchies are being used in the paper after mentioning it multiple times throughout the paper. 
The paper mentions about high-level subgoals and low-level actions, however, never defines what constitutes as a high-level subgoal and what actions are low-level actions? The problem definition describes actions but doesn't make the distinction. The paper should clarify this.\\u201d\\n\\n**Low-level action and sub-goal clarification:**\\nThe paper currently mentions in **Line 168** that the low-level actions are picked from $A_{s}$ ($A_s$ is defined in the previous section on **Line 142**). For clarity, we will add the information that these are the low-level actions in line 142. The actions output by the abstract planner are the sub-goals for the low level control. The relation between them is mentioned in lines 380 onwards. We will add this to the definition of abstract POMDP for further clarity.\\n\\n>W1.2: \\u201cThe paper assumes that whether an action is executed successfully or not is known. How does it ensure this in a POMDP setting where there is a probabilistic observation model.\\u201d\\n\\n**This is an optimistic model of planning** that assumes the abstract actions will succeed and plans accordingly. The failures are handled through replanning at execution time. We get the information about action failure (action success/failure part of the observation is deterministic; object detection is probabilistic) from the environment and update the current state accordingly - since we re-plan at each step, the planning now starts with the state where the action has failed and hence our system can solve problems where actions can fail. \\n\\n>W1.3: \\u201cThe transition model includes a \\\"PickPlace\\\" action. Is it a single action? If it is does it mean the agent can take this and the object magically transforms to the target location?\\u201d\\n\\n**PickPlace details:**\\n**No**, the agent cannot magically transform the object to the target location. The PickPlace is an abstract action defined for the abstract POMDP planner. **There exists a low-level policy that takes this single PickPlace action** (output by the planner) and executes low-level actions to achieve it. The details are provided from line 390 onwards on how the policy uses the information from the abstract action to come up with a sequence of low-level actions to achieve the pick, move, and place. \\n\\n>W1.4: \\u201cThe approach compares in settings where the agent has to remove obstacles in order to move from one location to another. This is good! However, how is known that picking some action would free up that space? It requires more than a object oriented state space which the paper doesn't clearly explain.\\u201d\", \"the_state_of_an_object_is_represented_with_the_following_information\": \"(loc_i, pick_i, place_locs, is_held, at_goal). When the path to an object obj_1 is blocked, there will be no pick_i locations present (hence, the planner will not have picking of obj_1 as part of the plan). When an object obj_2 is picked up, **the transition model updates the state that this object is no longer occupying this location**, the state abstraction system's sampler can now sample location pick_i from where obj_1 can be picked up, thereby allowing the planner to have picking of obj_1 as part of the plan. (more details in section 4.3, generating abstract state).\\n\\n>W1.5: \\u201cIt is unclear how the approach collects initial belief of the object location to even build any policies?\\u201d\\n\\n**Initial belief and behaviour:**\\n**We start with uniform belief over the entire state**. 
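As a purely schematic illustration of this kind of per-object location belief (the grid size and detection probabilities below are made-up placeholders rather than the values used in our system, and each object's belief is maintained independently), the update described next can be sketched as:

```python
import numpy as np

num_cells = 400                                  # e.g., a 20 x 20 grid of candidate locations
belief = np.full(num_cells, 1.0 / num_cells)     # uniform prior over the object's location

def bayes_update(belief, visible_cells, detected_cell=None, p_detect=0.9, p_false=0.05):
    """One measurement update for a single object's location belief.

    visible_cells: indices of cells currently inside the agent's field of view.
    detected_cell: index where the detector fired, or None if the object was not seen.
    """
    likelihood = np.ones_like(belief)
    if detected_cell is not None:
        likelihood[:] = p_false                        # detections elsewhere are unlikely
        likelihood[detected_cell] = p_detect           # greatly increase belief at the viewed location
    else:
        likelihood[visible_cells] = 1.0 - p_detect     # reduce belief at cells that were visible but empty
    posterior = belief * likelihood
    return posterior / posterior.sum()
```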
Hence, in the very beginning, the planner is operating with no information, but as it plans, executes, and receives observation, its belief about object location keeps improving (at any given step, if an object is detected, then our belief about the object's viewed location is greatly increased. If an object is not seen, then the belief that the object exists in any of the visible locations is reduced). (Algorithm 2, line 7 shows this update. P(z_i | s_ij) is detailed in Appendix A.1.2)\"}", "{\"summary\": \"The approach presents a POMDP based hierarchical planning approach for object rearrangement in multi-room settings. As far as I understand, the approach starts with high-level and low-level actions and performs hierarchical POMDP planning. Before the planning, the approach runs a SLAM equivalent and builds a map of the environment. After, it performs POMDP planning and executes actions. The paper evaluates the approach in AI2THOR multi-room settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Object rearrangement, i.e., object pick and place tasks are important. The idea that only a subset of objects are visible and the overall setting is a POMDP is important for long-term generalist robots.\\n\\nThe also addresses problems with block paths where it has to re-arrange objects in order to free up the space in order to move. This problem is rarely handled by many approaches as it is a hard problem.\", \"weaknesses\": [\"The paper suffers from the following weaknesses:\", \"**Lack of clarity**: A few claims in the paper are not clear and unsubstantiated. E.g.,\", \"It is unclear what kind of hierarchies are being used in the paper after mentioning it multiple times throughout the paper. The paper mentions about high-level subgoals and low-level actions, however, never defines what constitutes as a high-level subgoal and what actions are low-level actions? The problem definition describes actions but doesn't make the distinction. The paper should clarify this.\", \"The paper assumes that whether an action is executed successfully or not is known. How does it ensure this in a POMDP setting where there is a probabilistic observation model.\", \"The transition model includes a \\\"PickPlace\\\" action. Is it a single action? If it is does it mean the agent can take this and the object magically transforms to the target location?\", \"The approach compares in settings where the agent has to remove obstacles in order to move from one location to another. This is good! However, how is known that picking some action would free up that space? It requires more than a object oriented state space which the paper doesn't clearly explain.\", \"It is unclear how the approach collects initial belief of the object location to even build any policies?\", \"**Lack of Novelty**: The paper claims that this the first approach that does object rearrangement in multiple rooms formulating it as a POMDP and the existing methods assume object locations. However, most of the current approaches such as TAMP (Curtis et al. 2022, Shah et al. 2020) operate in continuous state and action settings. The presented approach operates in discrete state and action spaces making the problem much simples. It is unclear how the approach is any different from any existing POMDP solvers?\"], \"references\": \"Shah, Naman, et al. \\\"Anytime integrated task and motion policies for stochastic environments.\\\" 2020 IEEE International Conference on Robotics and Automation (ICRA). 
IEEE, 2020.\\n\\nCurtis, Aidan, et al. \\\"Long-horizon manipulation of unknown objects via task and motion planning with estimated affordances.\\\" 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The submission presents a method for solving multi-room object rearrangement problems based on planning in a hierarchical object-oriented POMDP and evaluates in the AI2-THOR environment. It also introduces MultiRoomR, a new set of multi-room object rearrangement task instances.\\n\\nAccording to the reviews, the paper has the following strengths:\\n\\n- The proposed method works well, despite partial object observability and without assuming perfect navigation, motion planning or manipulation, on many challenging task instances. This includes instances from the newly introduced MultiRoomR set, which are generally harder than the existing ones.\\n\\n- The new MultiRoomR set is a welcome contribution to the research community.\", \"the_weaknesses_identified_by_the_reviewers_are\": \"- Arguable novelty. Methodologically, the paper's approach is an extension of object-oriented POMDPs from object search tasks to rearrangement in multi-room environments.\\n\\n- Clarity and (lack of) comparisons to the existing methods. These two got partly addressed during the discussion.\\n\\nThe metareviewer finds that in addition to the submission's pros and cons surfaced in the reviews and the ensuing discussion, an important additional consideration in this paper's case is the issue of its contributions' scope. The proposed method and dataset are structurally engineered to solve a very specific class of embodied AI tasks, multi-room object rearrangement under partial observability. It is unclear which of the proposed method's aspects can be applied to other tasks, let alone how to extend the entire method to them. But this class of tasks is entirely synthetic. The research community uses its instances as benchmarks for evaluating embodied AI systems' planning and reasoning capabilities and focuses on the instances artificially designed to be hard from the planning and reasoning standpoint. These tasks were originally inspired by real-life object rearrangement but are very far from the rearrangement instances an embodied AI agent is likely to encounter in the real world. This gap is fine if the intent is to use these tasks for the usual purpose of a benchmark, i.e., comparing the performance of different methods, but is problematic from the standpoint of crafting a method for solving this specific benchmark, since it's difficult to make a claim that such a method solves, or will ever be able to solve, a real problem. The paper's approach unintentionally illustrates this issue: separating rearrangement in two distinct stages of information gathering and object manipulation is OK when solving an artificial benchmark but would be very unnatural in reality. It is also hard to imagine that, if faced with a rearrangement problem, an embodied AI agent would switch to using this highly specialized solver rather than a more general reasoning module. \\n\\nThese considerations make the metareviewer recommend rejection, since the paper's contribution is essentially engineered to solve and extend a benchmark and doesn't seem to generalize beyond that benchmark. 
This contribution can still be of interest at a venue that focuses specifically on benchmarks like this, but its scope is too narrow for ICLR.\", \"additional_comments_on_reviewer_discussion\": \"The discussion has helped addresses some concerns around clarity and evaluation. With regards to novelty, the discussion has surfaced useful details but overall confirmed the reviewers' original opinions that the methodological contribution is incremental.\"}", "{\"title\": \"Response\", \"comment\": \">1: \\u201cIf it is easy to integrate the belief model into the pipeline, I request the authors to kindly show some results for the method with commonsense prior based initial belief v/s random initial belief.\\u201d\\n\\nWhile it is easy to integrate a common-sense prior when available, we do not have a common-sense sense-prior available for any of the datasets that we have created. The MultiRoomR dataset was created with object placement being random, and hence, the only prior in this case is uniform random distribution. \\nThere is no easily available prior dataset for the RoomR or the ProcThorRearrangement dataset.In [2], they are learning in a different environment [habitat datasets] and, hence, are not applicable to the RoomR dataset. In [1], they learn an Out of Place detector to detect objects out of place and use common sense prior only for goal locations - where the objects must end up and not where could be in an untidy house. Priors are learned over RoomR objects in [3] and [4], but their code and datasets are not available (we did not get them from the authors after multiple requests). Hence, to test with a prior, we would have to learn one from scratch from the dataset, which is non-trivial, , [3] and [4] (they learn specialized networks for it), and beyond the scope of our work. \\n\\n>2: \\u201cI understand that the method proposed can perform multiple goal configurations. Making the experimental setup for single goal configuration, the comparison study will tell us how the proposed method compares to the existing methods in a traditional rearrangement setting with goal stage walkthrough and shuffle phase rearrangement.\\u201d\\n\\n**Yes**, we can compare our results with existing work on the RoomR dataset (we cannot compare on other datasets as we do not have access to their code or dataset). We have run our system on the RoomR dataset, and we have results from [3] and [4] on the RoomR dataset. \\n\\n| Method | Ours | Mirakhor et al[3] | Mirakhor et al[4] |\\n|:----------------------:|:----:|:-------------------:|:-------------------:|\\n| | | [Table 2] | [Table 4] |\\n| | | | |\\n| Scene Success Rate (%) | 49 | 43 | 34 |\\n\\n\\nWe can see that our success rate is slightly higher (49 vs. 43 and 34), but this is with the caveat that we use more information. \\n\\n\\n>3: \\u201cSo the tradeoff for not performing each goal configuration walkthrough is the prior knowledge of target object classes? Please add the information for 60% visibility setting in the paper.\\u201d\\n\\n**Yes**. Our formulation makes the problem easier by providing the object target classes for the scene being solved but this information helps avoid the need of walkthrough each time. Citation has been added to the paper for 60%. \\n\\n>4 and 5 - Additional Results\\n\\n**Table 1** has updated numbers with results on larger number of objects (15 and 20 objects in 3-4 rooms setting). 
**Table 3** (in Appendix A.4) has the timing information.\\n\\n>6: \u201cI understand that TSP and OR methods are fully observable in nature. But can they leverage the belief and abstraction module to plan even under partial observability. I mean can the authors decouple the aforementioned modules from their pipeline and make it work with any downstream planner. Also if partial observability is the limitation, how does the proposed method compare to TSP and OR under full observability? I would also like to see the impact of the look-ahead depth on the planning performance.\u201d\\n\\nMCTS over belief states is well-suited for our needs because the search can be interleaved with execution and does not need an exhaustive look-ahead in depth or width. It is possible that it can be replaced with some OR methods that have similar properties, but we could not think of any obvious candidates. With full observability, the proposed method reduces to MCTS over world states (at the high level). Some solutions to TSP, e.g., simulated annealing, might be suitable and competitive with MCTS. However, we note that partial observability is a fundamental aspect of the problem we address, and comparisons under full observability are not very relevant.\\n\\nFor results at different depths: We have provided results for MCTS depth 1 in **Table 3 (Appendix A.4)**. We will have results for **depths 2 and 4 ready for the camera-ready version**.\\n\\n>7: \u201cNearly 70% of the dataset is composed of only 2 room scenarios, similar to ProcThorRearrangement. Only new addition, the dataset brings to the community is the blocked path scenario.\u201d\\n\\nYes, a large percentage of the dataset consists of 2-room settings similar to ProcThorRearrangement, but all of the 2-room settings in our dataset contain 10 objects, whereas ProcThorRearrangement rooms contain 5 objects. Also, the dataset has now been greatly expanded to include more scenes with a larger number of rooms as well as a larger number of objects [up to 20]. Details in Appendix A.2.\"}", "{\"comment\": \"Thank you for the clarifications.\\n\\nGoing over my main concerns with the paper, I will evaluate the authors' rebuttal and provide an updated review statement. \\n\\n1. Clarity. The authors provide additional detail in the rebuttal and update the paper. I appreciate these changes, and I think it's a stronger paper for it. I still maintain that this paragraph (Section 3, subheading \\\"Challenge\\\") is unclear:\\n\\n> First, the agent traverses the world, gains location information about all receptacles, and stores it in a list R = {r_i, i = 1, ..., k}, representing their centroids. The agent also builds a 2D map (M2D) of the world for navigation during rearrangement. This is done by discretizing the world into grids of size 0.25m. The agent also builds a 3D map (M3D) and stores it. Then, a set of objects are placed at random locations in the environments. The agent is put back into the environment at a random location using the startloc action.\\n\\nand it would be better if the authors modified this part of the response and updated that paragraph. \\n\\n> Overall Task setup and map building: Rearrangement is done in 2 phases. Walkthrough phase and rearrange phase. The walkthrough phase is meant to get information about stationary objects. The 2D occupancy map is generated in this phase, as well as the corresponding 3D Map. First, we get the size of the house (width and length) information from the environment. 
Then, we uniformly sample points in the environment then take steps to reach these locations (if possible - some might be blocked). This simple algorithm ensures we go all around the house and see every part of it. At each of the steps involved in reaching these locations, we receive the RGB and Depth image from the environment. Using this, we create a 3D point cloud at each step and combine them all together to get the overall 3D point cloud of the house with stationary objects. We then discretize this point cloud into 3D map voxels of size 0.25m, we further flatten this 3D map into a 2D map (location in the 2D map is occupied if there exists a point at that 2D location at any height in the 3D map - after flattening, voxels becomes grid blocks of size 0.25m a side). While doing this traversal, we also get information about the receptacles by detector on the RGB images we receive during this traversal. This ends the walkthrough phase. (this walkthrough process is similar to other works solving the rearrangement problem [1][2], except that it needs to be done only once for any house configuration of stationary objects - walls, doors, tables, etc.). Then, objects are placed at random locations (done using AI2Thor environment reinitialization). This is when the rearrangement phase begins, with the planner taking the following as input - the map generated in the walkthrough phase, the set of object classes to move, and their goal locations.\\n\\nThe response is a much clearer detailing of the actual setup. While the authors suggest they have left this out from the paper because it is standard practice, I believe it hurts the readability of the paper. \\n\\nAdditionally, thanks for fixing the nits, but I did notice many more where there's are missing periods at the ends of many sentences, in line 227, 230, 331, 236, 264 (in the caption), etc. I recommend the authors do a thorough copy-edit and grammatical check of the paper before a camera-ready version--whether for this conference or another. \\n\\n2. Concern around baselines. \\n a. Missing citations for claims. Thanks for providing the citations and improving the discussion. This satisfies my critique. \\n b. Comparison to Mirakhor et. al. Ah, it is unfortunate you were unable to access the code or the datasets after reaching out. I will not hold that against this work. Upon a brief check, I was also unable to locate the code for the paper, including released supplementary material. \\n\\n3. Discussion of limitations. The authors address the example I provided, which helps flesh out the limitations sections, but that is not sufficient for addressing the critique. \\n\\n> This assumption is fairly strong, and presents a stumbling block in environments where object classes might not be fully known [...]\\n\\nI would like the authors to expand upon their analysis here. What could one do if the state wasn't factored? How would they imagine future work (I note that Section 7 is titled 'Conclusion and Future Work' but does not mention any future work). \\n\\nRegardless, the proposed updates to the paper in the author response do help increase the clarity of the paper somewhat and I am increasing my score to reflect this. However, there are still improvements that can be made in this direction.\", \"note\": \"that the PDF has not been updated to reflect these changes. I would expect the authors ensure an updated copy is present soon during the rebuttal period.\"}" ] }
BgYbk6ZmeX
What Matters When Repurposing Diffusion Models for General Dense Perception Tasks?
[ "Guangkai Xu", "Yongtao Ge", "Mingyu Liu", "Chengxiang Fan", "Kangyang Xie", "Zhiyue Zhao", "Hao Chen", "Chunhua Shen" ]
Extensive pre-training with large data is indispensable for downstream geometry and semantic visual perception tasks. Thanks to large-scale text-to-image (T2I) pretraining, recent works show promising results by simply fine-tuning T2I diffusion models for a few dense perception tasks. However, several crucial design decisions in this process still lack comprehensive justification, encompassing the necessity of the multi-step diffusion mechanism, training strategy, inference ensemble strategy, and fine-tuning data quality. In this work, we conduct a thorough investigation into critical factors that affect transfer efficiency and performance when using diffusion priors. Our key findings are: 1) High-quality fine-tuning data is paramount for both semantic and geometry perception tasks. 2) As a special case of the diffusion scheduler by setting its hyper-parameters, the multi-step generation can be simplified to a one-step fine-tuning paradigm without any loss of performance, while significantly speeding up inference. 3) Apart from fine-tuning the diffusion model with only latent space supervision, task-specific supervision can be beneficial to enhance fine-grained details. These observations culminate in the development of GenPercept, an effective deterministic one-step fine-tuning paradigm tailored for dense visual perception tasks exploiting diffusion priors. Different from the previous multi-step methods, our paradigm offers a much faster inference speed, and can be seamlessly integrated with customized perception decoders and loss functions for task-specific supervision, which can be critical for improving the fine-grained details of predictions. Comprehensive experiments on a diverse set of dense visual perceptual tasks, including monocular depth estimation, surface normal estimation, image segmentation, and matting, are performed to demonstrate the remarkable adaptability and effectiveness of our proposed method. Code: https://github.com/aim-uofa/GenPercept
[ "Transfer Learning", "Diffusion Models", "Visual Perception" ]
Accept (Poster)
https://openreview.net/pdf?id=BgYbk6ZmeX
https://openreview.net/forum?id=BgYbk6ZmeX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wG4m4aNz4e", "ux6cg4IwfY", "r5xf78IhS2", "nUOPZvoUgW", "mc1wdvx7CA", "lg93VtiJvJ", "hhSodrWcu5", "h0XUgZWGq7", "gzbnoX1E7R", "foP2WfnPpY", "eYNXkeOxO0", "dfRM9MlENz", "cKWpnBbcnr", "Yqc6YpfiGS", "VYYdyk3jiK", "V6lkPAFjtx", "UpxGDcZ9lX", "LkIIlueQ2Q", "JabSCuDyPO", "GpmyzCvPAZ", "GBRN2MKeif", "FDMscjDyjs", "92O3TsDPE0", "6PL0XAOztp", "1ailxNLGYH", "0tLqMlPiSL" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732107438183, 1734490724962, 1732344434762, 1732109480622, 1732803753526, 1732536620188, 1732723024289, 1732554717660, 1732541504027, 1737523593325, 1732108649616, 1730474519368, 1730628108760, 1732289308318, 1729829164528, 1732109967970, 1732393163322, 1733023245325, 1732107047518, 1732106429468, 1732543361171, 1732109842870, 1732723185597, 1732361553560, 1730702430505, 1732392244628 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Area_Chair_BWeU" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_7LHd" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_6WHQ" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_nfdP" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_7LHd" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_6WHQ" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_7LHd" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_QgJw" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_7LHd" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_QgJw" ], [ "ICLR.cc/2025/Conference/Submission3737/Reviewer_nfdP" ], [ "ICLR.cc/2025/Conference/Submission3737/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer 6WHQ\", \"comment\": \"Thank you so much for your thoughtful recognition of our goals and strategy. The questions and critiques you\\u2019ve shared are crucial for our progress, and we are committed to addressing them comprehensively in the following sections.\\n\\n> W1: The biggest weakness is that all conclusions are based on experiments using large synthetic datasets trained on deep estimation tasks. I have two concerns. First, would these conclusions still hold if the datasets were small? 
Second, would these conclusions still hold if the dataset was completely real?\\n\\nThe training dataset in our study includes 50K Hypersim images and 40K Virtual KITTI images, whose data volume is relatively small compared to traditional methods such as DPT, which utilizes 1.4M images. To address this, we conducted an additional experiment exploring the impact of data volume, with results detailed in the latest version of the supplementary. These experiments demonstrate some robustness in training with respect to data volume. \\n\\nFor realistic datasets, we provide a fair comparison between models trained exclusively on synthetic datasets and those trained on real datasets. Quantitative and qualitative results are presented in Table 5 of the main paper and Figure 3 of the supplementary. Models trained purely on real data perform worse in terms of both quantitative metrics and visualization quality compared to their synthetic-trained counterparts, highlighting the effectiveness of synthetic data.\\n\\n> W2: It is not appropriate to use the P3M 10K dataset to verify the robustness of GenPercept in image matting. This is because the human matting dataset is extremely similar to the dichotomous image segmentation task if only the boundary regions are considered and the deterministic foreground regions are ignored. Instead, if the authors wish to validate its robustness in the image matting task, more types of objects (e.g., semi-transparent objects) should be included and then its performance should be observed.\\n\\nWe appreciate your valuable suggestion. Besides human matting, we also train GenPercept on a more general image matting task on the Composition-1k[a] dataset, and the qualitative results have been updated in Figure 11 of the supplementary. It shows robustness on more types of objects such as semi-transparent objects, hollow objects, etc. Besides, the human matting GenPercept model shows much more robustness on general objects. Please see Figure 10 of supplementary for details.\\n\\n> Q1: What would be the comparison of generalization for OOD data for models trained using synthetic and real data?\\n\\nThanks so much for your suggestion. We compare the generalization performance of models trained on synthetic and real data for out-of-distribution scenarios, and the quantitative results are shown in Figure 3 of the supplementary. The model trained on synthetic data achieves comparable robustness to that trained on real data, but achieves better performance on transparent objects and geometric details. Generally, our GenPercept can generalize well to diverse scenes unseen during training. For example, the surface normal estimation of animation images in Figure 6 of the supplementary, and the keypoint estimation of animation and cat images in Figure 4 of the supplementary.\\n\\n[a] Xu, Ning, et al. \\\"Deep image matting.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.\"}", "{\"metareview\": \"The paper introduces GenPercept, a novel approach leveraging diffusion models to enhance dense visual perception tasks. The work claims improvements in inference speed and detail of predictions, substantiated through a series of experiments across multiple tasks.\\n\\nThe paper stands out for its rigorous experimental validation and the clear demonstration of the versatility and efficiency of diffusion models in dense perception contexts. 
The design space analysis and the results presented are commendable and represent the paper's key strengths.\\n\\nConcerns are raised about the heavy reliance on extensive synthetic datasets, leading to questions about the model's performance with smaller or real-world data. The initial comparison with existing methods was not entirely convincing, and the paper lacked a quantitative evaluation of model efficiency. Additionally, some theoretical claims, such as \\\"ground truth leakage,\\\" were not sufficiently clear or substantiated.\\n\\nThe decision leans toward acceptance due to the paper's robust experimental work and the novelty of applying diffusion models in this manner. Despite this, the unresolved issues around the real-world applicability and some theoretical ambiguities slightly detract from the paper's impact.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewers raised critical points during the discussion:\", \"Reviewer nfdP was concerned about the sim2real gap.\", \"Reviewer 6WHQ felt the performance comparison with existing methods was lacking.\", \"Reviewer 7LHd noted the absence of quantitative model efficiency evaluation.\", \"Reviewer QgJw pointed out unclear theoretical claims.\", \"In response, the authors:\", \"Offered comparisons between synthetic and real data to address nfdP's concerns, though questions about robustness remain.\", \"Updated the manuscript to better compare with state-of-the-art methods, alleviating 6WHQ's concern.\", \"Included a runtime analysis to satisfy 7LHd's call for model efficiency evaluation.\", \"Clarified theoretical aspects and revised their explanation of \\\"ground truth leakage,\\\" somewhat addressing QgJw's criticism.\", \"In reaching the final decision, I considered the extent to which the authors' responses alleviated the reviewers' concerns. While not all issues were fully resolved, the authors demonstrated a commitment to enhancing the clarity and rigor of their work, contributing to the decision to accept.\"]}", "{\"title\": \"Response to reviewer 7LHd\", \"comment\": \"Thank you for your fast and valuable feedback. We sincerely appreciate your thoughtful comments and have carefully addressed the concerns raised, as detailed below.\\n\\n\\n> Regarding W1: I thank the authors for the explanation. However, it is likely that the default Marigold and DSINE settings are not optimal for DepthFM and GeoWizard, so I\\u2019m not convinced that using the Marigold and DSINE evaluation settings to generate the evaluation scores for these methods is completely fair. To present a complete picture to the reader, I believe it is fair to (in addition to the reproduced scores that are already in Tab. 6 and Tab. 7) also present the originally reported scores of DepthFM and GeoWizard, with a note about the difference in evaluation settings.\\n\\nWe fully acknowledge and agree on the importance of reporting the origin scores. We sincerely appreciate the valuable suggestion and have revised the Table 6 and Table 7 accordingly.\\n\\n\\n\\n> Regarding W4: I thank the authors for including the runtime analysis in the revised paper. This demonstrates the positive impact of the one-step inference procedure on the model\\u2019s speed. However, it is not yet clear how the prediction speed compares to that of existing models listed in Tab. 6 and Tab. 7. 
Could the authors report the runtime for some of these state-of-the-art methods, such that is clear how GenPercept compares in terms of efficiency?\\n\\n\\nThank you once again for your valuable suggestion. We fully agree on the importance of comparing the GenPercept inference speed with existing state-of-the-art methods. Accordingly, we have incorporated this comparison into Table 1 of the supplementary materials. Compared to existing state-of-the-art diffusion-based methods, our proposed GenPercept achieves a notable improvement in inference speed, attributed to the innovative one-step inference paradigm and the customized head. While our method demonstrates inference speeds comparable to Metric3Dv2 and DSINE, it falls behind DepthAnythingV2. Note that the superior performance of DepthAnythingV2 is facilitated by its training on a relatively lightweight model, bolstered by extensive labeled and unlabeled datasets, and supported by substantial computational resources distributed across multiple GPUs.\\n\\n\\n\\n> Regarding W5: I do not completely follow what the authors mean by \\u201cground truth leakage\\u201d as mentioned in the rebuttal and L182 of the revised manuscript. Why is it a bad thing that the blended input image still contains part of the ground truth during the forward diffusion process, e.g., at timestep t=200? Why does this lead to decreased depth estimation performance? This is not clearly explained.\\n\\n\\nIn each iteration of training diffusion models for visual perception, a timestep t is sampled to control the proportion of noise added to the ground truth latent, and the network is trained to recover a clean ground truth latent from the noisy latent. For smaller timesteps like t=200, as illustrated in Figure 2(a), the input to the network retains significant ground truth information, making it comparatively easier to recover the clean ground truth latent than attempting recovery in the absence of any ground truth information. In contrast, the experimental setting of GenPercept involves inputting purely a noisy latent devoid of ground truth information, presenting a greater challenge. This hypothesis can be proved by the experiments summarized in Table 1, where blending the ground-truth latent with an increasing proportion of noise consistently leads to stable performance improvements during training. We have updated the related modification in Section 3.\"}", "{\"title\": \"Part 2 of Response to reviewer 7LHd\", \"comment\": \"> W4: One of the main benefits of the one-step inference procedure of GenPercept is the improved efficiency compared to Marigold and other diffusion-based methods. The efficiency is also mentioned as a key characteristic of the method (L539). However, the paper does not explicitly evaluate the efficiency of the model. Therefore, it is not clear what the exact speedup is over existing diffusion-based models, or how the efficiency compares to the other methods reported in Tab. 6, 7, 8, and 10. The paper would be significantly stronger if the efficiency of GenPercept was shown quantitatively, by reporting the inference time of GenPercept and other methods.\\n\\nWe are very grateful for your valuable suggestion. The runtime analysis has been updated and highlighted in the \\\"Runtime Analysis\\\" section of the supplementary. Our proposed one-step inference paradigm demonstrates a runtime that is _94% and 57% less than_ those of multi-step methods with ensemble and without ensemble, respectively. 
Besides, by incorporating a customized head such as a DPT head, both runtime and GPU memory requirements are _further reduced by 27%_ without compromising performance, maintaining a competitive level of efficiency.\\n\\n\\n> W5: L187-L188 states that the results indicate that increasing the \u2018training challenges\u2019 leads to improved model performance. However, the paper does not provide any explanation for this. Why does the performance increase when the training task becomes more challenging? This seems counterintuitive. The value of the paper would improve if the authors provided an explanation (or hypothesis) for this phenomenon, as this could provide insights into the way these diffusion models learn dense perception tasks.\\n\\n**The \\\"training challenge\\\" here refers to \\\"preventing ground truth leakage\\\"**. During training, the input of the diffusion model is derived from the forward diffusion process, a linear blending of noise and ground truth with different timesteps and noise forms, as illustrated in Figure 2(a). For small timesteps like \\\"t=200\\\", _the blended input image still contains part of the ground truth information_ (e.g., the purple color of the surface normal) and may lead to a \\\"ground truth leakage\\\" problem. \\n\\nOn the other hand, the blending proportion is controlled by the beta values ($\\beta_{start}$, $\\beta_{end}$) of the diffusion model scheduler. As shown in Figure 2(b) and Figure 2 of the supplementary, _increasing the beta values increases the noise proportion and decreases the ground truth proportion_. Training with larger beta values consistently achieves better performance for both Gaussian noise and RGB noise, as supported by Table 1 and Figure 2(c). The detailed revisions have been updated and highlighted in red in Section 3.\\n\\n> W6: The paper does not clearly define $\\beta_{start}$ and $\\beta_{end}$, even though first experiment and finding (Tab. 1 & L204) are fully focused around changing the values of these parameters. Fig. 2(b) illustrates that they impact the proportion of noise that is added to the latents in different time steps, but an exact definition is not provided. In L107, the paper refers to the supplementary material for more details, but the supplementary material only mentions $\\beta_{s}$ and $\\beta_{t}$. The clarity of the paper would improve if a clear definition of $\\beta_{start}$ and $\\beta_{end}$ was provided.\\n\\nWe are very grateful for your guidance. The proportion of Gaussian noise is determined by $\\alpha_t$, which is computed as a cumulative product once the scheduler of $\\beta$ values is known. The scheduler is parameterized by two hyperparameters, $\\beta_{start}$ and $\\beta_{end}$, which define the $\\beta$ values at t=0 and t=1000, respectively. For an arbitrary timestep $s$, $\\beta_s$ is computed by linearly interpolating between $\\sqrt{\\beta_{start}}$ and $\\sqrt{\\beta_{end}}$ and then squaring each interpolated value. 
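To make this definition concrete, the following minimal sketch (not code from our released implementation; the specific $\\beta_{start}$ and $\\beta_{end}$ values shown are only illustrative defaults) reproduces the scaled-linear schedule and the resulting blend of ground truth and noise:

```python
import torch

def scaled_linear_betas(beta_start, beta_end, num_steps=1000):
    # Interpolate linearly between sqrt(beta_start) and sqrt(beta_end), then square.
    return torch.linspace(beta_start ** 0.5, beta_end ** 0.5, num_steps) ** 2

betas = scaled_linear_betas(0.00085, 0.012)          # illustrative values only
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative product, one value per timestep

def forward_diffuse(gt_latent, noise, t):
    # Blend of ground-truth latent and noise at timestep t: larger beta values shrink the
    # ground-truth proportion sqrt(alphas_cumprod[t]) and enlarge the noise proportion.
    a = alphas_cumprod[t]
    return a.sqrt() * gt_latent + (1.0 - a).sqrt() * noise
```

In this sketch, a larger $\\beta_{end}$ drives the cumulative product toward zero at smaller timesteps, which corresponds exactly to the reduced ground-truth proportion discussed above.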
The detailed revisions have been updated and highlighted in red in Section 1 of the supplementary.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks to the authors for their detailed answer, and for updating the manuscript accordingly.\\n\\nAfter the extensive discussion, and taking into account the other reviews and responses, I have decided to upgrade my rating from 5 to 6.\\n\\nBy answering my questions and revising the manuscript, the authors have convincingly addressed the majority of my concerns. Importantly, the paper now compares the proposed method more fairly to existing methods, provides insightful inference speed results, better explains the operation of the method, and discusses the cause of the impact of different $\\\\beta$ values in a more nuanced way. As such, the I believe the paper is stronger than the initial submission, and I upgrade my rating.\"}", "{\"comment\": \"Thanks for the rebuttal. Overall, my concerns are addressed. I will maintain my initial rating.\"}", "{\"title\": \"Response to reviewer 7LHd\", \"comment\": \"Thanks to reviewer 7LHd for the important findings and suggestions. We have revised our clarification to remain neutral and impartial in our analyses.\\n\\n> First, I would not refer to this phenomenon as \\\"ground truth leakage\\\". \\\"Ground truth leakage\\\" could suggest that the ground truth is somehow used by the model during inference, which is not the case here.\\n\\nWe appreciate and agree with the feedback regarding the term \\\"ground truth leakage\\\". We have modified our description and now refer to it as a hypothesis of \\\"a certain level of ground-truth label information being part of the input during training.\\\" Thank you for your valuable guidance.\\n\\n> While L189 states that the authors consider this to be caused by the randomness of Gaussian noise, the same trend can be observed both with and without multi-resolution noise, so the effect does not appear to be completely random. Moreover, using large noise proportions (1.0, 1.0) does not lead to better results than using (0.0034, 0.048).\\n\\nTo investigate whether the observed performance decrease is attributable to the inherent randomness of diffusion models, we followed the reviewer\\u2019s suggestion and conducted experiments by varying the random seed during both the training and inference process. However, the results showed only slight variations, which led us to reconsider our original hypothesis. These findings suggest that the performance decline is not due to randomness as initially proposed, and we have changed the related description. For the result using noise proportion (1.0, 1.0), we consider it better than using (0.0034, 0.048) according to the \\\"Rank\\\" performance, which means the average rank of ten evaluation performance.\\n\\n> Specifically, the results indicate that there are multiple different somewhat optimal values for $\\\\beta_{start}$ and $\\\\beta_{end}$, i.e., (a) somewhat low values like the baseline Marigold settings and (b) pure noise. This contradicts the claim that reducing \\\"ground truth leakage\\\" improves the performance, as the performance does not consistently improve when \\\"ground truth leakage\\\" is reduced by using larger noise proportions.\\n\\nWe fully agree with the finding that \\\"there are multiple different somewhat optimal beta values\\\". The relevant revisions have been updated in Section 3.1. 
We rule out the influence of randomness and propose this hypothesis, which indicates there may exist various factors besides the \\\"ground-truth label being part of the input\\\". For the related specific questions, the points mentioned above can help address them.\"}", "{\"comment\": \"Thanks for the rebuttal. My concerns are mostly addressed. I prefer to maintaining my initial rating.\"}", "{\"title\": \"Response to reviewer 6WHQ\", \"comment\": \"Thank you for your feedback. We appreciate your time and effort in reviewing our work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Part 1 of Response to reviewer 7LHd\", \"comment\": \"We sincerely thank you for your very careful and detailed review. We are delighted that you found our work to be comprehensive and convincing. The questions and weaknesses you've pointed out are incredibly helpful to us, and we will do our utmost to address them below.\\n\\n> W1: The scores for DepthFM in Tab. 6 and GeoWizard in both Tab. 6 and Tab. 7 do not correspond to the scores originally reported in the respective papers. The numbers in these tables should be altered to reflect the originally reported numbers, or the text should explain why the numbers differ from these original numbers. Some examples:\\n>\\n>a) GeoWizard: 9.7 AbsRel and 92.1$\\\\delta_1$ on KITTI in original paper [a], but 12.9 AbsRel and 85.1 $\\\\delta_1$ in this submitted manuscript.\\n>\\n>b) DepthFM: 8.3 AbsRel and 93.4 $\\\\delta_1$ on KITTI in original paper [b], but 17.4 AbsRel and 71.8 $\\\\delta_1$ in this submitted manuscript.\\n\\nBoth Geowizard and DepthFM reported the final results with little evaluation details, but they didn't release the evaluation code. For geometry evaluation, the ensemble size, inference resolution, valid evaluation depth range (specific for depth estimation), and evaluation average paradigm (average by pixels or average by the number of images) can be different for each method. **To compare these approaches fairly**, we follow the _open-source evaluation code of Marigold for depth and DSINE for surface normal_, and evaluate the performance of existing SOTA methods with their officially released model weights. Therefore, the performance can be different from that reported in their paper. We add an explanation of the performance in red at the beginning of Section 4 of the main paper.\\n\\n> W2: The results in Tab. 8 (and Tab. 2 of the supp) make it seem like GenPercept achieves state-of-the-art results in dichotomous image segmentation. However, this table does not contain the results of top-performing model MVANet [c]. This model achieves a max\\n$F_{\\\\beta}$ score of 0.916 on Overall DIS-TE (1-4), compared to 0.863 by GenPercept. Even with results that are inferior to MVANet, GenPercept has value, but it should be clear to the reader that GenPercept does not achieve state-of-the-art results, so these results should be added to the tables. \\n>\\n> W3: For the image matting task on the P3M-500-NP dataset, GenPercept is only compared to the ResNet-34 variant of P3M [d], not the Swin-T or ViTAE-S variants which achieve much better results, e.g., 7.59 SAD for ViTAE-S compared to 11.23 SAD for ResNet-34. By only reporting the results for the ResNet-34 variant, it seems like GenPercept performs similarly to the state of the art, whereas this is not the case. 
The results of P3M for ViTAE-S and/or Swin-T should be added to the paper, or the text should clearly explain why such a comparison is not necessary.\\n\\n\\nMore comparisons of dichotomous image segmentation and image matting. Thank you very much for pointing out this issue. We fully agree with your guidance to accurately reflect the correct ranking of GenPercept. The quantitative and qualitative comparisons of these two tasks have been updated in Table 8, Table 10, Figure 5, and Figure 6 of the main paper, and Table 4, Figure 7, and Figure 10 of the supplementary. Although GenPercept performs lower than existing SOTA methods in these two tasks, we find that **GenPercept exhibits much more enhanced robustness when applied to in-the-wild images of both dichotomous image segmentation and image matting** thanks to the robust pre-training knowledge of stable diffusion. Unlike achieving the highest performance on a specific dataset, _GenPercept offers a general network architecture_ that shows much robustness for dense perception tasks and challenging in-the-wild images and possesses its unique value.\"}", "{\"summary\": \"This paper researches the process of fine-tuning pre-trained text-to-image diffusion models for dense perception tasks like monocular depth estimation and semantic segmentation, and analyzes several design choices in that process. In this analysis, it considers the model architecture, training procedure, model initialization, dataset selection, and fine-tuning protocol. Based on this analysis, five findings are made:\\n\\n1.\\tDiffusion models can be fine-tuned accurately and efficiently for dense perception tasks with a one-step, deterministic approach.\\n2.\\tThe most useful prior knowledge of the pre-trained diffusion model is contained in the U-Net denoiser. The VAE decoder can be replaced with other components without problems.\\n3.\\tWith the one-step model, multi-timestep training is irrelevant, and additional text inputs do not significantly impact performance.\\n4.\\tFine-tuning the U-Net denoiser leads to better results than freezing it or applying low-rank adaptation.\\n5.\\tTraining data quality affects the prediction quality.\\n\\nBased on these findings, the paper presents GenPercept, a paradigm for fine-tuning diffusion models for dense prediction tasks. Experiments show that GenPercept is an effective method to fine-tune Stable Diffusion for various dense prediction tasks. On several benchmarks, GenPercept achieves a competitive performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tMost importantly, the analysis of the design choices that play a role when fine-tuning diffusion models for monocular depth estimation, described in Sec. 3, leads to various relevant and useful insights.\\n\\n a)\\tThe finding that fine-tuning can be done accurately with a one-step, deterministic approach (Tab. 1) is useful because it allows for significantly more efficient inference. \\n\\n b)\\tMoreover, by formulating the task as one-step estimation, the model can now te trained with task-specific losses, such as the image angular loss for surface normal estimation. With experiments (Tab. 7), this is shown to improve the results.\\n\\n c)\\tThe finding that most of the useful prior knowledge is contained in the U-Net of the pre-trained diffusion model, and not in the VAE decoder (Tab. 2), is useful because it means that the VAE decoder can be replaced with task-specific decoders. 
This is shown to be significantly boost the semantic segmentation performance in Tab. 9 (per L418-L423). \\n\\n d)\\tThe finding that the use of multiple timesteps during training is not necessary (Tab. 3) is useful because it allows the training procedure to be simplified to single-timestep training.\\n\\n e)\\tThe finding that the U-Net denoiser should be fine-tuned fully, without low-rank adaptation (Tab. 4), is useful because it provides clear guidelines for future methods that aim to fine-tune diffusion models for depth estimation and other dense perception tasks.\\n\\n2.\\tThe presentation of the main findings of the paper in the text boxes (e.g., L283 and L296) is very helpful and useful. These text boxes clearly summarize the impact of the results that have been discussed thus far, and they help the reader to see the most interesting results of the paper at a glance.\\n\\n3.\\tThe presented GenPercept model is valuable because it takes advantage of the aforementioned findings of the design-choice analysis. The application and evaluation of this model on multiple tasks and datasets provides the reader with insights into the performance that can be obtained when combining the best individual design choices. Moreover, these results can serve as a baseline for future works that aim to fine-tune diffusion models for multiple downstream perception tasks.\", \"weaknesses\": \"1.\\tThe scores for DepthFM in Tab. 6 and GeoWizard in both Tab. 6 and Tab. 7 do not correspond to the scores originally reported in the respective papers. The numbers in these tables should be altered to reflect the originally reported numbers, or the text should explain why the numbers differ from these original numbers. Some examples:\\n\\n a)\\tGeoWizard: 9.7 AbsRel and 92.1 $\\\\delta_{1}$ on KITTI in original paper [a], but 12.9 AbsRel and 85.1 $\\\\delta_{1}$ in this submitted manuscript. \\n\\n b)\\tDepthFM: 8.3 AbsRel and 93.4 $\\\\delta_{1}$ on KITTI in original paper [b], but 17.4 AbsRel and 71.8 $\\\\delta_{1}$ in this submitted manuscript.\\n\\n2.\\tThe results in Tab. 8 (and Tab. 2 of the supp) make it seem like GenPercept achieves state-of-the-art results in dichotomous image segmentation. However, this table does not contain the results of top-performing model MVANet [c]. This model achieves a max$F_{\\\\beta}$ score of 0.916 on Overall DIS-TE (1-4), compared to 0.863 by GenPercept. Even with results that are inferior to MVANet, GenPercept has value, but it should be clear to the reader that GenPercept does not achieve state-of-the-art results, so these results should be added to the tables.\\n3.\\tFor the image matting task on the P3M-500-NP dataset, GenPercept is only compared to the ResNet-34 variant of P3M [d], not the Swin-T or ViTAE-S variants which achieve much better results, e.g., 7.59 SAD for ViTAE-S compared to 11.23 SAD for ResNet-34. By only reporting the results for the ResNet-34 variant, it seems like GenPercept performs similarly to the state of the art, whereas this is not the case. The results of P3M for ViTAE-S and/or Swin-T should be added to the paper, or the text should clearly explain why such a comparison is not necessary.\\n4.\\tOne of the main benefits of the one-step inference procedure of GenPercept is the improved efficiency compared to Marigold and other diffusion-based methods. The efficiency is also mentioned as a key characteristic of the method (L539). However, the paper does not explicitly evaluate the efficiency of the model. 
Therefore, it is not clear what the exact speedup is over existing diffusion-based models, or how the efficiency compares to the other methods reported in Tab. 6, 7, 8, and 10. The paper would be significantly stronger if the efficiency of GenPercept was shown quantitatively, by reporting the inference time of GenPercept and other methods.\\n5.\\tL187-L188 states that the results indicate that increasing the \\u2018training challenges\\u2019 leads to improved model performance. However, the paper does not provide any explanation for this. Why does the performance increase when the training task becomes more challenging? This seems counterintuitive. The value of the paper would improve if the authors provided an explanation (or hypothesis) for this phenomenon, as this could provide insights into the way these diffusion models learn dense perception tasks.\\n6.\\tThe paper does not clearly define $\\\\beta_{start}$ and $\\\\beta_{end}$, even though first experiment and finding (Tab. 1 & L204) are fully focused around changing the values of these parameters. Fig. 2(b) illustrates that they impact the proportion of noise that is added to the latents in different time steps, but an exact definition is not provided. In L107, the paper refers to the supplementary material for more details, but the supplementary material only mentions $\\\\beta_{s}$ and $\\\\beta_{t}$. The clarity of the paper would improve if a clear definition of $\\\\beta_{start}$ and $\\\\beta_{end}$ was provided.\\n7.\\tThe paper does not evaluate if the multi-class image segmentation performance of GenPercept is competitive with existing image segmentation methods. Currently, the paper only conducts an experiment where GenPercept is trained on HyperSim images with 40 semantic classes, and evaluated on ADE20K images with 40 classes. As this is a newly proposed setting, these results are not comparable with existing segmentation methods. To see if fine-tuning diffusion models is truly valuable for image segmentation, GenPercept should be fine-tuned and evaluated on standard segmentation benchmarks like ADE20K, and compared to existing segmentation models like Mask2Former [e]. \\n8.\\tA minor weakness of the paper is that there is not a version of GenPercept that clearly works better than the other. As discussed in L357-L359 and shown in Tab. 6, the GenPercept model trained for depth estimation scores better on NYU, ScanNet and ETH3D, while the model trained for disparity estimation scores significantly better on KITTI and DIODE. As a result, two different models are necessary for different situations, and it is not always clear which of the models works best for an unknown application domain. \\n\\nSome minor weaknesses, which do not significantly impact my rating:\\n\\n9.\\tL230: The acronym \\u201cDPT\\u201d is not defined, nor is there a reference to a work where it is presented. As a result, it\\u2019s not clear what a \\u201cDPT encoder\\u201d is.\\n10.\\tThe meaning of the \\u201cfine-tune\\u201d column in Tab. 9 is not clear. From the text in L418-L423, it appears that it refers to the use of the UPerNet decoder. If so, the text \\u201cUPerNet\\u201d seems more appropriate than \\u201cfine-tune\\u201d. If it means something else than this, this should be better specified in the paper.\\n11.\\tThe ordering of the labels in the legend of Fig. 2 (b) is confusing, as there appears to be no logical ordering. 
The legend, and the figure as whole, would be easier to interpret if the labels were ordered in descending or ascending manner, e.g. starting with (0.0002125, 0.003) and ending with (1.0, 1.0), or vice versa.\\n\\n12.\\tThere are a few errors in the text:\\n\\n a) L024: \\\"tailed for\\\" - Do the authors mean \\\"tailored for\\\"?\\n\\n b)\\tL187-L188: \\\"increasing training challenges lead to\\\" => \\\"increasing training challenges leads to\\\"\\n\\n[a] Fu et al., \\u201cGeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image,\\u201d ECCV 2024.\\n\\n[b] Gui et al., \\u201cDepthFM: Fast Monocular Depth Estimation with Flow Matching,\\u201d arXiv:2403.13788, 2024.\\n\\n[c] Yu et al., \\\"Multi-view Aggregation Network for Dichotomous Image Segmentation,\\\" CVPR 2024.\\n\\n[d] Ma et al., \\\"Rethinking Portrait Matting with Privacy Preserving,\\\" IJCV 2023.\\n\\n[e] Cheng et al., \\u201cMasked-attention Mask Transformer for Universal Image Segmentation,\\u201d CVPR 2022.\", \"questions\": \"The main reason that I currently give a slightly low rating is because of the incorrect numbers of existing models, the missing quantitative comparisons to some other existing models, the missing efficiency experiments, and some missing explanations (see the \\u2018weaknesses\\u2019 section). I would like to ask the authors to carefully address my concerns, answer the questions posed in the \\u2018weaknesses\\u2019 section, and revise the manuscript accordingly.\\n\\nAdditionally, I have some other questions/suggestions:\\n\\n1.\\tFrom the text in L411-L423, it appears like the classes for semantic segmentation are always encoded into 3-channel colormaps, even when UPerNet is used. Is this really the case? If so, why isn\\u2019t the \\u2018regular\\u2019 semantic segmentation format used, with one channel for each individual class? If not, please clarify this in the text.\\n2.\\tIn the related work section, it seems appropriate to also mention DINOv2, because it has shown to be very suitable for downstream visual perception tasks like depth estimation (e.g., with Depth Anything [g]) and semantic segmentation.\\n\\n[f] Oquab et al., \\u201cDINOv2: Learning Robust Visual Features without Supervision,\\u201d TMLR 2024.\\n\\n[g] Yang et al., \\u201cDepth Anything: Unleashing the Power of Large-Scale Unlabeled Data,\\u201d CVPR 2024.\\n\\n---\\n\\n**Update after author discussion.** After reading the different reviews, the authors' response, and the revised manuscript, and having a discussion with the authors, I have decided to upgrade my rating from 5 to 6. \\n\\nBy answering my questions and revising the manuscript based on my review and follow-up questions, the authors have convincingly addressed the majority of my concerns. Importantly, the paper now compares the proposed method more fairly to existing methods, provides insightful inference speed results, better explains the operation of the method, and discusses the cause of the impact of different $\\\\beta$ values in a more nuanced way. As such, the I believe the paper is stronger than the initial submission, and I upgrade my rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper reveals key elements of diffusion models for downstream dense perception tasks. 
The authors did a full range of validation from the perspectives of model design and training data, unveiled some IMPORTANT factors, and proposed a new model named GenPercept,\\nThe authors have conducted extensive experiments on five dense perception tasks, including monocular depth estimation, surface normal estimation, image segmentation, and matting. This extensive experimentation serves as a testament to the effectiveness and universality of their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors conduct extensive ablation studies from different aspects and finally figure out the key factors affecting transfer efficiency and performance when using pre-trained diffusion models.\", \"Starting from the inconsistency of the results of two existing methods, the authors dig deeper into the influencing factors and then propose their GenPercept. This approach can provide some new ideas for doing research.\", \"These findings are inspiring and can provide constructive insights into model design when adapting pre-trained models for downstream tasks.\"], \"weaknesses\": [\"The biggest weakness is that all conclusions are based on experiments using large synthetic datasets trained on deep estimation tasks. I have two concerns. First, would these conclusions still hold if the datasets were small? Second, would these conclusions still hold if the dataset was completely real?\", \"It is not appropriate to use the P3M 10K dataset to verify the robustness of GenPercept in image matting. This is because the human matting dataset is extremely similar to the dichotomous image segmentation task if only the boundary regions are considered and the deterministic foreground regions are ignored. Instead, if the authors wish to validate its robustness in the image matting task, more types of objects (e.g., semi-transparent objects) should be included and then its performance should be observed.\"], \"questions\": \"What would be the comparison of generalization for OOD data for models trained using synthetic and real data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"I would like to thank the authors for their detailed response to all the reviews. In their response, the authors have adequately addressed the majority of my concerns. However, I have a few remaining concerns and some follow-up questions.\\n\\n---\\n\\n**Regarding W1:** I thank the authors for the explanation. However, it is likely that the default Marigold and DSINE settings are not optimal for DepthFM and GeoWizard, so I\\u2019m not convinced that using the Marigold and DSINE evaluation settings to generate the evaluation scores for these methods is completely fair. To present a complete picture to the reader, I believe it is fair to (in addition to the reproduced scores that are already in Tab. 6 and Tab. 7) also present the originally reported scores of DepthFM and GeoWizard, with a note about the difference in evaluation settings.\\n\\n---\\n\\n**Regarding W4:** I thank the authors for including the runtime analysis in the revised paper. This demonstrates the positive impact of the one-step inference procedure on the model\\u2019s speed. However, it is not yet clear how the prediction speed compares to that of existing models listed in Tab. 6 and Tab. 7. 
Could the authors report the runtime for some of these state-of-the-art methods, such that is clear how GenPercept compares in terms of efficiency?\\n\\n---\\n\\n**Regarding W5:** I do not completely follow what the authors mean by \\u201cground truth leakage\\u201d as mentioned in the rebuttal and L182 of the revised manuscript. Why is it a bad thing that the blended input image still contains part of the ground truth during the forward diffusion process, e.g., at timestep t=200? Why does this lead to decreased depth estimation performance? This is not clearly explained.\\n\\n---\\n\\nI look forward to reading the authors\\u2019 response. Thanks in advance!\"}", "{\"summary\": \"This paper presents a novel approach to harnessing pretrained diffusion models for general dense perception tasks, including monocular depth estimation, surface normal estimation, dichotomous image segmentation, semantic segmentation, and human pose estimation. The central idea of this work is to utilize the diffusion model as a robust pretrained backbone, subsequently fine-tuning it for a variety of downstream applications.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. This paper provides systematical analysis of the design space of diffusion model for dense perception task, as shown in the 5 findings in paper. The core design of this paper is to employ a deterministic one step perception.\\n2. By using the proposed training protocol, the proposed GenPercept harnesses the power of the pre-trained UNet from diffusion models for dense visual perception tasks, including monocular depth estimation, surface normal estimation, dichotomous image segmentation and semantic segmentation. \\n3. This paper presents a solid analysis accompanied by extensive experiments. The results are not only intuitive but also promising, demonstrating a strong potential for downstream application.\", \"weaknesses\": \"1. Given that GenPercept is currently trained on a relatively small dataset, it tends to lag behind models that benefit from extensive data training. To enhance its performance, it would be beneficial to scale GenPercept's training with a larger volume of data in the future.\\n2. Given the robust prior established by training on the extensive LAION dataset, the question arises: what would be the outcome of employing alternative self-supervised methods, such as MAE or CLIP, using the same LAION dataset? A comparative analysis of these approaches against the diffusion pretrain would provide valuable insights into their relative efficacy and potential advantages.\", \"questions\": \"1. Trained with synthetic data, why does GenPercept not suffer from the sim2real domain gap?\\n2. In the supplementary materials, line 22 should utilize the citation format 'citep'.\\n3. Can GenPercept be applied to general perception tasks, such as detection?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 6WHQ\", \"comment\": \"We deeply appreciate your acknowledgment of our motivation and approach. The insights you've provided, especially regarding the questions and potential areas for improvement, are immensely helpful. Below, we strive to address these in detail.\\n\\n> W1: Given that GenPercept is currently trained on a relatively small dataset, it tends to lag behind models that benefit from extensive data training. 
To enhance its performance, it would be beneficial to scale GenPercept's training with a larger volume of data in the future.\\n\\nWe sincerely appreciate your valuable guidance on expanding the training data volume. This insight holds significant value not only for practical applications but also for advancing academic research on the effectiveness of large-scale data training. We have incorporated this suggestion into the conclusion section and plan to design and conduct experiments to explore this further in future work.\\n\\n\\n> W2:Given the robust prior established by training on the extensive LAION dataset, the question arises: what would be the outcome of employing alternative self-supervised methods, such as MAE or CLIP, using the same LAION dataset? A comparative analysis of these approaches against the diffusion pretrain would provide valuable insights into their relative efficacy and potential advantages.\\n\\n\\nThank you for your insightful suggestion. It is quite valuable to explore whether the detailed visual perception predictions generated by existing diffusion models primarily benefit from the extensive LAION dataset or the diffusion pretraining paradigm itself. We have incorporated this suggestion into the conclusion section and will continue to investigate the advantages and unique contributions of diffusion models in future research.\\n\\n\\n> Q1: Trained with synthetic data, why does GenPercept not suffer from the sim2real domain gap?\\n\\nWe attribute this phenomenon to two reasons. First, the diffusion models pre-trained on the LAION dataset have the ability to generate both realistic and stylized images, which may decrease the sim2real domain gap, because it can regard the simulated images as a specific style of real image. Second, with the development of simulators, the synthetic datasets tend to be more and more realistic, which decreases the sim2real domain gap on perception tasks a lot. \\n\\n> Q2: In the supplementary materials, line 22 should utilize the citation format 'citep'.\\n\\nThanks so much for your reminder. We have fixed it and updated it in the latest version of our manuscript.\\n\\n> Q3: Can GenPercept be applied to general perception tasks, such as detection?\\n\\nWe attempted the human pose estimation (keypoint detection) task by reformulating it as a 3-channel dense map estimation in the supplementary, and it shows some robustness to out-of-domain images like cats and animation. An alternative approach would be to replace the VAE decoder with a customized detection head and corresponding loss functions. Similarly, it can be extended to implement tasks like object detection.\"}", "{\"title\": \"An extra analysis about the \\u201cground truth leakage\\u201d\", \"comment\": \"Reviewer QgJw raised a similar concern regarding W5: specifically, why reducing \\u201cground truth leakage\\u201d is effective, and this seems to deviate from the principles of original diffusion models. We copy the question and our response here and hope this clarification will be helpful to you.\\n\\n> Q1 of reviewer QgJw: This claim seems to deviate from the principles of original diffusion models. Preventing \\u201cground truth leakage\\u201d in your experiments involves training the diffusion model with almost pure Gaussian noise. However, diffusion training typically requires varying noise levels to help the model learn diverse denoising capabilities for the iterative denoising process. 
The depth or normal map you aim to predict does not appear to affect the diffusion formulation, as it can be treated as a specific type of \"image.\" Could you clarify why reducing \\u201cground truth leakage\\u201d is effective in your approach?\\n\\nIn _text-guided image generation_, a single textual input can correspond to an immense variety of potential images. This **inherent uncertainty** makes generating a high-quality image directly from random noise in a single step extremely challenging, so _multi-step generation_ lets the model remove noise incrementally, progressively refining details and textures at each stage and thereby **simplifying the task**. _Visual perception tasks_ conditioned on an RGB image, in contrast, are **deterministic without any randomness**; such an essentially _injective mapping_ can be estimated with a _one-step inference process_, as most traditional visual perception methods do.\\n\\nWhile Marigold-style algorithms aim to leverage diffusion models' ability to generate highly detailed images in order to obtain precise perception outputs, reformulating such a deterministic task as a denoising process makes the training objective degenerately easy: the partially noised input already contains much of the target, which is what we describe as \\\"ground truth leakage\\\" in Section 3.1 of the main paper and illustrate in Figure 2 of the supplementary. 
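To make the argument concrete, recall the standard DDPM forward process in generic notation (this is the textbook formulation, stated here only for illustration, not a claim about any particular implementation):\\n\\n$$ z_t = \\sqrt{\\bar{\\alpha}_t}\\, z_0 + \\sqrt{1-\\bar{\\alpha}_t}\\, \\epsilon, \\qquad \\epsilon \\sim \\mathcal{N}(0, I), $$\\n\\nwhere $z_0$ is the clean target latent, $\\epsilon$ is Gaussian noise, and $\\bar{\\alpha}_t$ is the cumulative coefficient determined by the $\\beta$ schedule. When the schedule keeps $\\bar{\\alpha}_t$ close to 1 (small $\\beta$ values or small $t$), the input $z_t$ is dominated by the ground-truth latent $z_0$, so the denoiser can largely copy its target rather than infer it from the RGB condition; at the other extreme ($\\bar{\\alpha}_t \\approx 0$, pure noise), the prediction must come entirely from the image condition, which is exactly the one-step deterministic setting.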
The origin DMP method uses ZoeDepth[a] to generate the pseudo ground truth for each image, and therefore performs poorer than our trained DMP presented in Table 1. The relevant revisions have been highlighted in Table 6 in red in the latest version of the manuscript.\\n\\n> Q3: The authors mention that using a customized head and loss could accelerate inference time. Could you provide a comparison to demonstrate this improvement?\\n\\nWe appreciate the valuable advice. We fully agree on the significance of inference time comparison, and it has been updated and highlighted in red in the \\\"Runtime Analysis\\\" section of the supplementary. With a customized DPT head, we can achieve 0.24s inference time, which is 27% faster than that of 0.33s for the VAE decoder model.\\n\\n[a] Bhat, Shariq Farooq, et al. \\\"Zoedepth: Zero-shot transfer by combining relative and metric depth.\\\" arXiv preprint arXiv:2302.12288 (2023).\"}", "{\"title\": \"General Response and Summary of Updates to Manuscript\", \"comment\": [\"We would like to express our sincere gratitude to the reviewers and editors for their time, effort, and constructive feedback in evaluating our work. Your thoughtful comments and suggestions have been invaluable in enhancing the quality and rigor of our manuscript. We have carefully read all feedback and have provided detailed responses to each of the reviewers' comments. We trust that these responses adequately address the concerns and suggestions raised.\", \"We are greatly encouraged by the reviewers\\u2019 recognition of our efforts in investigating key factors influencing transfer performance (**nfdP**, **6WHQ**, **7LHd**, **QgJw**), the effectiveness and efficiency of our novel one-step GenPercept approach (**nfdP**, **6WHQ**, **7LHd**, **QgJw**), and the robustness of our experiments (**nfdP**, **QgJw**). Below, we provide a high-level summary of the revisions made to the manuscript in response to the reviewers' feedback, followed by a restatement of the key contributions of our work.\", \"---\", \"Here is the summary of updates that we've made to the draft, and the relevant revisions have been highlighted in red in the latest version of the manuscript.\", \"Clarified some experimental settings and analysis. (**nfdP**, **7LHd**)\", \"Compared with more related approaches in Table 6, Table 8, Table 9, and Table 10 of the main paper, and Table 4 of the supplementary. (**nfdP**, **7LHd**)\", \"Added the quantitative efficiency improvement experiment in Table 1 of the supplementary. (**nfdP**, **7LHd**)\", \"Explored the effectiveness of data volume in Table 1 of the supplementary. (**6WHQ**)\", \"Added more qualitative analysis experiments in Figure 5 and Figure 6 of the main paper and Figure 1, Figure 2, Figure 3, Figure 7, Figure 10, and Figure 11 of the supplementary material, which shows the generalization ability of GenPercept. (**7LHd**)\", \"Added more types of objects in image matting in Figure 11 of the supplementary. (**6WHQ**)\", \"To end this update, we would like to summarize the primary contributions of our work. GenPercept presents an efficient and effective approach to leveraging the prior knowledge embedded in diffusion models trained on large-scale data for dense perception tasks. Unlike approaches only focused on achieving state-of-the-art performance on a specific dataset, _GenPercept emphasizes robustness, detail, and generalizability_ in dense visual perception tasks while _maintaining a unified network architecture_. 
This work opens new possibilities for realizing more generalizable dense perception frameworks with the benefits of diffusion model priors.\"]}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks to the authors for providing further clarifications and for further revising the manuscript.\\n\\nAfter reading the different comments about the \\\"ground truth leakage\\\", including the comment by reviewer QgJw, I still have some remaining concerns though.\\n\\nFirst, I would not refer to this phenomenon as \\\"ground truth leakage\\\". \\\"Ground truth leakage\\\" could suggest that the ground truth is somehow used by the model during inference, which is not the case here. \\n\\nSecond, L186-L187 states the following:\\n\\n> Our quantitative and qualitative analyses, presented in table 1 and fig. 2(c), indicate that increasing the noise proportion leads to improved model performance.\\n\\nHowever, Tab. 1 shows that the performance decreases when increasing the noise proportions ($\\\\beta_{start}$, $\\\\beta_{end}$) to (0.1360, 0.192) and (0.5440, 0.768) in combination with Marigold. While L189 states that the authors consider this to be caused by the randomness of Gaussian noise, the same trend can be observed both with and without multi-resolution noise, so the effect does not appear to be completely random. Moreover, using large noise proportions (1.0, 1.0) does not lead to better results than using (0.0034, 0.048). Therefore, the situation appears to be a bit more nuanced than sketched by the authors. Specifically, the results indicate that there are multiple different somewhat optimal values for $\\\\beta_{start}$ and $\\\\beta_{end}$, i.e., (a) somewhat low values like the baseline Marigold settings and (b) pure noise. This contradicts the claim that reducing \\\"ground truth leakage\\\" improves the performance, as the performance does not consistently improve when \\\"ground truth leakage\\\" is reduced by using larger noise proportions.\", \"some_specific_questions_related_to_this\": [\"Have the authors conducted multiple different training runs (with different random seeds) with noise proportions (0.1360, 0.192) and (0.5440, 0.768) and observed large variations in performance between runs, suggesting that it is really due to randomness? Or were the results quite consistent, suggesting that it is not due to randomness?\", \"If the results are not due to randomness, how do the authors explain them? In this case, the \\\"ground truth leakage\\\" is still lower for noise proportions (0.5440, 0.768) than for (0.00085, 0.012), but the performance is worse. Do the authors still think the results can be explained by \\\"ground truth leakage\\\"?\", \"Similarly, if reducing \\\"ground truth leakage\\\" improves performance, why does the performance not improve when increasing noise proportions from (0.0034, 0.048) to (1.0, 1.0)?\", \"Even if it not possible to find a single, conclusive reason/explanation for the results in Tab. 1, I think the experiment has value as it shows the behavior of models across different settings and identifies that using pure noise during allows for a good performance, which subsequently enables single-step inference. However, if such a reason cannot be found, this should be mentioned honestly and clearly in the paper. 
Of course, the authors can still provide hypotheses for the results, but in this case it should be clear that these are hypotheses and that it is not certain if they are correct.\"]}", "{\"title\": \"Part 3 of Response to reviewer 7LHd\", \"comment\": \"> W7: The paper does not evaluate if the multi-class image segmentation performance of GenPercept is competitive with existing image segmentation methods. Currently, the paper only conducts an experiment where GenPercept is trained on HyperSim images with 40 semantic classes, and evaluated on ADE20K images with 40 classes. As this is a newly proposed setting, these results are not comparable with existing segmentation methods. To see if fine-tuning diffusion models is truly valuable for image segmentation, GenPercept should be fine-tuned and evaluated on standard segmentation benchmarks like ADE20K, and compared to existing segmentation models like Mask2Former [e].\\n\\nThanks so much for the valuable advice. With a similar experimental setting, we train GenPercept with an UpperNet on ADE20K and Mask2Former, and the results are provided in Table 9. GenPercept outperforms ResNet50 and Swin-T of Mask2Former but achieves lower performance than that of Swin-L. Revisions have been updated in red in the main paper.\\n\\n> W8: A minor weakness of the paper is that there is not a version of GenPercept that clearly works better than the other. As discussed in L357-L359 and shown in Tab. 6, the GenPercept model trained for depth estimation scores better on NYU, ScanNet and ETH3D, while the model trained for disparity estimation scores significantly better on KITTI and DIODE. As a result, two different models are necessary for different situations, and it is not always clear which of the models works best for an unknown application domain.\\n\\nWe believe that the difference between the depth model and the disparity model inherently exists for all the network architectures. Experimentally, we suggest adopting the depth model for indoor scenes and the disparity model for outdoor scenes. \\n\\n---\", \"minor_weakness\": \"> W9: L230: The acronym \\u201cDPT\\u201d is not defined, nor is there a reference to a work where it is presented. As a result, it\\u2019s not clear what a \\u201cDPT encoder\\u201d is.\\n\\nDPT [a] is a classical architecture of vision transformers for dense prediction. We simply leverage its head to realize lightweight monocular depth estimation. We have updated and cited it in the paper.\\n\\n> W10:The meaning of the \\u201cfine-tune\\u201d column in Tab. 9 is not clear. From the text in L418-L423, it appears that it refers to the use of the UPerNet decoder. If so, the text \\u201cUPerNet\\u201d seems more appropriate than \\u201cfine-tune\\u201d. If it means something else than this, this should be better specified in the paper.\\n>\\n> W11: The ordering of the labels in the legend of Fig. 2 (b) is confusing, as there appears to be no logical ordering. The legend, and the figure as whole, would be easier to interpret if the labels were ordered in descending or ascending manner, e.g. starting with (0.0002125, 0.003) and ending with (1.0, 1.0), or vice versa.\\n>\\n> W12:There are a few errors in the text:\\n> \\n> a) L024: \\\"tailed for\\\" - Do the authors mean \\\"tailored for\\\"?\\n> \\n> b) L187-L188: \\\"increasing training challenges lead to\\\" => \\\"increasing training challenges leads to\\\"\\n\\nWe appreciate your suggestion. 
The relevant revisions have been highlighted in red in the latest version of the manuscript.\\n\\n> Q1: From the text in L411-L423, it appears like the classes for semantic segmentation are always encoded into 3-channel colormaps, even when UPerNet is used. Is this really the case? If so, why isn\\u2019t the \\u2018regular\\u2019 semantic segmentation format used, with one channel for each individual class? If not, please clarify this in the text.\\n\\nFor the UperNet segmentation head, we follow the traditional semantic segmentation format to use n-channel output where n is the number of categories. We have updated and clarified it in red in the paper.\\n\\n> Q2: In the related work section, it seems appropriate to also mention DINOv2, because it has shown to be very suitable for downstream visual perception tasks like depth estimation (e.g., with Depth Anything [g]) and semantic segmentation.\\n\\nThanks for your advice, we will cite DINOv2 in the related works.\\n\\n[a] Ranftl, Ren\\u00e9, Alexey Bochkovskiy, and Vladlen Koltun. \\\"Vision transformers for dense prediction.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\"}", "{\"title\": \"Response to reviewer nfdP\", \"comment\": \"Thank you for your valuable feedback. We truly appreciate the time and attention you've dedicated to reviewing our work. Your suggestions are vital in guiding our improvements.\"}", "{\"comment\": \"Thanks to the authors for the detailed response. After reading other reviewers\\u2019 reviews, I find that I share similar questions with reviewer 7LHd regarding why preventing \\u201cground truth leakage\\u201d leads to improved model performance:\\n\\n1. This claim seems to deviate from the principles of original diffusion models. Preventing \\u201cground truth leakage\\u201d in your experiments involves training the diffusion model with almost pure Gaussian noise. However, diffusion training typically requires varying noise levels to help the model learn diverse denoising capabilities for the iterative denoising process. The depth or normal map you aim to predict does not appear to affect the diffusion formulation, as it can be treated as a specific type of \\\"image.\\\" Could you clarify why reducing \\u201cground truth leakage\\u201d is effective in your approach?\\n2. As mentioned in Lines 189-190, as the \\u03b2 value increases, the impact of \\u201cmulti-resolution noise\\u201d diminishes, however, the performance is improved. Does this contradict your assumption in Lines 157-158, where you stated that \\u201cmulti-resolution noise\\u201d can enhance accuracy? Additionally, is there any relationship between \\u201cmulti-resolution noise\\u201d and the noise proportion?\\n3. In Tab. 1, which is the default Marigold configuration, why are these metrics significantly worse than those reported in the Marigold paper?\\n\\nThanks again for the authors' response. This appears to be a critical issue in your paper, and addressing it could significantly enhance its credibility and soundness.\"}", "{\"summary\": \"This paper investigates key factors affecting the transfer performance of pretrained diffusion models repurposed for dense visual perception tasks, emphasizing the importance of fine-tuning data quality, training strategies, and task-specific supervision. 
It introduces GenPercept, a one-step fine-tuning paradigm that enhances inference speed and improves fine-grained details in predictions across various perception tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The exploration of key factors influencing the transferability of text-to-image (T2I) diffusion models to dense visual perception tasks is intriguing and relevant.\\n2. The experiments are thorough and comprehensive, providing strong support for the findings.\\n3. The proposed deterministic one-step perception approach effectively integrates these findings and demonstrates comparable performance with minimal fine-tuning.\", \"weaknesses\": \"There are some confusing aspects in the experimental setup and comparisons for the downstream tasks. See questions.\", \"questions\": \"1. What training set was used for Table 1? Are you following the training data of DMP or Marigold? If so, why is the first row of Table 5 identical to the baseline? If not, why did you begin directly with a synthetic dataset?\\n2. For monocular depth estimation, why is DMP not included as a baseline?\\n3. The authors mention that using a customized head and loss could accelerate inference time. Could you provide a comparison to demonstrate this improvement?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer QgJw\", \"comment\": \"We sincerely appreciate your fast response and are delighted to have the opportunity to engage in further discussion with you.\\n\\n> Q1: This claim seems to deviate from the principles of original diffusion models. Preventing \\u201cground truth leakage\\u201d in your experiments involves training the diffusion model with almost pure Gaussian noise. However, diffusion training typically requires varying noise levels to help the model learn diverse denoising capabilities for the iterative denoising process. The depth or normal map you aim to predict does not appear to affect the diffusion formulation, as it can be treated as a specific type of \\\"image.\\\" Could you clarify why reducing \\u201cground truth leakage\\u201d is effective in your approach?\\n\\n\\nThank you for raising this insightful question. The reason hinds behind the difference between text-guided image generation and visual perception tasks. In _text-guided image generation_, a single textual input can correspond to an immense variety of potential images. This **inherent uncertainty** makes generating a high-quality image directly from random noise in a single step extremely challenging. Therefore, the _multi-step generation_ enables the model to incrementally remove noise, progressively refining details and textures at each stage, which effectively **simplifies the task**. However, _visual perception tasks_ conditioned on an RGB image are **deterministic without any randomness**, and such an easy _injective mapping_ can be estimated with a _one-step inference process_, as most of the traditional visual perception methods do.\\n\\nWhile Marigold series algorithms aim to leverage diffusion models' ability of generating highly detailed images to enhance visual perception with precise details, reformulating straightforward deterministic tasks as a denoising process can **further simplify this problem**, leading to what is described as \\\"ground truth leakage\\\" in Section 3.1 of the main paper and illustrated in Figure 2 of the supplementary. 
In summary, our experiments and theoretical analysis can prove the unnecessity of employing the denoising process for visual perception tasks. We have updated this analysis in the supplementary material. \\n\\n\\n> Q2: As mentioned in Lines 189-190, as the \\u03b2 value increases, the impact of \\u201cmulti-resolution noise\\u201d diminishes, however, the performance is improved. Does this contradict your assumption in Lines 157-158, where you stated that \\u201cmulti-resolution noise\\u201d can enhance accuracy? Additionally, is there any relationship between \\u201cmulti-resolution noise\\u201d and the noise proportion?\\n\\n\\nThanks so much for your valuable guidance. In lines 189\\u2013190, the phrase \\u201cthe impact of multi-resolution noise diminishes\\u201d indicates that the performance gap between \\u201cMarigold with multi-resolution noise\\u201d and \\u201cMarigold without multi-resolution noise\\u201d gradually narrows. Similarly, in lines 157\\u2013158, the statement \\u201cmulti-resolution noise can enhance accuracy\\u201d is supported by a comparison of the overall performance of \\u201cMarigold with multi-resolution noise\\u201d versus \\u201cMarigold without multi-resolution noise.\\u201d The former generally demonstrates superior performance, particularly when beta values are small. These two conclusions do not conflict with each other.\\n\\nWe agree that there does not exist any relationship between \\u201cmulti-resolution noise\\u201d and the noise proportion. To avoid misunderstanding, we have deleted the related description. \\n\\n\\n\\n> Q3: In Tab. 1, which is the default Marigold configuration, why are these metrics significantly worse than those reported in the Marigold paper?\\n\\nThe primary difference lies in the dataset composition. Marigold utilizes \\\"54K Hypersim + Virtual KITTI,\\\" whereas GenPercept employs \\\"50K Hypersim + 40K Virtual KITTI.\\\" For GenPercept, we adopt a stringent filtering policy to exclude invalid Hypersim scenes, and this may result in slightly lower performance. However, all experiments in Section 3 are conducted under fair and consistent experimental settings, ensuring that the conclusions remain unaffected. When comparing with Marigold in Section 4, we use its officially released weights for evaluation.\"}" ] }
Bff9RniI03
Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration
[ "Max Wilcoxson", "Qiyang Li", "Kevin Frans", "Sergey Levine" ]
Unsupervised pretraining has been transformative in many supervised domains. However, applying such ideas to reinforcement learning (RL) presents a unique challenge in that fine-tuning does not involve mimicking task-specific data, but rather exploring and locating the solution through iterative self-improvement. In this work, we showcase how unlabeled prior trajectory data can be leveraged to learn efficient exploration strategies. The key insight is to use unlabeled trajectories twice: 1) to extract a set of low-level skills offline, and 2) as additional data for a high-level policy that composes these skills to explore. We utilize a simple strategy of learning an optimistic reward model from online samples, and relabeling past trajectories into high-level, task-relevant examples. We instantiate these insights as SUPE (Skills from Unlabeled Prior data for Exploration), and empirically show that SUPE reliably outperforms prior strategies, successfully solving a suite of long-horizon, sparse-reward tasks.
[ "reinforcement learning", "exploration", "skills", "unsupervised pretraining", "offline to online rl" ]
Reject
https://openreview.net/pdf?id=Bff9RniI03
https://openreview.net/forum?id=Bff9RniI03
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xlscUu0bhm", "vlz5kbnqXJ", "up2etd0QQs", "t8NRmUTaj2", "pnCnxbIX6L", "nnxzPS6JVq", "nmtJZwS5kp", "mtB0jBT3qY", "jViBDkAqht", "jF8BvnJTqy", "irD3nxAIzY", "iGAnxQDYRn", "fRB7E9l9pv", "dpbHFIo8XA", "awISxxeV8i", "ZUh76a5piN", "Xrb9HlNcvN", "XNGnh1zWWA", "VKnNSmIcC0", "U75PRtJWZw", "QZrqEscDLX", "Ooz7gE9Pgd", "MbLkaNQoEs", "JDBSSJ7N3S", "H83DV6RFIt", "EaRoMNSUW4", "7JU89hOH7k", "6zBatj9SkZ", "5uJ0S2Xn8e", "2t6cmHPLoN", "2bGTp2bMH1", "2bEyTMUlT5" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732925233290, 1730426046080, 1732231687647, 1732231508480, 1732231532962, 1732779150445, 1732267765156, 1732231472875, 1732231612624, 1732231758918, 1732231580243, 1732231739238, 1732822826729, 1729315002554, 1733218471111, 1732231699391, 1732932070910, 1732931378520, 1732700904553, 1732703197038, 1734826656431, 1732231777539, 1732901173753, 1732232060479, 1732823944483, 1732824476572, 1730642840685, 1737524168879, 1732231633873, 1729608452769, 1732873931506, 1730652500426 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_eVtj" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_DC3r" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_t68W" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_28dh" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_t68W" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_CDzd" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_CDzd" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_CDzd" ], [ "ICLR.cc/2025/Conference/Submission12140/Area_Chair_YL7y" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_28dh" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12140/Authors" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_CDzd" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_eVtj" ], [ "ICLR.cc/2025/Conference/Submission12140/Reviewer_eVtj" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the quick reply. 
I appreciate the new experiments. Regarding the novelty, you mentioned the key insights are mainly i) using offline data twice and ii) removing the KL. For i) using the offline data twice, from the concept level, ExPLORe introduces using the offline data with optimistic rewards for better online learning. Built upon ExPLORe, the proposed method learns skills from offline data and shows better results, while the effectiveness of extracting skills is already demonstrated in many previous papers in both online and offline settings. For ii) removing the KL, it's more like an empirical observation (you included this in the practical implementation details section) rather than new insights. To make it a new insight for the community, you need to e.g., reveal the reason why removing KL is necessary and when (under which conditions) should we remove the KL, etc, which requires thorough analysis.\"}", "{\"summary\": \"This paper introduces SUPE, a method that leverages unsupervised learning to extract skills from unlabeled prior data, subsequently using hierarchical methods to explore more efficiently. These unlabeled data can also contribute to high-level policy training. Experimental results show that SUPE outperforms previous methods on the D4RL benchmark.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The approach of extracting latent \\u201cskills\\u201d from unlabeled data and employing hierarchical methods significantly enhances exploration.\", \"The approach of utilizing prior data twice ensures better use of the available data.\", \"The paper is well-structured and easy to follow.\", \"Extensive results demonstrate that this method outperforms previous approaches.\"], \"weaknesses\": [\"The concept of using a VAE to extract latent codes and employing a high-level policy for online exploration is not novel, and it shows limited progress compared to previous work [1].\", \"The ablation study lacks depth. I am interested in understanding the contribution of \\u201creusing prior data twice\\u201d to the final performance. Additionally, I\\u2019d like clarification on the design choice for the latent variable $z$ in skill discovery: how do you ensure this latent $z$ is sufficient for effective skill discovery in the dataset? Is employing trajectory-segment VAEs truly necessary for efficient exploration?\", \"[1] Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, and Sergey Levine. Accelerating exploration with unlabeled prior data. Advances in Neural Information Processing Systems, 36, 2024.\"], \"questions\": \"Please refer to the weakness part. I may consider increasing the score if my questions are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the detailed feedback and insightful comments! We addressed your concern on sensitivity of the hyperparameter by adding a new section in our experimental results that is dedicated to sensitivity analysis of various hyperparameters and how they were chosen (RND coefficient, skill horizon length). We also provided additional results on offline data with different qualities (expert/exploratory, noisy/non-noisy) with more insights on when our method is expected to work or not work depending on the dataset. 
Finally, we demonstrated that our method, when given access to ground truth offline reward, can outperform the state-of-the-art offline-to-online RL methods, showcasing the effectiveness of our method at leveraging structured exploration with skills.\\n\\n\\n**How was the RND coefficient $\\\\alpha$ set?**\\n\\n\\nWe did not find the performance of our approach to be sensitive to the RND coefficient. We ran a small hyperparameter tuning sweep on AntMaze-Large (top-right goal) and displayed the results in Figure 4, left. The performance remains almost the same for $\\\\alpha \\\\in \\\\{2, 4, 8, 16\\\\}$. We picked one of these values ($\\\\alpha=8$) and used the SAME value for all other tasks. All the non-skill-based methods use a RND coefficient that is $4\\\\times$ smaller to keep the reward scale consistent with the skill-based methods. This is because the transitions in skill-based methods have a reward value that is equal to the sum of the rewards in 4 low-level transitions (H=4 is the skill horizon length).\\n\\n\\n**Additional datasets for AntMaze**\\n\\n\\nWe include the results for the play datasets for the medium maze layout and the large maze layout (Figure 12). The results are consistent with our results on the diverse datasets. \\n\\n\\nIn addition, we experimented on a narrow, expert dataset and three other datasets used in an offline goal-conditioned RL benchmark (OGBench [1]), shown in Figure 16. We picked datasets from this benchmark as it features a more diverse set of offline data distributions and hoped that it would provide more insights on when our method works or fails. As a summary, we find that our method does not need a dataset that is collected by unsupervised RL agents. Completely exploratory dataset can actually break our method due to the lack of behavioral structure that can be extracted as skills. Our method excels at learning from datasets that contain segments of meaningful (e.g., navigating around the maze) behaviors. We discuss the results in detail below.\", \"the_four_datasets_we_consider_are_ordered_in_decreasing_difficulty\": \"- Expert: collected by a non-noisy expert policy that we train ourselves.\\n- Navigate: collected by a noisy expert policy that randomly navigates the maze (from OGBench).\\n- Stitch: collected by the same noisy expert policy but with much shorter trajectory length (from OGBench)\\n- Explore: collected by moving the ant in random directions, where the direction is re-sampled every 10 environment steps. A large amount of action noise is also added (from OGBench).\\n\\n\\nAs expected, the baseline ExPLORe shows a gradual performance degradation from Expert to Navigate, to Stitch, and to Explore. All skill-based methods (including our method) fail completely on Explore. This is to be expected because the Explore dataset contains too much noise and the skills extracted from the dataset are likely very poor and meaningless. The high-level policy then would have trouble composing these bad skills to perform well in the environment. On Navigate and Stitch, our method outperforms other baselines, especially on the more challenging Stitch dataset where it is essential to stitch shorter trajectory segments together. On Expert, all methods perform similarly with ExPLORe doing slightly better. We hypothesize that this is because with the expert data, online learning does not require as much exploration, and skill-based methods are mostly beneficial when there is a need for structured exploratory behaviors. 
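For concreteness, the sketch below illustrates the bookkeeping behind the reward relabeling and RND scaling discussed above, i.e., how one H-step segment of unlabeled prior data can be turned into a single off-policy transition for the high-level agent with an optimistic reward. All names here are hypothetical and do not refer to our actual codebase, and details such as exactly where the RND bonus enters are simplified for illustration.

```python
import numpy as np

# Illustrative sketch only: relabel_high_level_transition, reward_model, rnd_bonus, and
# trajectory_encoder are hypothetical names, not the actual SUPE implementation. It shows
# why the high-level reward scale is roughly H times the per-step scale: task rewards are
# summed over the H low-level steps of the segment, and an RND bonus scaled by alpha is
# added for optimism.

H = 4          # skill horizon length used in the main experiments
ALPHA = 8.0    # RND coefficient for the skill-based (high-level) agent

def relabel_high_level_transition(segment, reward_model, rnd_bonus, trajectory_encoder,
                                  alpha=ALPHA):
    """Turn one H-step offline segment into an off-policy transition for the high-level agent.

    segment: dict with 'observations' of shape (H + 1, obs_dim) and 'actions' of shape (H, act_dim).
    reward_model: callable obs -> estimated task reward (learned from online samples).
    rnd_bonus: callable obs -> novelty bonus (random network distillation).
    trajectory_encoder: frozen, pre-trained encoder mapping the segment to a latent skill z.
    """
    obs = segment["observations"][0]
    next_obs = segment["observations"][-1]
    # The latent skill plays the role of the high-level "action" for this transition.
    z = trajectory_encoder(segment["observations"][:-1], segment["actions"])

    # Estimated task rewards summed over the H low-level steps (hence the ~H x scale
    # difference vs. per-step baselines), plus an optimism bonus. Where exactly the bonus
    # enters (per step or once per high-level transition) is an illustrative choice here.
    summed_task_reward = float(np.sum([reward_model(o) for o in segment["observations"][1:]]))
    high_level_reward = summed_task_reward + alpha * rnd_bonus(next_obs)

    return {"observation": obs, "skill": z, "reward": high_level_reward,
            "next_observation": next_obs}
```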
\\n\\n\\nTo further test our method\\u2019s ability to handle different offline data quality, in our original submission, we included ablation studies where the offline data is corrupted. We tested a dataset without transitions near the goal location (Insufficient Coverage), and a low-data setting where 95% of the trajectories are removed from the dataset (5% Data). While we see performance degradation from such data erasure, our method is still the most robust, consistently outperforming the baselines (Figure 17).\\n\\n\\nWe hope that these additional experiments provide more insights on when we expect our skill-based method to work or not work. \\n\\n\\n[1] Park, Seohong, et al. \\\"OGBench: Benchmarking Offline Goal-Conditioned RL.\\\" arXiv preprint arXiv:2410.20092 (2024).\", \"title\": \"Author Response (1/2)\"}", "{\"comment\": \"**Limited discussion on HRL**\\n\\nThanks for pointing out these related works that are really relevant to our work. We have added two new sections in the related works to discuss the relationship between our work and prior works in hierarchical RL and options framework (please see Section 2, \\u201cHierarchical reinforcement learning\\u201d and \\u201cOptions framework\\u201d). While some prior HRL methods simultaneously learn the low-level skill policies and the high-level policy online, others opt for a simpler formulation where the low-level skills are pre-trained offline and kept frozen during online learning. None of the prior HRL methods simultaneously leverage offline skill pre-training and offline data as additional off-policy data for high-level policy learning online. As we show in our experiments, both of them are crucial in enabling extremely sample efficient learning on challenging sparse-reward tasks, sometimes even solving tasks that all prior methods cannot (e.g., Figure 16 on HumanoidMaze). \\n\\nIt is also worth noting that we have already discussed OPAL [1] in the unsupervised skill discovery section in the related work in our initial submission. Our unsupervised skill pre-training implementation closely follows the implementation of OPAL [1] and SPiRL [2] where the latent skills are extracted using a VAE. In addition, [3] does not use reinforcement learning and only focuses on extracting primitives, so we integrated this reference into our \\u201cunsupervised skill discovery\\u201d paragraph in the related work instead.\\n\\n[1] Ajay, Anurag, et al. \\\"Opal: Offline primitive discovery for accelerating offline reinforcement learning.\\\" arXiv preprint arXiv:2010.13611 (2020).\\n\\n[2] Pertsch, Karl, Youngwoon Lee, and Joseph Lim. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. PMLR, 2021.\\n\\n[3] Paraschos, Alexandros, et al. \\\"Probabilistic movement primitives.\\\" Advances in neural information processing systems 26 (2013).\\n\\n**High-level policy that is updated every \\ud835\\udc3b timesteps and keeps the pre-trained skill and trajectory encoder fixed during the online phase. 
This limits the adaptability of the method.**\\n\\nWhile we acknowledge that this is indeed a limitation of our approach and using a more adaptive skill framework (e.g., options framework) can address this limitation (we add a new paragraph to discuss it in our related work section), our design (where the high-level policy outputs a skill at a regular interval) is a common design that appears in many prior methods [1-9] and many of them keep the skills fixed during online learning ([1-4]), and find it to be effective. In practice, we also find such design to be sufficient for a wide range of tasks (now with four new challenging domains in addition to the ones we tested in our initial submission). \\n\\n[1] Dalal, Murtaza, Deepak Pathak, and Russ R. Salakhutdinov. \\\"Accelerating robotic reinforcement learning via parameterized action primitives.\\\" Advances in Neural Information Processing Systems 34 (2021): 21847-21859.\\n\\n[2] Gehring, Jonas, et al. \\\"Hierarchical skills for efficient exploration.\\\" Advances in Neural Information Processing Systems 34 (2021): 11553-11564.\\n\\n[3] Pertsch, Karl, Youngwoon Lee, and Joseph Lim. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. PMLR, 2021.\\n\\n[4] Ajay, Anurag, et al. \\\"Opal: Offline primitive discovery for accelerating offline reinforcement learning.\\\" arXiv preprint arXiv:2010.13611 (2020).\\n\\n[5] Xie, Kevin, et al. \\\"Latent skill planning for exploration and transfer.\\\" arXiv preprint arXiv:2011.13897 (2020).\\n\\n[6] Gupta, Abhishek, et al. \\\"Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning.\\\" arXiv preprint arXiv:1910.11956 (2019).\\n\\n[7] Fang, Kuan, et al. \\\"Dynamics learning with cascaded variational inference for multi-step manipulation.\\\" arXiv preprint arXiv:1910.13395 (2019).\\n\\n[8] Merel, Josh, et al. \\\"Neural probabilistic motor primitives for humanoid control.\\\" arXiv preprint arXiv:1811.11711 (2018).\\n\\n[9] Nachum, Ofir, et al. \\\"Data-efficient hierarchical reinforcement learning.\\\" Advances in neural information processing systems 31 (2018).\", \"title\": \"Author Response (2/3)\"}", "{\"comment\": \"**Dependence of Data Quality**\\n\\nTo gain more insights on when we expect our method to work and the data quality dependency, we experimented on a narrow, expert dataset and three other datasets used in an offline goal-conditioned RL benchmark (OGBench [1]), shown in Figure 19. We picked datasets from this benchmark as it features a more diverse set of offline data distributions and hoped that it would provide more insights on when our method works or fails. As a summary, we find that our method does not need the dataset to be very diverse (e.g., collected by unsupervised RL agents). Completely exploratory dataset can actually break our method due to the lack of behavioral structure that can be extracted as skills. Our method excels at learning from datasets that contain segments of meaningful (e.g., navigating around the maze) behaviors. 
We discuss the results in detail below.\", \"the_four_datasets_we_consider_are_ordered_in_decreasing_difficulty\": \"- Expert: collected by a non-noisy expert policy that we train ourselves.\\n- Navigate: collected by a noisy expert policy that randomly navigates the maze (from OGBench).\\n- Stitch: collected by the same noisy expert policy but with much shorter trajectory length (from OGBench)\\n- Explore: collected by moving the ant in random directions, where the direction is re-sampled every 10 environment steps. A large amount of action noise is also added (from OGBench).\\n\\n\\nAs expected, the baseline ExPLORe shows a gradual performance degradation from Expert to Navigate, to Stitch, and to Explore. All skill-based methods (including our method) fail completely on Explore. This is to be expected because the Explore dataset contains too much noise and the skills extracted from the dataset are likely very poor and meaningless. The high-level policy then would have trouble composing these bad skills to perform well in the environment. On Navigate and Stitch, our method outperforms other baselines, especially on the more challenging Stitch dataset where it is essential to stitch shorter trajectory segments together. On Expert, all methods perform similarly with ExPLORe doing slightly better. We hypothesize that this is because with the expert data, online learning does not require as much exploration, and skill-based methods are mostly beneficial when there is a need for structured exploratory behaviors. \\n\\n\\nTo further test our method\\u2019s ability to handle different offline data quality, in our original submission, we included ablation studies where the offline data is corrupted. We tested a dataset without transitions near the goal location (Insufficient Coverage), and a low-data setting where 95% of the trajectories are removed from the dataset (5% Data). While we see performance degradation from such data erasure, our method is still the most robust, consistently outperforming the baselines (Figure 20).\\n\\n\\nWe hope that these additional experiments provide more insights on when we expect our skill-based method to work or not work. \\n\\n\\n[1] Park, Seohong, et al. \\\"OGBench: Benchmarking Offline Goal-Conditioned RL.\\\" arXiv preprint arXiv:2410.20092 (2024).\\n\\n\\n**Training stability in HRL**\\n\\nMany instability problems in HRL methods stem from the fact that both the low-level policy and the high-level policy are learning simultaneously at the same time. We circumvent such issues by pre-training low-level skill policies offline using a static dataset and then keep them fixed during online learning. In addition, our method utilizes the offline data as additional off-policy data for the high-level actor-critic RL agent such that the high-level RL agent can sample high-level transitions from offline data to perform TD updates. This effectively increases the amount of data that the high-level RL agent has access to right from the beginning of online learning, further stabilizing high-level policy learning.\\n\\n**Hope these address all of your questions and concerns. Thank you again for your time to review and if you have any remaining questions or concerns, please let us know!**\", \"title\": \"Author Response (3/3)\"}", "{\"comment\": \"Thanks for the additional questions and comments.\\n\\n\\nWe are running new requested experiments right now on all of the 7 domains that we evaluated in our paper. 
In our rebuttal, we added results on 23 new tasks across 4 new domains which further showcase the effectiveness of our method (Figure 3, now using the rliable library to generate the aggregation plots). We provide a response for the questions and concerns that we can address below, and we will follow-up with an additional response upon the completion of our experiments before the rebuttal period ends.\\n\\n\\n**\\\"I am also not sure that a horizon of h=4 really strikes a good balance. The results in AntMaze Large suggest that horizon h=2 performs significantly better? Again, the authors should include results on more tasks.\\\"**\\n\\n\\nAs we mentioned in our analysis (Section 5.5), H=4 explores much faster in the beginning (achieving a much higher initial success rate). H=2 achieves a better final performance but learns much slower in the beginning. We do not only look at the final performance, but also the exploration efficiency in the beginning. We are running more experiments on new environments and we will update the thread and include the new results when they are finished. \\n\\n\\n**\\\"I do not think it is OK to bold based on overlapping confidence intervals as this does not indicate statistical significance. Please perform a t-test.\\\"**\\n\\n\\nWe redid the bolding of the table using a t-test with p=0.05. Please see the updated table (Table 3, blue indicates newly bolded values). We also included a new summary plot that uses the aggregated IQM with 95% stratified bootstrap confidence intervals (using rliable) in Figure 4. The plot shows that our method is the most effective with statistical significance. \\n\\n\\n\\n\\n**\\\"I would encourage the authors to evaluate on more tasks and report aggregate metrics (e.g. IQM) with stratified confidence intervals following Rliable.\\\"**\\n\\n\\nIn Figure 3, we show the comparison of our method with baselines on 7 domains (in total 42 tasks, We describe these domains in Section 5.1 in short and in Appendix D with more detail). We also include an aggregated result plot (Figure 4, IQM with 95% Stratified Bootstrap CIs plotted using the RLiable library) which shows that our method has the best sample efficiency and best final performance. In particular, on the HumanMaze domain, our method is the only method that solves all tasks whereas all other methods fail almost completely (see Figure 13 for the individual tasks in the Humanoid domain).\\n\\n\\n**Hope these address most of your concerns. For the remaining concerns on the sensitivity analysis and the ground-truth reward experiments, we will post a follow-up response once the experiments are done. Thank you again for your time to review and if you have any other remaining questions or concerns, please let us know!**\"}", "{\"comment\": \"Thank you for your response and the extensive additional experiments you provided. I apologize for overlooking some of the experiments detailed in the paper. Your response has addressed part of my concerns. However, considering prior work, I still think the contribution of this paper somewhat limited. I will raise the score appropriately to align with the quality of your work.\"}", "{\"comment\": \"Thanks for the detailed feedback and insightful comments. For your concern on the novelty of our method, we would like to highlight that important, careful design decisions in our current method enable significant performance gains over baselines whereas the naive combination of prior works falls short. 
We additionally evaluated our method on four additional domains and our method exhibits similar performance gains. We also addressed your concern on the lack of discussion on HRL by adding a new paragraph in the related work to discuss prior works in HRL and provided justification of our choice of the skill formulation.\\n\\n**Novelty**\\n\\nWe would like to emphasize several key design decisions in our current method that are different from prior methods, which contribute to performance gains over baselines. \\n\\nFirst of all, all prior work on online learning with skills extracted from offline data simply discards the offline data when learning the high-level policy (e.g., Pertsch et al. (2021), Ajay et al. (2020)). In our experiments, the baseline \\u201cOnline w/ Traj. Skill\\u201d does exactly this (learning trajectory skills from offline data, then learning a high level policy purely from online samples), and is consistently worse than our method that utilizes offline data during online learning (especially on more challenging tasks like the Large and Ultra AntMaze environments in Figure 11 and on all HumanoidMaze tasks in Figure 16). \\n\\nIn addition, our method is not a naive combination of SPiRL (Pertsch et al. (2021)) and ExPLORe (Li et al. (2024)). Pertsch et al. (2021) use a KL constraint between the high-level policy and a state-dependent prior obtained from offline pretraining. We found this design can actually hurt the online performance. We show that a simpler design without the KL constraint works much better. In Figure 7, we compare (Ours (KL)) with our method (Ours) and demonstrate that the final performance and the sample efficiency of the naive combination is much worse. As described in Appendix E, we borrow the policy parameterization from Haarnoja et al. (2018) and adopt a tanh policy parameterization with entropy regularization on the squashed space. Such design ensures that the online high-level policy is not explicitly constrained to a pre-trained prior, allowing the online policy to learn skill distributions that are more suitable for the task online. It also allows the addition of entropy regularization to the high level policy, which helps exploration. \\n\\nThese careful designs are what make our method extremely stable, sample efficient, and scalable to more complex tasks. We show additional results on four new domains with two locomotion domains and two manipulation domains in Figure 3. We selected these environments from OGBench [1], since they provided challenging, long-horizon tasks which also require exploration to solve, making them well-suited for testing the use of offline data to accelerate online exploration. In the challenging HumanoidMaze domain, our method is often the only method that achieves non-zero success rate on the four most difficult mazes. On manipulation tasks, our method consistently outperforms all prior methods on all domains with the only exception on Scene where one of the baselines (Offline w/ HILP) performs better. It is worth noting that Offline w/ HILP is a novel baseline that we introduced to also leverage offline data twice, both during offline and online learning with the only difference being that the unsupervised skill pre-training algorithm is HILP (instead of using trajectory VAE). This further demonstrates that the principle of leveraging offline data for both skill pre-training and online learning is effective. 
The effectiveness of our method across seven domains further highlights the importance of a careful combination of skill pre-training and effective online learning that utilizes the offline data.\\n\\n[1] Park, Seohong, et al. \\\"OGBench: Benchmarking Offline Goal-Conditioned RL.\\\" arXiv preprint arXiv:2410.20092 (2024).\", \"title\": \"Author Response (1/3)\"}", "{\"comment\": \"Thanks for the detailed feedback and insightful comments! For your concern on the novelty of our method, we would like to highlight that important, careful design decisions in our current method enable significant performance gains over baselines whereas the naive combination of prior works falls short. We additionally evaluated our method on four additional domains and our method exhibits similar performance gains. We provided clarification on how each of our baselines use prior data and showcase the effectiveness of the principle of \\u201creusing prior data twice\\u201d on the original domains and the new four domains that we added in this rebuttal.\\n\\n**Novelty**\\n\\nWe would like to emphasize several key design decisions in our current method that are different from prior methods, which contribute to the performance gains over baselines. \\n\\nFirst of all, all prior work on online learning with skills extracted from offline data simply discards the offline data when learning the high-level policy (e.g., Pertsch et al. (2021), Ajay et al. (2020)). In our experiments, the baseline \\u201cOnline w/ Traj. Skill\\u201d does exactly this (learning trajectory skills from offline data, then learning a high level policy purely from online samples), and is consistently worse than our method that utilizes offline data during online learning (especially on more challenging tasks like the Large and Ultra AntMaze environments in Figure 11 and on all HumanoidMaze tasks in Figure 16). \\n\\nIn addition, our method is not a naive combination of SPiRL (Pertsch et al. (2021)) and ExPLORe (Li et al. (2024)). Pertsch et al. (2021) use a KL constraint between the high-level policy and a state-dependent prior obtained from offline pretraining. We found this design can actually hurt the online performance. We show that a simpler design without the KL constraint works much better. In Figure 7, we compare (Ours (KL)) with our method (Ours) and demonstrate that the final performance and the sample efficiency of the naive combination is much worse. As described in Appendix E, we borrow the policy parameterization from Haarnoja et al. (2018) and adopt a tanh policy parameterization with entropy regularization on the squashed space. Such design ensures that the online high-level policy is not explicitly constrained to a pre-trained prior, allowing the online policy to learn skill distributions that are more suitable for the task online. It also allows the addition of entropy regularization to the high-level policy, which helps exploration. \\n\\nThese careful designs are what make our method extremely stable, sample efficient, and scalable to more complex tasks. We show additional results on four new domains with two locomotion domains and two manipulation domains in Figure 3. We selected these environments from OGBench [1], since they provided challenging, long-horizon tasks which also require exploration to solve, making them well-suited for testing the use of offline data to accelerate online exploration. 
In the challenging HumanoidMaze domain, our method is often the only method that achieves non-zero success rate on the four most difficult mazes. On manipulation tasks, our method consistently outperforms all prior methods on all domains with the only exception on Scene where one of the baselines (Offline w/ HILP) performs better. It is worth noting that Offline w/ HILP is a novel baseline that we introduced to also leverage offline data twice, both during offline and online learning with the only difference being that the unsupervised skill pre-training algorithm is HILP (instead of using trajectory VAE). This further demonstrates that the principle of leveraging offline data for both skill pre-training and online learning is effective. The effectiveness of our method across seven domains further highlights the importance of a careful combination of skill pre-training and effective online learning that utilizes the offline data.\\n\\n[1] Park, Seohong, et al. \\\"OGBench: Benchmarking Offline Goal-Conditioned RL.\\\" arXiv preprint arXiv:2410.20092 (2024).\", \"title\": \"Author Response (1/2)\"}", "{\"comment\": \"**Tasks are simplistic and monotonous.**\\n\\nWe show additional results on four new domains (23 new tasks in total!) with two locomotion domains and two manipulation domains in Figure 3. Each of these domains contains 2-5 tasks. We selected these environments from OGBench [1], since they provided challenging, long-horizon tasks which also require exploration to solve, making them well-suited for testing the use of offline data to accelerate online exploration. The two manipulation domains (Cube and Scene) contain different tasks that are long-horizon and require composition of multiple skills by design. For example, one of the tasks in Scene requires the robotic arm to 1) press a button to unlock a drawer, 2) open the unlocked drawer, and 3) pick an object and place the object in the drawer, and 4) close the drawer. The two additional locomotion domains are HumanoidMaze and AntSoccer. HumanoidMaze is more difficult than AntMaze, since it involves controlling a 21-DoF humanoid agent. The tasks have a significantly longer time horizon, with the giant maze requiring up to 4000 environment steps. AntSoccer is also much harder than AntMaze, since the ant needs to navigate to a soccer ball and then dribble it to the goal location. \\n\\nIn the challenging HumanoidMaze domain, our method is often the only method that achieves non-zero success rate on the four most difficult mazes. On manipulation tasks, our method consistently outperforms all prior methods on all domains with the only exception on Scene where one of the baselines (Offline w/ HILP) performs better. It is worth noting that Offline w/ HILP is a baseline that we introduced to also leverage offline data twice, both during offline and online learning with the only difference being that the unsupervised skill pre-training algorithm is HILP (instead of using trajectory VAE). This further demonstrates that the principle of leveraging offline data for both skill pre-training and online learning is effective. The effectiveness of our method across seven domains further highlights the importance of a careful combination of skill pre-training and effective online learning that utilizes the offline data.\\n\\nWe would be happy to add additional experiments in the final version if there are specific benchmark tasks that the reviewer could suggest that they believe to be a better test for the method.\\n\\n[1] Park, Seohong, et al. 
\\\"OGBench: Benchmarking Offline Goal-Conditioned RL.\\\" arXiv preprint arXiv:2410.20092 (2024).\\n\\n**Why isn\\u2019t SPiRL used as a baseline for comparison?**\\n\\nWe already compare our method to an improved version of SPiRL. Our baseline \\u201cOnline w/ Traj. Skill\\u201d in Figure 3 is an implementation of SPiRL but with additional improvements such as the addition of RND reward bonus, as well as replacing the KL constraint on the prior with entropy regularization in SAC. These improvements are crucial for online learning to work well in the challenging domains that we experiment with. This can be seen in our ablations for each of these improvements on AntMaze Large (Figure 6: KL, Figure 4, left: RND). We have made additional clarifications in our paper to reflect this.\\n\\n**How does trajectory segment length affect performance?**\\n\\nWe included a set of new sensitivity analysis results in our paper on the AntMaze-Large (Top-right goal). Figure 4, right shows how the performance of our method changes as we increase and decrease the trajectory segment length (skill horizon length), H. When we decreased the length to 2, we found that our method can actually achieve an even higher final performance at the cost of slower initial learning. This is likely due to the fact that having shorter skills in the beginning makes exploration less structured, slowing down learning. Shorter skills allow the high-level policy to stitch them more optimally, improving the final performance. When we increased the skill horizon length to 8, we found that our method can still solve the task, but much more slowly. We used a constant skill length of 4 for all our experiments and we found it to work well across all the domains we tested.\", \"title\": \"Author Response (2/3)\"}", "{\"comment\": \"Thanks for the detailed feedback and insightful comments! To address your questions on our design choices, we provided additional sensitivity analysis on design choices in our algorithm. We also provided more discussions on the contribution of the two uses of offline data. Hope these help provide more insights on how our method works.\\n\\n**Is the algorithm robust to different design choices? How important is the optimistic labelling?**\\n\\nWe include additional sensitivity analysis on the skill horizon (H) and the RND coefficient (the amount of optimism added in optimistic labeling) using the AntMaze-Large (top-right goal) task.\\n\\nOptimistic labeling is important for our method to successfully solve the task. In Figure 5, left, our method with no optimistic labeling ($\\\\alpha=0$) completely fails to solve the task. However, the performance of our method is robust to the value of the RND coefficient as long as it is non-zero \\u2013 the performance remains almost the same for $\\\\alpha \\\\in \\\\\\\\{2, 8, 16\\\\\\\\}$. \\n\\nFor the skill horizon (H), Figure 5, right shows how the performance of our method changes as we increase and decrease the trajectory segment length (skill horizon length), H. We find that while there is some variability across individual tasks (Appendix H, Figure 10), a skill horizon length of 4 generally performs the best. We used a constant skill length of 4 for all our experiments and we found it to work well across all the domains we tested. \\n\\n**Why can we use the offline data trajectories twice?**\\n\\nThe use of offline data during offline pre-training and online learning are along two axes that complement each other. 
Offline pre-training leverages the short horizon behavioral structure in offline dataset whereas online learning leverages the more high-level dynamics information of the environment (e.g., how a state at the current time step $s_t$ may be transformed to a state $H$ steps in the future $s_{t+H}$ via high-level skills). The behavioral structure helps construct the skills and the high-level dynamics information helps stitching/composing these skills together at a higher-level. Since our high-level policy is an off-policy RL agent, it can consume off-policy high-level transitions directly from the offline data to help it stitch/compose the low-level skills. This can be further justified by observing that on the domains where stitching is needed less (e.g., Single Cube tasks and Kitchen-complete where the demonstration of completing the full task is directly available in the offline data), the gap between our method and the Online w/ Traj. Skills baseline (that does not use the offline data trajectories during online learning) is lower.\\n\\n**Hope these address all of your questions and concerns. Thank you again for your time to review and if you have any remaining questions or concerns, please let us know!**\"}", "{\"comment\": \"Thanks for the detailed feedback and insightful comments! For your concern on the novelty of our method, we would like to highlight that important, careful design decisions in our current method enable significant performance gains over baselines whereas the naive combination of prior works falls short. For your concern on the current domains being too monotonous and simplistic, we additionally evaluated our method on four additional domains that are much more challenging and diverse than the three domains in our initial submission. We showed that our method exhibits similar performance gains on most domains. In the HumanoidMaze domain, our method is the only method that can solve all of the tasks.\\n\\n**Novelty.**\\n\\nWe would like to emphasize several key design decisions in our current method that are different from prior methods, which contribute to the performance gains over baselines. \\n\\nFirst of all, all prior work on online learning with skills extracted from offline data simply discards the offline data when learning the high-level policy (e.g., Pertsch et al. (2021), Ajay et al. (2020)). In our experiments, the baseline \\u201cOnline w/ Traj. Skill\\u201d does exactly this (learning trajectory skills from offline data, then learning a high level policy purely from online samples), and is consistently worse than our method that utilizes offline data during online learning (especially on more challenging tasks like the Large and Ultra AntMaze environments in Figure 8 and on all HumanoidMaze tasks in Figure 13). \\n\\nIn addition, our method is not a naive combination of SPiRL (Pertsch et al. (2021)) and ExPLORe (Li et al. (2024)). Pertsch et al. (2021) use a KL constraint between the high-level policy and a state-dependent prior obtained from offline pretraining. We found this design can actually hurt the online performance. We show that a simpler design without the KL constraint works much better. In Figure 6, we compare (Ours (KL)) with our method (Ours) and demonstrate that the final performance and the sample efficiency of the naive combination is much worse. As described in Appendix E, we borrow the policy parameterization from Haarnoja et al. (2018) and adopt a tanh policy parameterization with entropy regularization on the squashed space. 
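For concreteness, a minimal sketch of this kind of squashed-Gaussian skill policy is given below (illustrative only, under our own naming assumptions; it is not the exact implementation referenced above):

```python
# Illustrative sketch of a tanh-squashed Gaussian skill policy with the
# change-of-variables log-prob correction (in the spirit of Haarnoja et al., 2018).
import numpy as np

def sample_skill(mu, log_std, rng):
    """Sample a squashed skill z in (-1, 1) and return its log-probability."""
    std = np.exp(log_std)
    u = mu + std * rng.standard_normal(mu.shape)   # pre-squash Gaussian sample
    z = np.tanh(u)                                 # squashed skill
    # Gaussian log-density of u, summed over skill dimensions
    log_prob_u = -0.5 * np.sum(((u - mu) / std) ** 2 + 2.0 * log_std + np.log(2.0 * np.pi))
    # tanh change-of-variables correction so the log-prob lives on the squashed space
    log_prob_z = log_prob_u - np.sum(np.log(1.0 - z ** 2 + 1e-6))
    return z, log_prob_z

# Example usage: z, logp = sample_skill(np.zeros(8), np.zeros(8), np.random.default_rng(0))
# Entropy regularization then acts on log_prob_z directly (a SAC-style actor loss of the
# form E[alpha * log_prob_z - Q(s, z)]) rather than through a KL penalty toward a
# pre-trained skill prior.
```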
Such design ensures that the online high-level policy is not explicitly constrained to a pre-trained prior, allowing the online policy to learn skill distributions that are more suitable for the task online. It also allows the addition of entropy regularization to the high level policy, which helps exploration. \\n\\nThese careful designs are what make our method extremely stable, sample efficient, and scalable to more complex tasks.\", \"title\": \"Author Response (1/3)\"}", "{\"title\": \"response to authors\", \"comment\": \"Thank you for your comments and addressing my questions. I believe that authors have still not addressed the weaknesses I mentioned, in particular some motivation for the empirical benefit for both uses of offline data is still requested. However, I will maintain my score as I still believe this is a good paper.\"}", "{\"summary\": \"This paper proposes a two-phase framework, SUPE, which leverages data in two stages: first, extracting low-level skills during the offline pre-training phase, and then using these skills and unlabeled data in the online phase to train a high-level strategy for more efficient exploration. Building on prior works like SPiRL [1] and ExPLORe [2], the key contribution of this paper is to integrate unlabeled data with online data to accelerate exploration and training in off-policy reinforcement learning (RL) methods. In the offline pre-training stage, the authors train a set of low-level skills, while in the online phase, they develop a high-level policy by utilizing both online data and relabeled offline data. To assess the method\\u2019s effectiveness, the authors compare SUPE with several baselines using benchmarks such as D4RL, and also discuss its limitations and potential directions for future research.\\n\\n[1] Pertsch, Karl, Youngwoon Lee, and Joseph Lim. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. PMLR, 2021.\\n\\n[2] Li, Qiyang, et al. \\\"Accelerating exploration with unlabeled prior data.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is highly detailed, well-written and provides detailed motivation. The complete code is also provided.\", \"The authors conduct numerous experiments to thoroughly validate their method and address in detail several key issues that I am particularly concerned about, including its scalability, robustness.\"], \"weaknesses\": [\"The overall novelty of this work is somewhat limited, as it builds heavily on existing methods and concepts (mentioned in summary).\", \"Although numerous experiments are conducted, the selected tasks are relatively monotonous and simplistic. The experiments test only two types of tasks: AntMaze and Kitchen.\"], \"questions\": [\"See weakness above.\", \"Given the similarities between SPiRL [1] and this work, apart from the online reinforcement learning stage, why isn\\u2019t SPiRL used as a baseline for comparison (despite the numerous experiments conducted) ?\", \"In the pre-training stage, it would also be valuable to discuss whether trajectory segment length $H$ significantly impacts the method's performance.\", \"I am curious whether using expert data would result in better low-level skills during the pre-training stage.\", \"[1] Pertsch, Karl, Youngwoon Lee, and Joseph Lim. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. 
PMLR, 2021.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing my concerns, especially those regarding statistical significance. In particular, thank you for updating the table so that bolding represents a t-test and for reporting aggregate metrics. I believe this improves the empirical study and the conclusions we can draw from it. I will raise my score accordingly.\\n\\nOne minor point about Figure 4. For the \\\"All Mazes\\\" plot the number of environment steps in each environment should probably be normalized before aggregating them.\"}", "{\"comment\": \"**How does your method compare to using offline-to-online RL methods which have access to reward labels?**\\n\\n\\nIn Figure 7, we show a comparison between our method and two state-of-the-art offline-to-online methods (CalQL [1] and IDQL [2]) on AntMaze-large (top-right goal). We also include a version of our method (Ours (Ground Truth)) where we assume access to the ground truth reward similar to all the offline-to-online RL methods. While Ours performs slightly worse than CalQL since we do not assume access to the offline reward, Ours (Ground Truth) performs better than CalQL with a much faster initial learning thanks to structured exploration using pre-trained skills.\\n\\n\\n[1] Nakamoto, Mitsuhiko, et al. \\\"Cal-QL: Calibrated offline RL pre-training for efficient online fine-tuning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n\\n[2] Hansen-Estruch, Philippe, et al. \\\"IDQL: Implicit Q-learning as an actor-critic method with diffusion policies.\\\" arXiv preprint arXiv:2304.10573 (2023)\\n\\n\\n## Questions\\n\\n\\n**What should I take from the coverage results?**\\n\\n\\nSince we DO NOT have reward labels in the dataset and the reward information is NOT given to the agent in the beginning of the online phase, on AntMaze, the agent must find the goal first before it can learn to reach it consistently. The coverage metric provides information on how quickly the agent is able to explore the maze. However, we do agree that the coverage plot adds little in addition to Table 1 (which already provides insights on how quickly each method can find the goal). We have moved it to the Appendix in the interest of space.\\n\\n\\n**What do the bold numbers represent in Table 1?**\\n\\n\\nThe table values are bolded if their confidence intervals overlap with the confidence interval of the best method (determined by the mean value) in each row. We included additional clarification in our paper.\\n\\n\\n**Typos and minor comments**\\n\\n\\nThanks for the detailed comments! We have fixed all the typos mentioned. We also moved the last paragraph of Section 5.3 to the beginning of Section 5.4. We also added a clarification in the paper about what the solid line indicates. It represents the mean value across seeds. We also improved all the figures such that the colors of the legend text match the line colors. \\n\\n\\n**Hope these address all of your concerns. Thank you again for your time to review and if you have any remaining questions or concerns, please let us know!**\", \"title\": \"Author Response (2/2)\"}", "{\"comment\": \"Thanks again for your time in reviewing. 
Sorry for sending multiple messages, but would you mind expanding in more detail why you believe the contribution of this paper is limited?\\n\\nIf novelty is still the main concern, we would like to emphasize that **none of the prior works have effectively used offline data twice for both skill pre-training and online learning**. Our novelty lies in the observation that offline data can be simultaneously used for both skill pre-training and as additional off-policy data (which none of the prior works have observed), as well as the careful design choices that lead to strong performance in practice. ExPLORe only uses the offline data during online learning and does not perform pre-training, and prior skill-based methods did not even consider using offline data as additional prior data during online learning. In prior works, the usage of unlabeled offline data has been completely disjoint: either fully offline for skill pre-training or fully online as additional data for online learning. We show that precisely this combination is what differentiates our method in terms of performance. For example, in the HumanoidMaze environment, all of the prior methods completely failed on the more difficult Large and Giant mazes with near 0 success rate throughout training, but our method, with this combination, solves all of these tasks with high success rate (from 55\\\\% to 80\\\\%).\\n\\nIn addition, we provided the necessary design details (e.g., removing the KL) that enables our method to achieve such sample efficiency, and we showed empirically that this design is beneficial. While our paper does not focus on these design details, we believe it is still a very valuable contribution to the community because **it, along with using offline data for both pre-training and as additional data for online RL, enables online RL to operate at a level of sample efficiency significantly better than previous state-of-the-art.**\\n\\n**We hope this can help address your concerns, and if there are any new experiments or specific questions that could help us improve the paper further, please let us know! Thanks again!**\"}", "{\"comment\": \"Thanks for your time in reviewing and for the quick response. Regarding the novelty, we would like to emphasize that **none of the prior works have effectively used offline data twice for both skill pre-training and online learning**. Our novelty lies in the observation that offline data can be simultaneously used for both skill pre-training and as additional off-policy data (which none of the prior works have observed), as well as the careful design choices that lead to strong performance in practice. ExPLORe only uses the offline data during online learning and does not perform pre-training, and prior skill-based methods did not even consider using offline data as additional prior data during online learning. In prior works, the usage of unlabeled offline data has been completely disjoint: either fully offline for skill pre-training or fully online as additional data for online learning. We show that precisely this combination is what differentiates our method in terms of performance. 
For example, in the HumanoidMaze environment, all of the prior methods completely failed on the more difficult Large and Giant mazes with near 0 success rate throughout training, but our method, with this combination, solves all of these tasks with high success rate (from 55\\\\% to 80\\\\%).\\n\\nIn addition, we provided the necessary design details (e.g., removing the KL) that enables our method to achieve such sample efficiency, and we showed empirically that this design is beneficial. While our paper does not focus on these design details, we believe it is still a very valuable contribution to the community because **it, along with using offline data for both pre-training and as additional data for online RL, enables online RL to operate at a level of sample efficiency significantly better than previous state-of-the-art.**\\n\\n**We hope this addressed your concerns, and if there are any new experiments or specific questions that could help us improve the paper further, please let us know! Thanks again!**\"}", "{\"comment\": \"I thank the authors for their efforts in addressing my questions. I have read your response and promise to review the changes carefully.\"}", "{\"comment\": \"1. **How was the RND coefficient set?** The sensitivity analysis on $\\\\alpha$ and horizon $h$ are a good start. However, I am not sure that evaluating on a single task is enough. I would encourage the authors to evaluate on more tasks and report aggregate metrics (e.g. IQM) with stratified confidence intervals following [Rliable](https://github.com/google-research/rliable). I am also not sure that a horizon of $h=4$ really strikes a good balance. The results in AntMaze Large suggest that horizon h=2 performs significantly better? Again, the authors should include results on more tasks.\\n2. **Additional datasets for AntMaze** This is a very nice addition to the paper, thank you for adding it.\\n3. **How does your method compare to using offline-to-online RL methods which have access to reward labels?** Again, this is a good start, but I do not trust the results obtained in a single task. Please add more tasks to improve the empirical study.\\n4. **What do the bold numbers represent in Table 1?** I do not think it is OK to bold based on overlapping confidence intervals as this does not indicate statistical significance. Please perform a t-test.\\n\\nI thank the authors for their hard work in addressing my comments. Whilst I think the paper has improved, I still have concerns regarding statistical significance in reporting results, so I will maintain my score.\"}", "{\"metareview\": \"This paper proposes using offline data to first extract a low-level skill policy $\\\\pi(a|s, z)$, and then learn a high-level policy $\\\\psi(a | s, z)$ that combines them during the online phase.\\n\\nStrengths\\n1. Impressive gains over several baselines in Figure 3.\\n2. Authors added new results that show gains over offline-to-online RL approaches in Figure 8 which is a common approach.\", \"weakness\": \"1. The approach doesn't introduce a major idea although I would say it skillfully uses existing ideas.\\n\\n2. Most of the experiments are on state-based observations and many of them are grid-based. For a paper for ICLR 2025, I think the domains to be addressed should be closer to real-world challenges especially given the progress in the ML community at large.\\n\\n3. The approach depends on the quality of the data. 
While offline data is unlabeled, meaning there are no rewards, which is nice compared to offline RL approaches, it still requires access to semantically meaningful trajectories from which skills can be extracted. E.g., if the trajectories are random walks, then such an approach wouldn't work. Where can we expect to find such data? If the approach used something such as video data which are more abundantly available, then it would make the approach more practical. \\n\\nOverall, I think this paper uses existing ideas in RL to make impressive improvements over a variety of common benchmarks. The main concern is a mismatch with real-world problems where high-quality datasets may be hard to get. Further, the experiments are not on real-world problems which matters more for a paper where the core contribution is a skillful combination of existing ideas. One way to address this is to use more visual environments. Authors have also added a lot of new experiments during the rebuttal period that need more scrutiny. E.g., Figure 8 compares offline-to-online RL approaches to the proposed approach, however, the proposed approach requires high-quality data while offline RL can work with low-quality and high-coverage data. Further, it is not clear to me how authors claim gains over Cal-QL in 6/7 domains in Figure 8. For both medium play and medium diverse both cal-ql (green) and the proposed approach (black) achieve a return of 1. Also, the green plots are truncated in some places as they are read from the previous papers. For a fair comparison, all results should have the same number of steps. \\n\\nFor now, I am recommending a rejection with strong encouragement to submit again with more experiments and add more clarity for the results.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised the following main concerns:\\n\\n1. Lack of novelty (reviewer t68W): Authors addressed that their approach isn't a trivial combination of ExPLORe and SPiRL and demonstrate that the naive combination doesn't work as well. Specifically, the authors claim that no past work uses the offline data twice -- once for learning low-level policy during pre-training and a second time during the online phase. I think this is a less important point and I have discounted the novelty concerns. However, I was expecting experiments in more realistic environments.\\n\\n2. Lack of comparison with baselines and domains: Authors have added a comparison with offline-to-online approaches in Figure 8 although I don't know how they read success from this. Authors have also added other ablations such as the RND coefficient.\\n\\n3. Concerns that the dataset should be of high quality: Authors agree that when the data is low-quality then their approach doesn't work as one would expect from looking at the approach. I think this limitation is fine provided authors can justify more where they would hope to get such data in practice. I don't think authors tackle this problem as they mostly rely on existing simulated benchmarks.\\n\\nOverall, my main addressable concerns are experiments being unclear (Figure 8) and the lack of more realistic problems (e.g., including more domains from OGBench can help). Second is that the approach relies on data being of high quality. 
It would be nice to show a comparison with offline-to-online approaches over different qualities of datasets, and/or argue where this dataset can be found in practice.\"}", "{\"comment\": \"**Can expert data lead to better skills?**\\n\\nWe conducted an additional set of experiments that use offline data with different qualities and reported the results in Figure 16. Among these datasets, the first one is an expert dataset collected by rolling out an expert policy. We find that our method does not have an advantage over non-skill-based methods (e.g., ExPLORe) and even learns worse than the setting where a more diverse dataset is provided. We hypothesize that it is because completely expert data is very narrow and can lead to a very narrow set of pre-trained skills that may not behave well in states that are out of the distribution of the offline data. This in turn can harm online learning as the high-level policy can have trouble finding low-level skills that can correct the agent from these out-of-distribution states.\\n\\n**Hope these address all of your questions and concerns. Thank you again for your time to review and if you have any remaining questions or concerns, please let us know!**\", \"title\": \"Author Response (3/3)\"}", "{\"comment\": \"Thank you again for your time to review our paper and read over our response. Could you be more specific about your concerns about the paper's novelty and how we did not sufficiently address your concerns in our response? We have shown with our experiments that our method outperforms prior approaches across **seven domains**, with careful ablations showing that our key algorithm designs (e.g., using the offline data twice, removing the KL) are crucial in achieving the performance gain. We have also included new sensitivity analysis experiments (Figure 5) that provide more insight into our algorithm. **If there are any new experiments or specific questions that could help improve the paper, please let us know!**\"}", "{\"title\": \"General Response\", \"comment\": \"We would like to thank all the reviewers for the insightful reviews and detailed feedback. We have responded to each reviewer individually for their specific concerns and questions and updated the submission PDF with the changes in blue color.\\n\\n\\nIn case the reviewers would like to read about our responses to other reviewers and other new experiments we conducted, we provide a summary below.\\n\\n\\n**Concern 1: Limited novelty. The proposed method makes limited progress.**\\n\\nFirst of all, our method is not just a naive combination of ExPLORe [1] and SPiRL [2]. We showed in Figure 7 that the naive combination (ours with KL) is worse in both sample efficiency and the final performance on AntMaze-Large. SPiRL uses a KL constraint between the high-level policy and a state-dependent prior obtained from offline pretraining. We found the KL constraint to hurt performance. Our implementation removed the KL constraint and replaced it with entropy regularization on a squashed tanh space.\\n\\nIn addition, our method makes significant improvements over all prior methods. We now evaluate our method on 7 domains (42 tasks in total!), and show that we outperform all baselines (Figure 3) on each domain except Scene. On Scene, the baseline that outperforms our method is a baseline that we introduce that leverages offline data twice (one of the key ideas behind our method), both during offline and online learning. 
In addition, on the HumanMaze domain, our method is the only method that solves all tasks whereas all other methods fail almost completely. \\n\\nThese additional results demonstrate the progress that our method has made in making RL algorithms more effective at leveraging unlabeled offline data to accelerate online RL to the level that none of the prior methods were able to achieve. \\n\\n[1] Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, and Sergey Levine. Accelerating exploration with unlabeled prior data. Advances in Neural Information Processing Systems, 36, 2024.\\n\\n[2] Pertsch, Karl, Youngwoon Lee, and Joseph Lim. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. PMLR, 2021.\\n\\n**Concern 2: Ablation is lacking.**\\n\\nWe have added additional sensitive analysis experiments for our method. Figure 5 shows the performance of our method under different RND coefficient values (for optimistic labeling) and different skill horizon length. While it is crucial to use a non-zero RND coefficient value to encourage exploration, the performance of our method is robust to changes in RND coefficient values. We use a fixed RND coefficient for all our experiments across all the domains. We also use a fixed horizon length (H=4) for all our experiments across all the domains. \\n\\n\\n**Other New Experiments:**\\n\\n\\n*Ablation experiment where the ground truth reward is available in the offline dataset. How does our method compare to offline-to-online RL methods? (Figure 8).*\\n- We found our method (with access to the ground truth reward), despite not designed for this setting, to even outperform SOTA offline-to-online RL methods on 4 AntMaze tasks and two of the three Kitchen tasks. This further highlights the effectiveness of our method.\\n\\n\\n*Experiments on D4RL Play datasets for AntMaze. Is the conclusion different from the Diverse datasets that we used in our initial submission? (Figure 15)*\\n- We found that on the Play datasets, our method also outperforms all baselines. The conclusion from the Play datasets is similar to the conclusion from the Diverse datasets.\\n\\n\\n*Experiments on datasets with different qualities. When do we expect our method to work/fail? (Figure 19)*\\n- We found that on an extremely exploratory dataset with largely random actions (plot on the left in Figure 19), our method is not able to extract meaningful skills, and fails to learn the task. \\n- Our method is the most suitable for datasets where there exists segments of meaningful behaviors (two middle subplots in Figure 19)\\n- Our method does not have an advantage over baselines on expert datasets (plot on the right in Figure 19).\\n \\n**We would like to thank all the reviewers again for their time. If there are any remaining questions or concerns, please let us know! We would be happy to run any additional experiments that you think might improve our paper!**\"}", "{\"comment\": \"Thanks again for your time to review and thanks for increasing the score. We have included some new analysis experiments (Figure 5) that provide more insights to the sensitivity of the hyperparameters in our method. Hope these experiments could further strengthen our paper. **If you have specific concerns or additional experiments that you would like to see that could improve our paper, please let us know!**\"}", "{\"title\": \"Follow-up on the new experiments\", \"comment\": \"Thanks for your patience! 
We would like to give an update on the latest experimental results for our sensitivity analysis (Section 5.5 - Figure 5: aggregated, Appendix H, Figure 9 and 10: individual domains) and ground-truth reward experiments (Appendix G, Figure 8).\\n\\n**Sensitivity Analysis**\\n\\nOur sensitivity analysis experiments were done and we have reported our results (aggregated metrics with IQM and stratified confidence interval following rliable over **7 tasks**) in Figure 5. Our sensitivity analysis shows that while having non-zero RND coefficient value is important, our method is not very sensitive to the RND coefficient value. For the skill horizon length, we find that H=4 is the best (among other values H=2 and H=8) when aggregating over tasks. *Ours with H=2* only did well on the AntMaze task in terms of the final performance. On all other tasks, *Ours with H=2* performs worse than *Ours with H=4* throughout training.\\n\\n**Comparison with Offline-to-Online Methods with Ground Truth Reward**\\n\\nOur ground-truth reward experiments were not fully completed by the PDF update deadline, but we have included the results we have so far in Figure 8 in the Appendix. We now have seven tasks (4 AntMaze tasks and 3 Kitchen tasks). The curves for both IDQ and CalQL are taken from the paper. IDQL only includes training curves for AntMaze Large. CalQL includes curves for all seven tasks, but for Kitchen they only have results for update-to-data ratio, UTD=1, while our method is UTD=20. Given that there was less than a day from the reviewer response to the PDF deadline, we did not have sufficient compute to reproduce IDQL and CalQL curves on all seven environments with the same UTD as our method. We are working these experiments right now with UTD=20 for a more fair comparison and we will provide a follow-up update once the results are out.\\n\\nAcross all four AntMaze tasks and two of the three Kitchen tasks, our method is able to outperform SOTA offline-to-online methods Cal-QL and RLPD. On the AntMaze-Large-Diverse and AntMaze-Large-Play tasks where we have results for IDQL, our method is better than IDQL.\\n\\n **Hope this addresses most of your concerns. We will post a follow-up update once the experiments for the offline-to-online comparison are fully completed. We hope that the current offline-to-online experiments still provide enough evidence that our method is effective, even under a setting that our method is not designed for. Thank you again for your time to review and if you have any other remaining questions or concerns, please let us know!**\"}", "{\"summary\": \"This paper presents SUPE, a method for using offline data (without rewards) in the online reinforcement learning setting. SUPE first extracts a set of low level skills using the offline data, and then optimistically labels the offline trajectories. It then uses an off policy high level update to update on a mix of offline (pseudo labeled trajectories) and online real trajectories. 
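(To make the summarized data flow concrete, the following is a rough, illustrative sketch of how reward-free offline trajectories could be turned into optimistically labeled high-level transitions; the function and field names here are assumptions for illustration, not the paper's actual API.)

```python
# Illustrative sketch only: converting a reward-free offline trajectory into high-level
# off-policy transitions with optimistic (e.g., RND-bonus-style) reward labels.
def relabel_offline_trajectory(traj, encode_skill, optimistic_reward, horizon=4):
    """traj: dict with 'obs' of shape (T+1, obs_dim) and 'actions' of shape (T, act_dim)."""
    transitions = []
    num_steps = len(traj["actions"])
    for t in range(0, num_steps - horizon + 1, horizon):
        obs_seg = traj["obs"][t : t + horizon + 1]
        act_seg = traj["actions"][t : t + horizon]
        z = encode_skill(obs_seg, act_seg)                   # latent skill for this segment
        r = sum(optimistic_reward(o) for o in obs_seg[1:])   # optimistic pseudo-reward label
        transitions.append(
            {"obs": obs_seg[0], "skill": z, "reward": r, "next_obs": obs_seg[-1]}
        )
    return transitions  # mixed with online transitions in the high-level off-policy update
```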
The paper empirically validates the new algorithm on three environments and does ablations on amounts of offline data.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"This paper demonstrates an insightful empirical benefit of using trajectories twice, for both low-level skill pretraining and optimistic labelling.\", \"The paper thoroughly evaluates the proposed method.\", \"The paper does a good job explaining the proposed method and its significance.\"], \"weaknesses\": [\"This paper could benefit from a bit deeper analysis of the contribution of the two uses of offline data. It's clear that both are necessary, but not necessarily why.\"], \"questions\": [\"Where do the authors think their empirical benefit is coming from? Why can we use trajectories twice?\", \"Is the algorithm robust to different design choices?\", \"How important is the optimistic labelling (from Li et al.)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**Importance of Reusing Prior Data Twice**\\n\\nWe would like to clarify that the baseline Online w/ Traj. Skill only uses the offline data for unsupervised skill pretraining and the baseline ExPLORe only uses the offline data during online learning and does not have a skill pre-training phase. In Figure 3, we show that none of these baseline methods that only use the offline data once can achieve good performance. Our method consistently beats these two baselines, demonstrating that reusing the prior data twice is crucial.\\n\\nIn addition, one of the baselines that we introduce in this work, Online w/ HILP Skills, also only uses the offline data for unsupervised skill pretraining. We apply the same principle of reusing the prior data twice to this baseline, which leads to the HILP w/ Offline Data baseline. Across domains, HILP w/ Offline Data consistently outperforms Online w/ HILP Skills, further highlighting the benefits of reusing the prior data during online learning.\\n\\n**Trajectory-Segment VAE**\\n\\nThe trajectory-segment VAE is a common design choice for extracting a latent space of skill policies offline, adopted by a range of prior works, and has shown effectiveness in accelerating RL [1-2]. While such a design is certainly not the only approach that can extract useful skills from offline data, it is the simplest formulation that we found to be effective. In addition, the trajectory encoder in the VAE allows us to conveniently transform the offline data into high-level off-policy data such that it can be readily used by the actor-critic RL agent online as additional off-policy data, allowing us to use the offline data twice. Moreover, the idea of \\u201creusing the prior data twice\\u201d can potentially be applied to other unsupervised skill pre-training algorithms. In our work, we present one alternative where we use HILP [3], a recently proposed offline unsupervised skill pre-training method, and implement two baselines. The first baseline \\u201cOnline w/ HILP Skill\\u201d is the naive version that does not use the prior data twice (the online learning does not use the offline data as additional off-policy data). 
The second baseline \\u201cHILP w/ Offline Data\\u201d is the version that does use the prior data twice and we observe that the second baseline (that uses the data twice) performs consistently better than the first baseline (that only uses the data once) across all the domains (Figure 3).\\n\\n[1] Pertsch, Karl, Youngwoon Lee, and Joseph Lim. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. PMLR, 2021.\\n\\n[2] Ajay, Anurag, et al. \\\"Opal: Offline primitive discovery for accelerating offline reinforcement learning.\\\" arXiv preprint arXiv:2010.13611 (2020).\\n\\n[3] Park, Seohong, Tobias Kreiman, and Sergey Levine. \\\"Foundation policies with hilbert representations.\\\" arXiv preprint arXiv:2402.15567 (2024).\\n\\n\\n**Hope these address all of your questions and concerns. Thank you again for your time to review and if you have any remaining questions or concerns, please let us know!**\", \"title\": \"Author Response (2/2)\"}", "{\"summary\": \"This paper presents a pre-training method for reinforcement learning (RL) that can train on data sets that do not contain reward labels, i.e., the data sets are unlabeled.\\nThe problem setting resembles offline-to-online RL, except that there are no rewards in the data set.\\nIn the pre-training stage, the authors propose to learn a set of skills from this unlabeled offline data.\\nThen, in the online fine-tuning state, the authors learn a high-level policy that selects which skill to use in a given state.\\nThey utilize the unlabeled offline data during fine-tuning by learning an optimistic reward model and using it to add optimistic reward labels to the offline data.\\nThey evaluate their method in the D4RL AntMaze and Kitchen benchmarks as well as the D4RL Visual AntMaze.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Overall, I found the paper easy to follow and I think it is addressing an important problem -- pre-training in RL -- which is of interest to the community.\\n\\nThe results demonstrate that learning skills from offline data is a promising approach to leverage reward-free offline data.\\nI think this is an interesting result.\\nI also like the idea of labelling the offline data using a learned reward function.\", \"weaknesses\": \"The authors consider the setting of having access to offline data but no reward labels.\\nWhilst I see the value in this problem setting, it is not clear if practitioners should opt for this method over standard offline-to-online RL methods when\\ntheir data sets contain reward labels.\\nWhilst I appreciate this is out-of-scope, ideally methods would leverage data sets both with and without reward labels.\\nIt would be insightful if the authors could compare to offline-to-online RL methods which do leverage reward labels.\\nWhilst I do not expect their method to outperform these methods, I think it is an important baseline that we can gain insights from.\\n\\nIn my experience, optimistic-based exploration methods are very susceptible to the $\\\\alpha$ parameter.\\nHow was this set in practice?\\nDid it require a grid search to find the best value in each environment?\\nPlease can you provide details on any hyperparameter tuning process, including the range of values tested and how sensitivity varied across environments?\\nThis information would be valuable for reproducibility and understanding the robustness of the method.\\n\\nIs there a reason the authors only considered the diverse data set for the 
AntMaze experiments?\\nDoes this method require a diverse offline data set collected by an unsupervised RL method,\\nor can it leverage narrow offline data distributions? For example, data from solving a different task?\\nHow does the method perform when using the AntMaze \\\"play\\\" data set instead of the \\\"diverse\\\" data set?\\nEven if the method performs poorly, I think it would be valuable to include these results.\\n\\nI am not sure what to take from the coverage results.\\nI can understand why we care about coverage in unsupervised RL where our sole purpose is to explore.\\nHowever, during online training our goal is to balance exploration vs exploitation.\\nPlease can the authors provide a clearer justification for why coverage is an important metric in this context, or include additional plots that more directly show the relationship between exploration and task performance, such as the normalized return vs coverage?\\n\\nIn Table 1, what do the bold numbers represent? The authors should state what statistical test was used for the bolding or at least expla8in what the bolding represents.\\n\\n## Minor comments and corrections\\n- Line 42 - \\\"can broken\\\" should be \\\"can be broken\\\"\\n- Line 117 - \\\"of an offline data\\\" should be \\\"of offline data\\\"\\n- Line 200 - the term \\\"latent code\\\" is misleading. This suggests the trajectory encoder learns to map trajectories to discrete codes from a codebook and I don't think this is the case. The authors should change it to something like \\\"latent skill\\\".\\n- Line 279 - Should \\\"Three AntMaze layouts with four different goal location configuration each.\\\" be \\\"Three AntMaze layouts with four configurable goal locations each.\\\"\\n- Line 411-414 - It would make more sense for this paragraph to be at the start of Section 5.4.\\n- Line 407 - \\\"Kitchen the domain\\\" should be \\\"Kitchen domain\\\"\\n- Line 408 - \\\"more challenging the kitchen-mixed\\\" should be \\\"more challenging kitchen-mixed\\\"\\n- Figures - The authors have stated that the shaded area indicates the standard error. They also need to state what that solid line indicates. Is it the mean, median, etc?\\n- Figures - I found the figures very hard to read. I would suggest the authors colour the text \\\"HILP w/ Offline Data\\\", \\\"Ours\\\", \\\"Online w/ HILP Skills\\\", etc, to match the colours of the lines in the plots. This would make the text/figures much easier to read.\", \"questions\": [\"How does your method compare to using offline-to-online RL methods which have access to reward labels?\", \"How was the $\\\\alpha$ hyperparameter set?\", \"Why did you not compare to other types of offline data sets?\", \"What should I take from the coverage results?\", \"In Table 1, what do the bold numbers represent?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for the detailed reply, especially the data quality part. However, I will keep my score as I still have concerns about the paper's novelty.\"}", "{\"summary\": \"The paper proposes a hierarchical policy for leveraging unlabeled offline data for exploration. In the offline stage, low-level skills are extracted, and in the online stage, these skills are reused and a high-level policy is learned with optimistic rewards. 
The proposed method is tested on maze and manipulation tasks and shows good performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to understand.\", \"The paper proposes a simple method for leveraging offline data and showing good performance on AntMaze, visual AntMaze, and Kitchen tasks.\", \"The paper conducts thorough experiments and compares a set of different methods.\"], \"weaknesses\": [\"Dependence on offline data quality: The performance of the proposed method is influenced by the quality of the offline data and the specific features of the evaluation tasks. In particular, the approach relies on a high-level policy that is updated every\", \"\\ud835\\udc3b timesteps and keeps the pre-trained skill and trajectory encoder fixed during the online phase. This limitation constrains adaptability, especially in scenarios where task distribution varies from the offline data.\", \"Limited discussion on Hierarchical Reinforcement Learning (HRL): Although hierarchical policy structures have been extensively explored in the HRL literature [1-8] and are closely related to the paper, the paper does not sufficiently address relevant findings from HRL research. A more comprehensive discussion of how this work could provide valuable context.\", \"Novelty: The paper combines elements from ExPLORe and trajectory-segment VAE to leverage offline data for exploration, but adds limited new insights beyond prior work. HRL emphasizes hierarchical structures, and the benefits of skill extraction in offline settings have already been documented. This paper simply applies existing solutions to ExPLORe.\"], \"the_paper_could_be_improved_in_several_aspects\": \"- Refinement of skill extraction method: Currently, skills are extracted based on fixed-length trajectory segments, a method that may overlook important nuances in skills. A more flexible or adaptive approach could address these limitations, potentially enhancing the robustness of the extracted skills.\\n- Skill adaptation during the online stage: The method does not allow for online adaptation of the skill policy or trajectory encoder. Due to potential distributional shifts between the offline and online data, enabling adaptive updates to the skill set and encoder could further improve the performance.\\n- Training stability in HRL is often affected by interactions between high-level and low-level policies. This work could benefit from discussing how offline data might address or mitigate these stability challenges.\\n\\n[1] Kulkarni, Tejas D., et al. \\\"Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation.\\\" Advances in neural information processing systems 29 (2016).\\n\\n[2] Xie, Kevin, et al. \\\"Latent skill planning for exploration and transfer.\\\" arXiv preprint arXiv:2011.13897 (2020).\\n\\n[3] Nachum, Ofir, et al. \\\"Data-efficient hierarchical reinforcement learning.\\\" Advances in neural information processing systems 31 (2018).\\n\\n[4] Bacon, Pierre-Luc, Jean Harb, and Doina Precup. \\\"The option-critic architecture.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 31. No. 1. 2017.\\n\\n[5] Ajay, Anurag, et al. \\\"Opal: Offline primitive discovery for accelerating offline reinforcement learning.\\\" arXiv preprint arXiv:2010.13611 (2020).\\n\\n[6] Gehring, Jonas, et al. 
\\\"Hierarchical skills for efficient exploration.\\\" Advances in Neural Information Processing Systems 34 (2021): 11553-11564.\\n\\n[7] Dalal, Murtaza, Deepak Pathak, and Russ R. Salakhutdinov. \\\"Accelerating robotic reinforcement learning via parameterized action primitives.\\\" Advances in Neural Information Processing Systems 34 (2021): 21847-21859.\\n\\n[8] Paraschos, Alexandros, et al. \\\"Probabilistic movement primitives.\\\" Advances in neural information processing systems 26 (2013).\", \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BfUugGfBE5
Distilling Reinforcement Learning Algorithms for In-Context Model-Based Planning
[ "Jaehyeon Son", "Soochan Lee", "Gunhee Kim" ]
Recent studies have shown that Transformers can perform in-context reinforcement learning (RL) by imitating existing RL algorithms, enabling sample-efficient adaptation to unseen tasks without parameter updates. However, these models also inherit the suboptimal behaviors of the RL algorithms they imitate. This issue primarily arises due to the gradual update rule employed by those algorithms. Model-based planning offers a promising solution to this limitation by allowing the models to simulate potential outcomes before taking action, providing an additional mechanism to deviate from the suboptimal behavior. Rather than learning a separate dynamics model, we propose Distillation for In-Context Planning (DICP), an in-context model-based RL framework where Transformers simultaneously learn environment dynamics and improve policy in-context. We evaluate DICP across a range of discrete and continuous environments, including Darkroom variants and Meta-World. Our results show that DICP achieves state-of-the-art performance while requiring significantly fewer environment interactions than baselines, which include both model-free counterparts and existing meta-RL methods.
[ "reinforcement learning", "in-context learning" ]
Accept (Poster)
https://openreview.net/pdf?id=BfUugGfBE5
https://openreview.net/forum?id=BfUugGfBE5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z5xGeDc3mt", "yDuWROlrdu", "qlu0DkDl69", "d1PYqW0Nnh", "XSpOfDumOE", "WZ7OrJuqQz", "VPXMNgz7hg", "UlrjhrVfG4", "QPucoZoqMo", "OJ6l3fXsen", "HgyLKxHKhd", "DO1PSHh1c2", "CXvRkh0Nwt", "9udnfw4VwJ", "71z4EYE3ql", "3CwWnNm3GA", "2kzn5dQhjJ", "0QmZ37CDY9", "0DspgVjin0" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732016382544, 1732495601828, 1732016081100, 1732240249808, 1732141828014, 1732240208348, 1732015320174, 1730051306549, 1732597734438, 1737523482501, 1732504535333, 1735316009555, 1732508908010, 1730638293341, 1732552138806, 1730220776984, 1732014761700, 1732315606511, 1732368085482 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_XR4p" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_zBei" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_jXdx" ], [ "ICLR.cc/2025/Conference/Submission2042/Area_Chair_qpdG" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_jXdx" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_XR4p" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_XR4p" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ], [ "ICLR.cc/2025/Conference/Submission2042/Reviewer_zBei" ], [ "ICLR.cc/2025/Conference/Submission2042/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author Response to Reviewer zBei\", \"comment\": \"### MCTS and Planning Tree\\n\\n> Q. Would MCTS perhaps have benefits relative to beam search?\\n\\nMCTS is a sophisticated planning algorithm that is well-established in the model-based RL literature. By iteratively simulating outcomes from various nodes and making decisions based on aggregated results, MCTS tends to perform well in long-horizon planning and is particularly effective in stochastic dynamics. Additionally, its adaptive node expansion, rather than relying on a fixed number of leaf nodes, makes it especially suitable for larger action spaces.\\n\\nHowever, we opted for beam search in our work because its structure aligns well with the Transformer architecture. Beam search enables parallelized decoding, sorting, and slicing operations across beams, enhancing computational efficiency in such settings. Moreover, we can effectively manage GPU memory usage by adjusting the beam size, which provides practical benefits when deploying our method. Ultimately, the choice of planning algorithm depends on the environment and the computational budget.\\n\\n> Q. Is there a way to not build the whole planning tree as an initial step, or what is the advantage to doing so?\\n\\nIn our approach, we avoid constructing the entire planning tree at the outset by pruning planning paths at every planning step. 
Specifically, planning paths are ranked using the predicted return as a value function, which is estimated by the Transformer. This value function integrates seamlessly with MCTS or other tree search algorithms, allowing us to circumvent the need to build an exponentially growing planning tree while maintaining computational efficiency and performance.\\n***\\n\\n### Inaccuracy of World Model\\n\\n> Q. Are there scenarios where having a world model might detract? For example, what happens if the world model is not accurate enough?\\n\\nIf the environment dynamics shift, the learned world model may not plan effectively, potentially leading to a suboptimal guidance for the agent. To address this issue, it is a promising direction to develop a mechanism for evaluating the accuracy of the world model at each step to adaptively decide when it should be relied upon. Another potential approach could involve constructing an offline meta-training dataset containing successful learning histories in the scenarios where inaccuracies are likely to occur.\\n***\\n### Inferior Performance in a Benchmark\\n\\n> Q. What are possible explanations for why model-based performs worse on Pick-Out-Of-Hole?\\n\\nModel-based planning generally provides a performance advantage when the world model is sufficiently accurate. The observed suboptimality likely arises from inaccuracies in the learned world model. Successful in-context learning of test dynamics depends on some degree of transferability between the training and test splits, which can vary across tasks. For instance, in tasks like Pick-Out-Of-Hole, the 50 training seeds may lack sufficient diversity to enable effective generalization of world model learning during the test. In such cases, one potential solution is to disable planning and rely on model-free action selection, similar to model-free counterparts. While this approach could mitigate the issue, we chose to omit it in our experiments to maintain consistency with the scope of our work.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewers and AC,\\n\\nWe sincerely appreciate your invaluable feedback, which has been instrumental in improving our paper. In response, we have made the following updates:\\n- Added a computational analysis in Appendix A.\\n- Included an additional experiment on ML10 in Appendix B.\\n- Conducted an ablation study analyzing the impact of world model accuracy on final performance, detailed in Appendix C.\\n\\nWe hope these updates enhance the clarity and completeness of our work.\"}", "{\"title\": \"Author Response to Reviewer zBei\", \"comment\": \"We greatly appreciate Reviewer zBei\\u2019s positive feedback and hope that this discussion contributes to advancing further research endeavors.\\n***\\n\\n### Action Selection Mechanism and Sensitivity\\n\\n> Q. How exactly are the actions sampled, and what is the sensitivity to the sampling approach?\", \"the_action_selection_process_in_our_approach_is_as_follows\": \"the Transformer first predicts a distribution over actions, conditioned on the sequence of past transitions and the current observation. Multiple action candidates are sampled from this distribution. Each candidate action is appended to a duplicated input sequence, and these sequences are processed in parallel by the Transformer to predict the corresponding next observation, reward, and return-to-go. These predictions are further appended to the duplicated input sequences to sample candidate actions for subsequent steps. 
This process is repeated iteratively until a predefined planning horizon is reached. At the end of the process, the best action candidate from the first step is selected and executed, and the entire planning process is repeated. A detailed description of this algorithm, including the planning tree pruning method, can be found in Alg. 2-3 and Sec. 4.\\n\\nRegarding the choice of the sampling distribution, we experimented with several options, such as Gaussian distributions with unit and diagonal covariance matrices. We observed minimal performance differences between these configurations and opted for the diagonal covariance matrix for applicability. While alternative approaches, such as discretizing and sequentially predicting dimensions of the continuous action space (as in prior works), are feasible, we did not pursue them in this work due to their increased sequence length. \\n***\\n\\n### Optimizing Imitation and Dynamics Losses with Separate Models\\n\\n> Q. What do you think would happen if the imitation loss and dynamics loss trained two separate models?\\n\\nAs long as a single model has sufficient representational capacity, we do not anticipate any significant performance difference between using a single model versus separate models for imitation and dynamics losses. We consider a single sequence model to be a more practical choice, offering greater simplicity and flexibility, particularly when scaling, modifying, or deploying the model.\\n***\\n\\n### Further Development of DICP\\n\\n> Q. I would be interested in more discussion of how this method might apply to online learning. For example, how might it interact with intrinsically-rewarded exploration to improve the world model?\\n\\nWe agree that incorporating intrinsic rewards into our approach presents an exciting future direction. Since reward models are learned purely in-context within our framework, we anticipate that diverse transitions driven by intrinsic rewards could significantly enhance the accuracy of world model learning. Additionally, strategies inspired by language model decoding, such as repetition penalties, could potentially be adapted to seamlessly integrate intrinsic rewards into our method, making this an especially promising avenue for exploration.\\n\\n> Q. How much does this method depend on the quality of the offline dataset?\\n\\nThe quality of the offline meta-training dataset is indeed critical, as our framework relies on in-context learning to train both the policy and the world model. Ensuring that the offline dataset captures well-structured and meaningful learning histories from the source algorithm is essential for the success of our method. This is analogous to the field of large language models, where improvements in dataset quality often lead to substantial performance gains.\\n\\n> Q. How effectively would this approach adapt to an environment where the dynamics change?\\n\\nAdapting our method to environments with changing dynamics is another exciting direction for future work. Building on our response to the previous question, we believe that collecting learning histories of source algorithms in environments with **changing dynamics** is essential for enabling effective adaptation at test time. If the Transformer is properly meta-trained on such an offline dataset, it is likely to perform robustly even in the face of dynamic changes.\\n***\"}", "{\"title\": \"Author Response to Reviewer XR4p\", \"comment\": \"### Computation Compared to Previous Work\\n\\n> Q. 
How does the computation of your approach compare to [1]?\\n\\nThe FLOP count per action selection is summarized in the table below. Even the maximum value is negligible on modern GPUs, and the difference becomes even less significant when using architectures like IDT, which are specifically designed to handle longer sequences efficiently. Notably, this favorable trade-off aligns with the current trend of increasing inference-time computation to fully leverage the reasoning capabilities of Transformers, particularly through few-shot [2] and chain-of-thought prompting [3].\\n\\n| Method | Darkroom | Dark Key-to-Door | Darkroom-Permuted | Meta-World |\\n|-------------|----------|------------------|-------------------|------------|\\n| AD | 6M | 20M | 20M | 709M |\\n| DPT | 6M | 20M | 20M | 709M |\\n| IDT | 8M | 8M | 8M | 3M |\\n| DICP-AD | 2G | 18G | 18G | 8G |\\n| DICP-DPT | 2G | 18G | 18G | 8G |\\n| DICP-IDT | 147M | 147M | 147M | 15M |\\n***\\n\\n[1] Michael Laskin, Luyu Wang, Junhyuk Oh, Emilio Parisotto, Stephen Spencer, Richie Steigerwald, DJ Strouse, Steven Stenberg Hansen, Angelos Filos, Ethan A. Brooks, Maxime Gazeau, Himanshu Sahni, Satinder Singh, and Volodymyr Mnih. In-context reinforcement learning with algorithm distillation. In International Conference on Learning Representations, 2023.\\n\\n[2] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Neural Information Processing Systems, 2020.\\n\\n[3] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Neural Information Processing Systems, 2022.\"}", "{\"comment\": [\"The authors mention the effect of inaccurate world models but couldn't notice it in the results. Do you have any results where DICP didn't perform well due to this? Are there any other limitations?\", \"If the difference between [1] and your approach lies in how actions are selected and if your approach uses the DICP subroutine then it may be efficient, when mimicking the source algorithm is inefficient. But if the world model is inaccurate and the mimicking source algorithm is efficient then would it make DICP inefficient?\", \"In extension to this question: How often do inefficiencies of source algorithms cause inefficient learning in [1]?\", \"With enough data for in-context learning, would this problem persist?\", \"What may be other alternatives to a model-based planning approach to solve this problem and why would model-based planning be a better solution to this problem?\", \"How does the computation of your approach compare to [1]?\", \"[1] Laskin M, Wang L, Oh J, Parisotto E, Spencer S, Steigerwald R, Strouse DJ, Hansen S, Filos A, Brooks E, Gazeau M. In-context reinforcement learning with algorithm distillation. arXiv preprint arXiv:2210.14215. 2022 Oct 25.\"]}", "{\"title\": \"Author Response to Reviewer XR4p\", \"comment\": \"We sincerely appreciate reviewer XR4p\\u2019s additional feedback.\\n***\\n\\n### Effect of Incorrect World Models\\n\\n> Q. 
The authors mention the effect of inaccurate world models but couldn't notice it in the results. Do you have any results where DICP didn't perform well due to this?\\n\\nTo address the reviewer\\u2019s inquiry regarding the direct relationship between world model accuracy and performance, we conducted an additional experiment using a **scripted** world model. This model generates perfect predictions with $1-\\\\epsilon$ probability and random predictions with $\\\\epsilon$ probability. The table below presents the episode rewards after 50 episodes, showing how the performance of DICP-AD in the Darkroom environment varies with the accuracy of the world model. Importantly, our approach is inherently robust, as it is **lower-bounded** with the \\\"Without Planning\\\" case by avoiding relying on the world model when it becomes unreliable. We designed this experiment to freely manipulate the accuracy of the world model, as the accuracy evolves over time steps in the main experiment, making it difficult to establish a direct relationship between the accuracy and the performance. We will include a related discussion in the revised version of our paper.\\n\\n| $\\\\epsilon$ of Script World Model | Episode Rewards |\\n|-------------|----------|\\n| 0.00 | 15.925 |\\n| 0.05 | 12.175 |\\n| 0.10 | 8.350 |\\n| 0.15 | 6.825 |\\n| 0.20 | 6.825 |\\n| 0.25 | 6.225 |\\n| 0.30 | 4.825 |\\n| Without Planning | 14.825 |\\n\\n\\n> Q. Are there any other limitations?\\n\\nAside from potential inaccuracies in the world model, we believe our method has no notable limitations compared to previous works [1], as our method uses the same data collection process and training parameter size, with only negligible additional computation.\\n***\\n\\n### Regarding Inefficiency\\n\\n\\n> Q. If the world model is inaccurate and the mimicking source algorithm is efficient then would it make DICP inefficient?\\n\\nAn inaccurate world model can indeed lead to suboptimal model-based planning, which may reduce the efficiency of DICP. However, as mentioned earlier, the efficiency of DICP is lower-bounded. Additionally, if the source algorithm employs a highly efficient update rule, it could diminish DICP\\u2019s relative advantage. That said, as long as RL algorithms rely on gradient descent\\u2014given its inherently gradual nature\\u2014we believe there will still be opportunities for DICP to provide meaningful improvements.\\n\\n> Q. How often do inefficiencies of source algorithms cause inefficient learning in [1]?\\n\\n[1] demonstrates that naive distillation of learning histories introduces inefficiencies and that skipping intermediate episodes in these histories can result in faster learning compared to the source algorithm. Furthermore, our results show that combining DICP with [1] still enhances learning performance under the same dataset and parameter size, indicating that [1] retains some inefficiencies. This supports our argument that the inefficiencies of source algorithms **generally** contribute to inefficient learning in naive distillation and [1].\\n\\n> Q. With enough data for in-context learning, would this problem persist?\\n\\nThe scenario described by the reviewer, where sufficient offline data is available for test tasks, is valid but falls outside the scope of our research. If **enough** offline data is available, the significance of online sample efficiency diminishes. 
In such cases, other learning approaches may be more suitable than meta-RL methods designed to enhance online sample efficiency.\\n\\nIn contrast, when only limited offline data is available for test tasks, it could provide the policy with a better starting point for online learning. However, the underlying issue persists beyond this stage, as the learning capability of the distilled algorithm remains unchanged. Consequently, the problem continues to affect subsequent online interactions.\\n\\n> Q. What may be other alternatives to a model-based planning approach to solve this problem and why would model-based planning be a better solution to this problem?\\n\\nAn alternative approach to addressing the inefficiency caused by the gradual updates of source RL algorithms is to skip intermediate episodes and use only every $n$-th episode in learning histories, as explored in [1]. This technique enables $n$-times faster policy updates than the source algorithm. However, such approaches require careful tuning of the skipping frequency based on the specific algorithm and its hyperparameters. In contrast, model-based planning is largely independent of the hyperparameters of the source algorithm, making it a more robust and straightforward solution.\\n***\"}", "{\"title\": \"Author Response to Reviewer XR4p\", \"comment\": \"We deeply appreciate reviewer XR4p's constructive feedback. We hope that our response addresses all of the reviewer's concerns clearly and comprehensively.\\n***\\n\\n### Effect of Incorrect World Models\\n\\nAs the reviewer points out, inaccuracy in the world model has been a significant concern in the model-based RL literature. Indeed, this inaccuracy could diminish the effectiveness of model-based approaches, including ours.\\n\\nWe would like to clarify that we discussed the effect of world model inaccuracy and some ways to mitigate them in Sec. 6.2, where we conducted an ablation study on the relation between context lengths and world model accuracy, noting: \\u201cGiven that the effectiveness of model-based planning heavily depends on the dynamics model\\u2019s bias [1, 2], our framework benefits from longer context lengths.\\u201d The ablation results show that longer context lengths, combined with sequence models with sufficient representational power, can be a great recipe for improving world model accuracy, which in turn enhances performance.\\n\\nMoreover, even in the scenarios where the in-context learned world models are not sufficiently accurate, our approach can maintain competitive performance by adopting the same action selection mechanism as model-free counterparts, without relying on the learned world models, as we empirically demonstrated in Sec. 6.1. As a meaningful future direction, model-based planning could be further improved by adaptively leveraging the world models based on their quantified accuracy at each decision-making step, which could also alleviate the reviewer\\u2019s concern.\\n\\n[1] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Neural Information Processing Systems, 2019.\\n\\n[2] Takuya Hiraoka, Takahisa Imagawa, Voot Tangkaratt, Takayuki Osa, Takashi Onishi, and Yoshi-masa Tsuruoka. Meta-model-based meta-policy optimization. In Asian Conference on Machine Learning, 2021.\\n***\\n\\n### Quantification of Sub-optimality\\n\\nAs the reviewer suggested, proper quantification and comparative analysis are indeed crucial. 
We would like to clarify that in our framework, an \\u201calgorithm\\u201d is a meta-level concept that trains a policy rather than being a policy itself. As such, measuring sub-optimality in terms of \\u201cthe number of times sub-optimal behavior is observed\\u201d may not be directly applicable to our approach. Instead, we believe the improvement of sub-optimality is more effectively quantified by assessing \\u201cthe steepness of the learning curve,\\u201d which reflects the efficiency and capability of algorithms in training policies. As shown in Fig. 2, our approach achieves faster policy training with much fewer environment interactions compared to baseline methods in most cases. This demonstrates our method's ability to reduce sub-optimality effectively.\\n***\\n\\n### Trade-off between Performance and Computation\\n\\nThe trade-off between performance and computation is important to evaluate the practicality of performance improvement. We would like to emphasize that the additional computational cost in our framework is negligible. Specifically, our method does not increase the number of training parameters compared to model-free counterparts, and the primary difference lies in the increased number of Transformer inferences per action selection.\\n\\nIn our experiments, the maximum computation per action selection is approximately 18 GFLOPs. Given that modern GPUs can process hundreds of teraFLOPs per second, this cost allows for action selection to occur thousands of times per second. Consequently, the computational expense is minimal in practice while the performance gains are substantial, making the trade-off highly favorable in our framework. We will add the related description in our paper to address this point.\"}", "{\"summary\": \"This paper extends the use of decision transformers for in-context meta-task learning to incorporate model-based planning. The main innovation here is to have the transformer output predicted state values (r, o, R) in addition to the next action, and to use this state-transition model to select better actions. This can be applied to multiple different transformer-based agents and yields improvements both in terms of sample efficiency and overall score.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"Advances the performance of in-context RL, an exciting recent direction\", \"Provides a mechanism to overcome suboptimal behaviors inherited from the source RL algorithms\", \"Achieves state of the art on Meta-World, compared with a large variety of RL and in-context RL algorithms\", \"This is an important new innovation that makes sense, and as far as I can tell (though I do not have full knowledge of the literature) is the first demonstration of incorporating model-based planning into transformer-based in-context RL.\", \"This seems like a well-done paper with a straightforward but impactful contribution.\"], \"weaknesses\": [\"Nothing major.\"], \"questions\": [\"How exactly are the actions sampled, and what is the sensitivity to the sampling approach?\", \"What do you think would happen if the imitation loss and dynamics loss trained two separate models?\", \"I would be interested in more discussion of how this method might apply to online learning. For example, how might it interact with intrinsically-rewarded exploration to improve the world model? How much does this method depend on the quality of the offline dataset? 
How effectively would this approach adapt to an environment where the dynamics change?\", \"Would MCTS perhaps have benefits relative to beam search? Is there a way to not build the whole planning tree as an initial step, or what is the advantage to doing so?\", \"Are there scenarios where having a world model might detract. For example, what happens if the world model is not accurate enough?\", \"What are possible explanations for why model-based performs worse on Pick-Out-Of-Hole?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer XR4p\", \"comment\": [\"We sincerely appreciate the reviewer\\u2019s recognition of DICP as a unique and promising idea, and we would like to summarize our perspective in this discussion.\", \"Inaccuracies in the world model are indeed a common limitation of most model-based planning methods. However, our framework stands out by being lower-bounded by the performance of model-free counterparts.\", \"Importantly, our method does not rely on having a perfect world model, which is particularly challenging in continuous dynamics settings like Meta-World. Despite this, our approach achieves state-of-the-art performance.\", \"Additionally, with its negligible computational overhead, our method remains both practical and effective across various scenarios.\", \"In response to the reviewer\\u2019s feedback, we will include further investigation in our revised manuscript and sincerely thank Reviewer XR4p for their insightful engagement and valuable suggestions.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the authors' detailed explanation. I have no further questions and I'm willing to raise my score.\"}", "{\"metareview\": \"The authors present DICP, which builds upon decision transformers to enable in-context, model-based planning/RL.\\n\\nThe paper leverages model learning to accelerate and improve in-context adaptation to new tasks in RL. The method is general-purpose and the results show strong performance.\\n\\nReviewers highlighted several weaknesses. In particular, there were questions around whether all claims in the paper were justified by experimental evidence. There were also questions around the robustness of the learned model, although the authors added new results demonstrating this. \\n\\nAll reviewers recommend acceptance, and the authors have largely addressed the major weaknesses mentioned.\", \"additional_comments_on_reviewer_discussion\": \"The most substantial point of discussion was on the robustness of learned models. While there was no conclusion about the robustness of models (and the impact of model error on performance), this is more of a \\\"nice-to-have\\\" than a critical part of the paper.\"}", "{\"title\": \"Author Response to Reviewer jXdx\", \"comment\": \"We deeply appreciate the reviewer\\u2019s supportive feedback and decision to raise the score. We hope this exchange will lead to valuable contributions to the broader community.\"}", "{\"summary\": \"This paper proposes a model-based in-context reinforcement learning method called Distillation for In-Context Planning (DICP). With a dynamics model for planning, it provides the ability to deviate from the source algorithm's behavior. 
The authors show that DICP achieves better performance on Darkroom and Meta-World benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is clearly written, allowing readers to follow the main arguments.\", \"It provides a comprehensive experiments and ablations to demonstrate the effectiveness of DICP compared with the model-free counterparts.\"], \"weaknesses\": \"I think the experimental results of this paper do not strongly support the main motivation. The authors claim that: \\u201cModel-free in-context reinforcement learning methods are trained to mimic the source algorithm, they also reproduce its suboptimal behaviors. Model-based planning offers a promising solution to this limitation by allowing the agents to simulate potential outcomes before taking action, providing an additional mechanism to deviate from the source algorithm\\u2019s behavior.\\u201d However, in the experiments section, DICP does not show significant performance advantages over its model-free counterparts. For example, as shown in Appendix B, the success rate of DICP-AD compared to AD only improves from 68% to 69%, and DICP-IDT compared to IDT only improves from 75% to 80%. Therefore, I believe model-based planning does not significantly enhance the policy beyond the source behavior.\", \"questions\": [\"Could the authors explain why improvements of DICP-AD over AD is not significant?\", \"Is it possible to evaluate on more challenging benchmarks like ML10?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your responses.\\n\\n* To clarify:\\n\\n * When \\u03f5 = 0: The world model is perfectly accurate, and DICP-AD can potentially outperform the \\\"Without Planning\\\" case by leveraging the perfect information from the model.\\n * As \\u03f5 increases: The world model becomes less reliable. While DICP-AD is designed to be robust and still provide some benefit, its performance might degrade. In some cases, relying solely on the \\\"Without Planning\\\" approach might be more efficient, especially if the world model's predictions are consistently misleading.\\nThe key takeaway is that the optimal strategy depends on the specific scenario and the reliability of the world model. DICP-AD offers a flexible approach that can adapt to varying levels of model accuracy, but it's important to consider the trade-offs between using the world model and relying on simpler strategies.\\n\\nThe problem is that the accuracy of the world model is seldom known. The proposed approach performs better than the \\\"without planning\\\" case only when the world model is 100% accurate. Since DICP performs best in almost every experiment, inaccurate world models are not evaluated thoroughly. Overall, this looks like a good direction of research the authors have explored and needs deeper investigation and experimentation.\\n\\n* Naive distillation of learning histories is not a fair comparison (the world model can be inaccurate too for your approach in the same scenario). As mentioned earlier evaluations need further analysis. Introducing DICP is a unique idea and I value that. However, it is unclear to me that inefficiency in the source algorithm is the only reason for the boost in performance. 
\\n\\nAs a result, I believe my rating should stay at its current value.\"}", "{\"summary\": \"The paper proposes a novel method called Distillation for In-Context Model-Based Planning (DICP) to improve the efficiency and effectiveness of in-context reinforcement learning. DICP leverages a learned dynamics model to predict the consequences of actions and uses this information to plan more effectively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-Transformer simultaneously learns environment dynamics and improves policy in-context\\n\\n-Avoid sub-optimal behavior of the source algorithm\", \"weaknesses\": \"-One of the flaws in model based planning is that the model might not be perfect. Errors of the world model might lead to sub-optimal policies as well which haven\\u2019t been discussed in the paper.\\n\\n-Some analysis on how many times sub-optimal behvior from source algorithm was discovered and your approach was able to learn the optimal policy would be important to ensure extra computation of model based planning is worth it.\", \"questions\": \"1) Can you comment on how much the trade-off between performance vs computation is required in your approach and other comparisons? Do the gains outweigh computational expense?\\n\\n2) What happens when the world model is incorrect? How is the performance affected? What steps are taken to ensure model-based planning can be accurate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer jXdx\", \"comment\": \"We sincerely appreciate reviewer jXdx\\u2019s constructive feedback. We hope this discussion will help bridge any gaps in understanding and further enhance the clarity of our work.\\n***\\n### Regarding Performance Gain\\n\\n**Steady Performance Gain**\\n\\nWe would like to emphasize that our method achieves **steady** performance gains across a variety of models and environments, including DICP-AD, DICP-DPT, and DICP-IDT. While the magnitude of improvement varies, our models outperform their model-free counterparts in both discrete and continuous environments. Notably, DICP-IDT, our best-performing model, achieves the **state-of-the-art** performance on the well-established Meta-World benchmarks (Table 1).\\n\\n**Significantly Fewer Environment Interactions**\\n\\nThe review primarily focuses on final success rate differences; however, we would like to highlight that **sample efficiency** is a critical consideration in RL. Our models learn faster than their model-free counterparts across both discrete (first row of Fig. 2) and continuous environments (last subfigure of Fig. 2), averaging across 50 tasks. Furthermore, Table 1 demonstrates that our approach achieves superior performance with significantly fewer environment interactions compared to extensive baselines. These results underscore the practical benefits of our method, particularly in reducing the cost of online data collection while maintaining strong final performance.\\n\\n**Agnostic to Sequence Model Choice**\\n\\nIn our result, the performance improvement of DICP-AD over AD is smaller in ML1 compared to other settings. This difference is attributed to the **accuracy of the in-context learned world model**. Specifically, as shown below, the dynamics model in DICP-AD is relatively less accurate compared to DICP-IDT. 
This analysis indicates that the small vanilla Transformers used in AD may not be ideal for capturing long input sequences, whereas IDT incorporates design choices that better suit such tasks. Since the performance of model-based planning heavily depends on the accuracy of the learned world model, weaker sequence models inherently limit the gains achieved by our framework. It is important to note, however, that our method is **agnostic to the choice of sequence model**. As a result, our approach directly benefits from the use of advanced or scaled sequence models that can more accurately capture sequential dynamics.\\n\\n| | DICP-AD | DICP-IDT |\\n|----------|----------|----------|\\n| Test Dynamics Loss | $8.9e^{-2}$ | $4.0e^{-2}$ |\\n***\\n\\n### Evaluation on More Challenging Benchmarks\\n\\nIn response to the reviewer's comment, we conduct additional experiments on the ML10 benchmark of Meta-World. The meta-test success rates below demonstrate that our approach outperforms the model-free counterpart and achieves state-of-the-art performance on this benchmark. Notably, this is achieved with significantly fewer environment interactions and without relying on expert demonstrations or task descriptions for test tasks. We will incorporate the results into the revised version of our paper.\\n\\n| Method | Success Rate | Steps |\\n|--------------------|--------------------|---------|\\n| PEARL | $13.0$ | $350M$ |\\n| MAML | $31.6$ | $350M$ |\\n| RL$^2$ | $35.8$ | $350M$ |\\n| IDT | $36.7$ | $500K$ |\\n| DICP-IDT (Ours) | $\\\\textbf{46.9}$ | $500K$ |\"}", "{\"comment\": \"Thank you, I appreciate the answers to the questions. I remain supportive of this work being accepted.\"}", "{\"title\": \"Author Response to Reviewer zBei\", \"comment\": \"We sincerely thank the reviewer for thoughtful engagement and continued support of our work. We hope this discussion translates into meaningful contributions within the community.\"}" ] }
BfUDZGqCAu
On the Linear Speedup of Personalized Federated Reinforcement Learning with Shared Representations
[ "GUOJUN XIONG", "Shufan Wang", "Daniel Jiang", "Jian Li" ]
Federated reinforcement learning (FedRL) enables multiple agents to collaboratively learn a policy without needing to share the local trajectories collected during agent-environment interactions. However, in practice, the environments faced by different agents are often heterogeneous, but since existing FedRL algorithms learn a single policy across all agents, this may lead to poor performance. In this paper, we introduce a personalized FedRL framework (PFedRL) by taking advantage of possibly shared common structure among agents in heterogeneous environments. Specifically, we develop a class of PFedRL algorithms named PFedRL-Rep that learns (1) a shared feature representation collaboratively among all agents, and (2) an agent-specific weight vector personalized to its local environment. We analyze the convergence of PFedTD-Rep, a particular instance of the framework with temporal difference (TD) learning and linear representations. To the best of our knowledge, we are the first to prove a linear convergence speedup with respect to the number of agents in the PFedRL setting. To achieve this, we show that PFedTD-Rep is an example of federated two-timescale stochastic approximation with Markovian noise. Experimental results demonstrate that PFedTD-Rep, along with an extension to the control setting based on deep Q-networks (DQN), not only improve learning in heterogeneous settings, but also provide better generalization to new environments.
[ "personalized federated reinforcement learning", "shared representations", "stochastic approximation" ]
Accept (Poster)
https://openreview.net/pdf?id=BfUDZGqCAu
https://openreview.net/forum?id=BfUDZGqCAu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vPOHfnkBCG", "qihmgEFh47", "onQNzgWChO", "ksuTi2IEcc", "i2kujctX20", "g0EEsMw3Vt", "ZATUcfNrEN", "WwCOkfGDZX", "UuASYf2YNo", "SeZtjHBohf", "Rd7ZJEObtd", "QplsSHuXbC", "QbakcAaBNr", "PzG3h427Y8", "MNgarRmlEU", "KDQ5tBpdk1", "JvWgR8Ljjq", "JPqHBQSGaA", "EtgAr3rwcV", "ENN4hM40Ym", "C5CGOQbHXe", "C52AF4mLUB", "BhdxkBe8Bk", "AmXUIjJwKs", "92S6Q8VJA4", "8gL7QTDI6d", "3TacbUlTOG", "0Z4UkMi258" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732495937846, 1732786412227, 1732496186462, 1732786589776, 1730532973806, 1732727609275, 1732728324086, 1734768953057, 1732165556954, 1732513227739, 1733207076570, 1732496134306, 1732726706832, 1730714493442, 1732164377584, 1733233640219, 1732728061131, 1732161983531, 1730838356201, 1732898936216, 1737523822539, 1732786142060, 1732165656744, 1732163840460, 1732162090183, 1732496154436, 1732162317729, 1732162177805 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_mFXK" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_mFXK" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_mFXK" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Area_Chair_JhP1" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_mFXK" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_LerK" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_LerK" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_Um15" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7190/Reviewer_mFXK" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ], [ "ICLR.cc/2025/Conference/Submission7190/Authors" ] ], "structured_content_str": [ "{\"title\": \"Revision posted\", \"comment\": \"Dear reviewers,\\n\\nThanks for the valuable feedback. We wanted to make sure you are aware that we've uploaded a revision of the paper before the end of the discussion phase. The new results are in blue and located in Appendix H:\\n\\n* **More complex environment.** We've added a more complex environment (Hopper), under the recommendation of Reviewer LerK. 
Hopper has continuous state and action spaces, so we adapted our personalized FedRL framework to DDPG, resulting in a new instantiation of the algorithm PFedDDPG, which performs well compared to baselines (See Appendix H1 in the revision). In addition, this result illustrates that the PFedRL framework can be used quite broadly (TD, DQN, DDPG).\\n\\n* **Linear speedup w.r.t. agent count N.** As suggested by Reviewer mFXK, we've conducted an experiment varying the number of agents from 2 through 10 to verify our theoretical results. Indeed, we see that the convergence time decreases nearly linearly as the number of agents increases. Please see Appendix H2.\\n\\nThanks again!\"}", "{\"title\": \"response to comment regarding Personalization quality tradeoff\", \"comment\": \"I thank the authors for this thoughtful addition that addresses personalization quality. The new worst-case metric provides valuable insight into personalization performance, and I look forward to seeing these discussions in the final version.\"}", "{\"title\": \"Additional Feedback?\", \"comment\": \"Dear reviewer,\\n\\nSince the discussion period is almost over, we would like to politely check if our response has addressed your concerns & questions. If you have additional feedback, please let us know. We've also posted a new revision of the paper with additional experiments and summarized the changes in a comment at the top of OpenReview.\\n\\nThanks again for your valuable feedback.\\n\\nAuthors\"}", "{\"comment\": \"I thank the authors for their detailed response. I have revised my evaluation and updated the scores. Good luck.\"}", "{\"summary\": \"The manuscript introduces PFEDRL, a framework for personalized federated reinforcement learning (FedRL) aimed at addressing heterogeneity across agent environments. The authors propose PFEDTD-REP, a specific instantiation of PFEDRL with temporal difference (TD) learning. Notably, they claim a linear speedup in convergence proportional to the number of agents, a desirable characteristic in large-scale federated RL systems. Experimental results in both value-based learning (CliffWalking) and control tasks (CartPole, Acrobot) validate the framework's advantages, with promising outcomes in personalization and convergence speedup.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"+The paper is practically relevant and addresses real-world heterogeneity in federated RL environments\\n\\n+The manuscript made some innovations in FedRL theories:\\n\\n- First work to prove linear speedup in personalized federated RL with shared representations under Markovian noise\\n\\n- Rigorous analysis of convergence rates using two-timescale stochastic approximation theory\", \"weaknesses\": \"while there are merits in the paper\\u2019s theoretical contributions, I am concerned with a few critical points:\\n\\n1. regarding the motivation on personalization:\\n\\na. the paper lacks a formal definition of what constitutes successful personalization. The authors should consider designing metrics to quantify personalization quality\\n\\nb. thus, no theoretical guarantees that learned personalization (via agent-specific parameters in the paper) captures meaningful environment-specific adaptations. What happens to personalization quality when environments are very different from each other?\\n\\nc. how does agent count N affect personalization? if we add more agents by increasing N, do we have better personalization or worse?\\n\\nd. 
is there a tradeoff between personalization and global performance or the speedup?\\n\\n2. regarding the problem formulation and approach:\\n\\na. does sharing this common representation breach privacy preservation?\\n\\nb. In section 2.2, why does the transition from (1) to (2) preserve the problem properties? \\n\\n3. regarding theoretical rigor\\n\\na. corollary 4.15 claims that linear speedup w.r.t agent count N is justified by \\u201cwe can proportionally decrease T as N increases while keeping the same convergence rate\\u201d. However, this is not a precise claim as linear speedup generally implies that adding more agents N directly enhances the convergence rate, without requiring adjustments to T. Here, the convergence rate remains fixed only by reducing T, which does not reflect true linear acceleration in convergence. The paper could be misleading readers into believing that the convergence rate inherently improves with more agents, rather than simply adjusting the number of communication rounds to balance computational costs\\n\\nb. related to 3.a) above, if the authors claims linear speedup w.r.t agent count N, they should provide comprehensive experimental validation showing how convergence behavior scales with varying numbers of agents. Notable prior work making similar speedup claims, such as Fan et al. [1] which is one of the earliest FedRL works missing from the related work, included thorough ablation studies demonstrating the impact of agent count N on convergence. The absence of such analysis is particularly concerning given the centrality of the linear speedup claim to the paper's contributions.\\n\\nc. otherwise, how to determine the optimal number of agents? If we add more agents, will we get faster convergence? what about personalization? intuitively, more agents should increase the personalization complexity. \\n\\n4. regarding experimental evaluation.\\n\\na. Limited diversity in test environments (only classic control tasks) and no statistical significance is assessed \\n\\nb. how to empirically verify the personalization achieved?\\n\\nc. ablation on agent count N should be conducted.\\n\\n---\\n[1] Fan, X., Ma, Y., Dai, Z., Jing, W., Tan, C., & Low, B. K. H. (2021). Fault-tolerant federated reinforcement learning with theoretical guarantee.\\u00a0*Advances in Neural Information Processing Systems*,\\u00a0*34*, 1007-1021.\", \"questions\": \"1. how does agent count N affects personalization? if we add more agents by increasing N, do we have better personlization or worse?\\n2. is there a tradeoff between personalization and global performance or the speedup?\\n3. how to determine the optimal number of agents? If we add more agents, will we get faster convergence? what about personalization? intuitively, more agents should increase the personalization complexity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Ablation on Agent Count\", \"comment\": [\"Thank you for the comment.\", \"**Agent count vs convergence time.** In **Appendix H2**, we have added an experiment to validate the linear speedup as in our theorems. Note that because there is the added dimension of environment heterogeneity, we designed a special experiment that holds environment heterogeneity constant as we increase the number of agents. This is achieved by duplicating a set of 2 environments 2, 3, 4, and 5 times, thereby obtaining situations with N=2, 4, 6, 8, 10. 
Please see Appendix H2 for the detailed plot and discussion. We were able to verify a nearly linear speedup (but note that in practice, there is certain overhead that prevents the speedup to be as efficient as predicted by t heory). We would be happy to move this to the main text if the reviewer prefers.\", \"**Agent count vs personalization.** As discussed above, in **Appendix H3**, we present a new experiment to explore the tradeoff between agent count and personalization. We now report the \\u201cworst case personalization error among N agents\\u201d to properly measure how personalization quality may degrade as we increase the number of agents.\", \"**Environment heterogeneity/discrepancy vs personalization.** In **Appendix H4**, we examine how environment discrepancy affects personalization quality, measured in terms of worst case personalization error among N agents. We fix the number of agents to be 10. While keeping the average pole length fixed, we adjust the discrepancy in pole length between environments, which allows us to see the effect of environment heterogeneity on our results. We observe that while all algorithms show degradation in personalization quality as environment heterogeneity increases (as expected), our approach degrades the smallest amount.\"]}", "{\"title\": \"Another revision posted\", \"comment\": [\"Dear reviewers,\", \"Given our discussion with Reviewer mFXK, we have posted another revision of the paper. Please refer to the latest revision because we have formatted the new results to be easier to read (on separate pages and different sections). Our revision includes several new experiment results:\", \"A more complex environment (Hopper) in **Appendix H1**\", \"Verifying the linear speedup theoretical result in **Appendix H2**\", \"Examining the tradeoff between computation and personalization quality in **Appendix H3**\", \"Examining the effect of environment discrepancy on personalization error in **Appendix H4**\", \"As always, thanks for your comments and please follow up if we can answer or clarify additional points as the discussion period winds down soon.\"]}", "{\"metareview\": \"This paper studies personalized federated reinforcement learning (FedRL) with shared representations, presenting theoretical convergence results and demonstrating linear speedup in the proposed framework, PFedRL-Rep, under Markovian noise. The results are supported by rigorous theoretical analysis and experimental validation, including performance improvements in heterogeneous environments. The reviewers highlighted the novelty and soundness of the theoretical contributions, particularly in establishing linear speedup and addressing personalization in FedRL. During the rebuttal phase, the authors effectively addressed concerns raised by the reviewers, providing additional experimental ablations and clarified theoretical guarantees. Overall, the reviewers reached a positive consensus, and the paper makes a substantial theoretical contribution to the field, making it acceptable for publication.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised several points regarding theoretical clarity, experimental validation, and personalization metrics. Reviewer Um15 sought better explanation of how shared structure affects learning and comparisons with related methods, which the authors addressed by clarifying theoretical contributions and including new comparisons in the appendix. 
Reviewer LerK suggested expanding experimental evaluations to more complex environments, which was addressed by adding results on the Hopper environment and discussing generalization to heterogeneous settings. Reviewer mFXK requested a formal definition of personalization, clarification on the linear speedup claim, and additional ablation studies on agent count and personalization quality. The authors added a \\\"worst-case personalization error\\\" metric, detailed experiments validating speedup, and an analysis of tradeoffs between agent count and personalization. Each concern was thoroughly addressed, leading to increased reviewer confidence.\"}", "{\"title\": \"Official Response by Authors (3/4)\", \"comment\": \"**Weakness \\\\#3a: regarding theoretical rigor\\na. corollary 4.15 ... linear speedup ...**\\n\\n**Response**: We believe there is a misunderstanding here and would like to respectfully clarify.\\nFirst, let us clarify how speedup is computed in general in the literature. Consider an arbitrary algorithm with convergence rate $\\\\mathcal{O}(1/\\\\sqrt{T})$. To attain $\\\\epsilon$ accuracy for an algorithm, it needs to take $\\\\mathcal{O}(1/\\\\epsilon^2)$ steps. Now consider another algorithm with rate $\\\\mathcal{O}(1/\\\\sqrt{NT})$ (the hidden constant in Big-O is the same), it needs $\\\\mathcal{O}(1/(N\\\\epsilon^2))$ steps to attain $\\\\epsilon$ accuracy. The factor of $N$ is the linear speedup.\\n\\nNow, let us get back to Corollary 4.15,\\nif we let $M\\\\leq \\\\epsilon$, we have the sample complexity of $\\\\mathcal{O}(N^{-1}\\\\epsilon^{-3/2})$, which is $N$ times faster than complexity $\\\\mathcal{O}(\\\\epsilon^{-3/2})$ with one client. This phenomenon is exactly the linear speedup!\\n\\n\\n**Weakness \\\\#3b: b. related to 3.a) above, if the authors claims linear speedup w.r.t agent count N, they should provide comprehensive experimental validation showing how convergence behavior scales with varying numbers of agents. Notable prior work making similar speedup claims, such as Fan et al. [1] which is one of the earliest FedRL works missing from the related work, included thorough ablation studies demonstrating the impact of agent count N on convergence. The absence of such analysis is particularly concerning given the centrality of the linear speedup claim to the paper's contributions.** \\n\\n**Response:** Thank you for your suggestion. We added additional experimental results to support the provably linear speedup results. We vary the number of agents from 2 to 10 using a specialized experimental setup. Please see the new results in Appendix H2, where we observe that the speedup (convergence time) is almost linearly increasing (decreasing)\\nas the number of clients increases. \\n\\nWe thank the reviewer's reminder for bringing [1] to our attention. We have included [1] in our paper and discussed [1] in Section A in the Appendix. \\n\\n\\n**Weakness \\\\#3c: \\nc. otherwise, how to determine the optimal number of agents? If we add more agents, will we get faster convergence? what about personalization? intuitively, more agents should increase the personalization complexity.**\\n\\n**Response:** Thank you for the comment. If all agents operate in identical (or very similar) environments, adding more agents generally leads to faster convergence and improved performance due to the increased availability of collaborative learning data. 
However, in the case of heterogeneous environments, the relationship between the number of agents, convergence speed, and personalization performance becomes more complex (see our answer above to your previous question about this point). In such scenarios, there is no straightforward correlation, as the added heterogeneity can introduce challenges that may affect both the speed of convergence and the quality of personalization. The impact depends on the extent of diversity among the agents and how effectively the shared representations capture commonalities while allowing for individualized adaptations.\"}", "{\"comment\": [\"Thank the authors for the detailed revisions and comprehensive response. The revised manuscript is indeed clearer in presenting its theoretical contributions, especially the formal definition of successful personalization and the theoretical guarantees regarding environment-specific adaptations. These additions significantly enhance the paper\\u2019s clarity and scholarly contribution.\", \"However, I have several remaining concerns:\", \"1. Ablation on Agent Count $N$: The analysis regarding how the number of agents $N$ affects both personalization quality and convergence speed is still limited. While the theoretical results highlight the linear speedup claim, comprehensive experimental ablations are missing in the main text. It would be highly beneficial for the paper to include:\", \"A detailed empirical evaluation of how increasing $N$ impacts personalization complexity and convergence speed.\", \"Specific studies in heterogeneous environments where $N$ introduces varying levels of agent-environment discrepancy.\", \"Incorporating such experiments in the main body (maybe in future work) would provide stronger empirical validation and align with precedent works in the FedRL literature.\", \"2. Tradeoff Between Personalization and Global Performance/Speedup: The rebuttal suggests that \\u201cglobal performance\\u201d and \\u201cpersonalization\\u201d are treated synonymously within the objective, which may constrain the broader appeal and practical relevance of the work. However, since the convergence speed in the proposed framework is tied to the averaging of parameters via the central server, global performance cannot be entirely equivalent to personalization. In practice, an increased focus on personalization is likely to impact the global model's effectiveness or convergence properties. For instance, prioritizing personalized adaptations for highly heterogeneous environments might introduce conflicts with the shared representation\\u2019s utility for all agents.\", \"To address this, the manuscript would benefit from a more nuanced discussion of the potential tradeoffs between personalization and global performance, especially in the presence of high environment heterogeneity\", \"3. Clarity on Linear Speedup Claim: While Corollary 4.15 asserts linear speedup, the rebuttal clarifies that this is achieved by proportionally reducing $T$ (communication rounds). This is different from a conventional linear speedup (as in the FedRL papers) where adding agents inherently improves the convergence rate without requiring adjustments to $T$.\", \"In conventional FedRL papers (such as those you referenced), linear speedup means that increasing N (number of agents) directly improves the convergence rate. 
This improvement comes \\\"for free\\\" - you don't need to adjust other parameters\", \"If you double $N$, you roughly halve the time to convergence, all else being equal\", \"What's actually happening in Corollary 4.15 is putting a constraint on $T$ w.r.t $N$,\", \"which means you can't freely increase $N$ without also adjusting $T$\", \"The \\\"speedup\\\" isn't purely from parallelization, as claimed\", \"Please correct me if I made further misunderstanding. Otherwise, this is a significant oversight in the interpretation of the results. While the mathematical bounds themselves may be correct, the interpretation as a \\\"linear speedup\\\" is potentially misleading as it suggests a simpler and more favourable scaling than what's actually achieved. Alternatively, providing additional context and explicit comparisons with prior FedRL works would help avoid potential misunderstandings.\", \"These points, while critical, are primarily focused on improving the completeness and robustness of the paper. I appreciate the authors' efforts in addressing my earlier comments, and I am inclined to revise my scores upon further discussion with the other reviewers, especially if the authors plan to incorporate the suggested experiments and analyses into the final version.\"]}", "{\"comment\": \"Dear authors,\\n\\nThank you for your detailed explanation and efforts in the new experiment. My concerns are addressed, and I have raised my score to 8. Good luck!\"}", "{\"title\": \"Additional Feedback?\", \"comment\": \"Dear reviewer,\\n\\nSince the discussion period is almost over, we would to politely check if our response has addressed your concerns & questions. If you have additional feedback, please let us know. We've also posted a new revision of the paper with additional experiments and summarized the changes in a comment at the top of OpenReview.\\n\\nThanks again for your valuable feedback.\\n\\nAuthors\"}", "{\"title\": \"Personalization quality tradeoff\", \"comment\": \"Thank you for this comment! We've thought through this carefully and believe that you make a great point. We've now defined a new metric to capture personalization quality, the \\\"worst case personalization error among N agents.\\\" In other words, rather than averaging over N agents' performance, we also examine the maximum error over the N agents:\\n\\n$$\\\\max_{i \\\\in [N]} \\\\mathbb{E}_{s\\\\sim \\\\mu^{i,\\\\pi^i}}\\\\left\\\\|f^i(\\\\pmb{\\\\theta}^i, \\\\pmb{\\\\Phi}(s))-V^{i,\\\\pi^i}(s)\\\\right\\\\|^2.$$\\n\\nThis would be a measure of personalization quality. The intuition behind this metric is that if all agents achieve good estimation error, then this metric is small (meaning we have personalized well), but if some agents perform poorly while others perform well, then this metric will be large (detecting that we did not personalize well).\\n\\nIn **Appendix H3**, we have added a new experiment to understand the tradeoff between computation time and personalization quality. The important takeaways are reproduced below, but please refer to the revision for the figure and more details:\\n* We notice that for naive DQN, we can achieve no personalization error at the cost of high computation. \\n* At the other end of the spectrum, FedDQN leverages parallelization and reduces the computation, but has high personalization error. 
\\n* Finally, our algorithm, PFedDQN-Rep achieves the best of both worlds: low computation, while attaining low personalization error.\\n\\nWe believe this is a great addition to the paper---thank you for your comments which made us realize this fact.\"}", "{\"summary\": \"This paper introduces a personalized federated reinforcement learning (FedRL) framework, PFEDRL-REP, that incorporates shared representations to improve learning in heterogeneous environments. PFEDRL-REP collaboratively learns a shared feature representation among agents while maintaining agent-specific weight vectors for personalization. The authors analyze PFEDTD-REP, a variant using temporal difference learning, and prove it achieves a linear convergence speedup in terms of the number of agents, demonstrating scalability benefits. Experiments in both policy evaluation and control settings show that PFEDTD-REP enhances convergence and generalization in heterogeneous environments compared to non-personalized FedRL methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. PFEDRL-REP is an innovative approach to FedRL, addressing a major challenge in heterogeneous environments by introducing a shared representation while allowing for agent-level personalization.\\n2. The paper provides a rigorous theoretical foundation, including proofs of convergence speedup under Markovian noise using a two-timescale stochastic approximation framework. \\n3. The paper is well-structured and clear, with detailed explanations of the problem formulation, the PFEDRL-REP framework, and the two-timescale convergence analysis.\", \"weaknesses\": \"1. The experimental evaluation could be extended to include more complex environments, such as those with sparse rewards or high-dimensional state spaces, to better assess the scalability of PFEDRL-REP.\\n2. The applicability of PFEDRL-REP to all types of environmental heterogeneity is not fully guaranteed, as the combination of shared feature representations and personalized weight vectors may not capture all nuances of diverse environments.\", \"questions\": \"1. Is PFEDRL-REP universally applicable across diverse heterogeneous federated RL problems? What if a shared feature representation could not represent the similarity between different environments and the heterogeneity could not be distinguished by the agent-specific weight vector?\\n2. Does PFEDDQN-REP maintain strong performance on more complex RL tasks, such as those with sparse rewards or high-dimensional state spaces? Additional experimental results on more challenging environments would provide valuable insights.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response by Authors (2/4)\", \"comment\": \"**Weakness \\\\#1c: ... count N affects personalization ..?**\\n\\n**Response:** This is an insightful question. We don\\u2019t believe there is a clear-cut answer to this, but we definitely believe this is an interesting discussion to include in the paper. 
First, we note again the definition of personalization:\\n\\\\begin{align}\\\\nonumber \\n\\\\min_{\\\\pmb{\\\\Phi}} \\\\frac{1}{N}\\\\sum_{i=1}^N \\\\min_{\\\\pmb{\\\\theta}^i} \\\\mathbb{E}_{s\\\\sim \\\\mu^{i,\\\\pi^i}}\\\\left\\\\|f^i(\\\\pmb{\\\\theta}^i, \\\\pmb{\\\\Phi}(s))-V^{i,\\\\pi^i}(s)\\\\right\\\\|^2.\\n\\\\end{align}\\nCritically, we point out that there is another dimension to this question: environment heterogeneity. In the equation above, this is represented by how different the $V^{i,\\\\pi^i}$'s are across agents. It could go both ways: for a fixed iteration $T$, if we add more agents and the environments are similar, then this presents an opportunity for better learning of the shared feature; but if we add more agents with dissimilar environments, then personalization could be harder.\\n\\nThe natural follow-up question now is how does environment heterogeneity show up in our analysis? We did not clearly emphasize it in the paper, but we refer the reviewer to Assumption 4.3 and Definition 4.5. In Assumption 4.3, higher environment heterogeneity shows up in a larger constant $C$. In Definition 4.5, we define the mixing time $\\\\tau_\\\\delta$ of the system. Note here that we take a maximum over a sequence of environment-specific discrepancy terms, so $\\\\tau_\\\\delta$ also increases as the level of environment heterogeneity increases.\\n\\nTo circle back to your original question, if the level of environment heterogeneity is fixed (i.e., the terms in our analysis remain the same), then as we add more agents $N$, we are able to *improve* personalization.\\n\\nThank you for this great question. We plan to add this discussion into the paper's appendix per the reviewer's approval. \\n\\n**Weakness \\\\#1d: is there a tradeoff ...?**\\n\\n**Response:** Thanks for your question. First let us clarify that in our paper, ''global performance'' and ''personalization'' are synonymous in that our main objective, Equation (2), accounts for both. This is because the objective is defined as a global average over each agent's personalized performance. Second, the ''speedup'' refers to the speed at which we can learn optimal solutions to the main objective. Therefore, from this perspective, \\nwe should not claim that there is a tradeoff between ``personalization'' and speedup because they are related quantities.\\n\\nIn our opinion, the correct question to ask is whether there is a tradeoff between ``environment heterogeneity'' and speedup (this is somewhat related to the previous question). Since higher heterogeneity in the environments leads to increased values of certain constants in the analysis, it is clear that there is indeed a tradeoff here. Higher environment heterogeneity leads to a worse speedup, which intuitively makes sense.\\n\\nWe hope this answer addresses your high-level concerns. If we misunderstood your question, please feel free to follow-up.\\n\\n**Weakness \\\\#2: ... formulation and approach:\\na. ... privacy preservation...?\\nb. ... (1) to (2) preserve ... properties?**\\n\\n**Response:** Thank you for your comment. \\n\\nOur answer is no! Similar as the communication paradigm in standard FedRL (Khodadadian et al., 2022; Dal Fabbro et al., 2023; Jin et al., 2022), agents only send the gradient of the parameter to the server, not for the trajectories or any other information. 
Moreover, we divided the entire model into two parts, and agents only share one representation part $\\\\pmb{\\\\Phi}$ while keeping local personalized head $\\\\pmb{\\\\theta}$ in private. This is even better than existing work in terms of privacy-preservation.\\n\\nWe did not completely understand what the reviewer means by \\\"problem properties\\\". However, let us clarify that Eq. (1) and Eq. (2) can be thought of as loss functions to our algorithm and therefore don't depend on particular assumptions of the underlying problem. We suspect that the reviewer may be wondering if we make assumptions about the underlying environments, which we don't. Therefore, our algorithm applies to a wide range of settings: it can be applied in settings where the environments are completely dissimilar (the algorithm will resort to strong personalization) or to environments with shared structure (the algorithm will automatically discover and exploit this structure through the shared global feature learning). Please also see our response to **Question #1 of Reviewer Um15**, where we were asked a similar question. We hope this answer helps.\\n\\nThe only difference is that the feature $\\\\pmb{\\\\Phi}$ is known and the value function is approximated linearly by $\\\\pmb{\\\\Phi}(s)\\\\pmb{\\\\theta}$ in the conventional formulation in Eq. (1); while $\\\\pmb{\\\\Phi}$ is assumed to be unknown and the value function is approximated by a general function $f(\\\\pmb{\\\\Phi}(s),\\\\pmb{\\\\theta})$. \\n\\nIf we misunderstood the \\\"problem property\\\", please let us know.\"}", "{\"title\": \"Thank you!\", \"comment\": \"We are happy to hear that we have adequately addressed all your concerns. Thank you for your acknowledgement and raising the rating of our paper. Much appreciated!\"}", "{\"title\": \"Clarity on the linear speedup claim\", \"comment\": \"Thank you for following up! Let us give some more clarity. Our computation of linear speedup is quite straightforward: we simply compute the # of iterations it takes to reach a certain solution quality. Mathematically, this means we compute the $T(\\\\epsilon)$ (# of iterations) such that the error falls below a threshold ($\\\\epsilon$). In other words, $T(\\\\epsilon)$ is the time to convergence, where \\u201cconvergence\\u201d is defined by $\\\\epsilon$.\\n\\nWe believe that this matches the reviewer\\u2019s intuition exactly (\\u201cif you double $N$, you roughly halve the time to convergence\\u201d)! Our result says precisely this.\\n\\n(Note that we are not \\u201cconstraining\\u201d $T$ in any sense; the result is achieved by some simple algebraic manipulations. Therefore, the speedup is indeed purely from parallelization. We will clarify the writing to avoid this potential misinterpretation.)\", \"a_few_comments\": \"1. Many papers in the FedRL literature give the same style of results:\\n * Please see Theorem 4.1 of [1] (link below), a well-known paper in FedRL. They state \\u201cWe achieve $\\\\mathbf{E}[error] \\\\le \\\\epsilon$ within $T = O(1/(N \\\\epsilon))$ iterations,\\u201d exactly the same logic as ours.\\n * Please see Theorem 3.1 of [2] (link below), another well-known paper in FedRL. They say \\u201cTheorem 3.1 suggests that to achieve an $\\\\epsilon$-accurate Q-function estimate in an $l_\\\\infty$ sense, the number of samples required at each agent is no more than $\\\\tilde{O}(|S| * |A| / (K (1-\\\\gamma)^5 \\\\epsilon^2))$.\\u201d In this paper, $K$ is the number of agents. 
This is also exactly the same logic that we use.\\n\\n2. We believe the reviewer may be referring to some papers which provide results of the form \\u201cAlgorithm error $\\\\le O(1/(NT))$\\u201d. In these papers, this is directly claimed to be \\u201clinear speedup.\\u201d As an example, please see Corollary 2.2 of [3] or Theorem 2 of [4].\\nIn these papers, they are not explicit about *why* this is a linear speedup. But if we look carefully, it is also the same logic:\\nTo reach epsilon in Algorithm error, we need $1/(NT) \\\\le \\\\epsilon$, which implies that we need $T \\\\ge 1/(N \\\\epsilon)$ to reach convergence. This is the same logic as us and also papers [1] and [2]!\\n\\n3. The natural next question is \\u201cwhy do our results not look as straightforward as papers [3] and [4]?\\u201d This is because our setting is more difficult (Markovian noise + heterogeneous environments), so our result (Corollary 4.15) has more complex terms ($N^{2/3}$ and $T^{2/3}$) than papers [3] and [4]. But once we do the algebraic manipulation, it turns out we achieve the same linear speedup in terms of time to convergence!\\n\\n4. We plan to clarify all of the above in the final version of the paper. Indeed, it deserves some more explanation. Thanks to the reviewer for pointing it out!\", \"references\": \"[1] https://proceedings.mlr.press/v162/khodadadian22a/khodadadian22a.pdf\\n\\n[2] https://proceedings.mlr.press/v202/woo23a/woo23a.pdf\\n\\n[3] https://arxiv.org/pdf/2401.15273\\n\\n[4] https://arxiv.org/pdf/2302.02212\"}", "{\"title\": \"Official Response by Authors (1/2)\", \"comment\": \"Thank you very much for your review and constructive comments, as well as giving the positive rating of our work. Here we would like to address the reviewer's concerns and hope that can help raise the rating of our paper.\\n\\n**Weakness \\\\#1: If I understand correctly, the linear speed up is not a particular exciting result, since sample collection in unit time grows linearly with N.**\\n\\n**Response:** We appreciate the reviewer's question. We would like to respectfully clarify our perspective on this point. The reviewer is correct that linear speedup corresponds to the linear growth in sample collection with $N$. However, it is worth noting the practical advantage that sample collection is now parallelized across $N$. Intuitively, the linear speedup result says that we can parallelize without significant loss in solution quality, and hence is highly desirable since one can efficiently leverage the massive parallelism in large-scale decentralized systems.\\n\\nPerhaps more importantly, achieving linear convergence speedup is widely recognized as a significant technical contribution in the field of federated learning (FL), both in conventional FL (in supervised learning settings) and in the more recent and increasingly studied domain of federated reinforcement learning (FedRL). This is highlighted by numerous existing works (e.g., some listed in Table 1). To maintain consistency with this line of research and to provide a fair comparison with these well-established baselines, we also focus on demonstrating linear convergence speedup in our newly proposed personalized federated reinforcement learning (PFedRL) framework. 
We hope that this clarification helps.\\n\\nSecondly, we would like to point out that proving linear speedup in the PFedRL setting is more theoretically challenging than the conventional FL case, requiring the\\nhandling of Markovian dynamics, which ensures convergence in the presence of non-stationary data generated from agents' interactions with their (heterogeneous) environments. In addition, the call for personalization in PFedRL introduces an additional layer of complexity. Although Jin et al. 2022 proposed a heuristic personalized FedRL algorithm, its theoretical performance guarantee remains unknown. Our theoretical contribution in PFedRL navigates these unique challenges by establishing convergence guarantees in settings characterized by Markovian noise and the need for personalization.\"}", "{\"summary\": \"This work propose a Personalized FedRL approach (similar to PFL but with RL) allowing local/per-agent learnable parameters for use in heterogeneous FedRL settings with convergence results. The authors find a linear relationship between the number of agent and the convergence timestep.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The theoretical analysis have a proof sketch and the assumptions are clearly stated.\", \"I think the two timescale approximation result is novel\", \"Interesting results from cliff-walking and cartpole\"], \"weaknesses\": [\"Although the setup hold promise, evaluation is rather on the simple side. I consider it understandable for now since the main focus in on theory.\", \"If I understand correctly, the linear speed up is not a particular exciting result, since sample collection in unit time grows linearly with N.\", \"There is no comparison with other PFL methods, or parameter-sharing MARL methods.\"], \"questions\": \"Is the problem definition of PFedRL includes \\\"shared common structure\\\", how does it affects learning? or anything else in the theory? In theory they can be completely different problem without any shared structure and the results still stand?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you!\", \"comment\": \"We are happy to hear that we have adequately addressed all your concerns. Thank you for your acknowledgement and raising the rating of our paper. Much appreciated!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I appreciate the detailed explanation and this clarification significantly changes my previous assessment. My previous critique was off-base because I misinterpreted $T^2 > N$ as a limiting constraint rather than a condition in the analysis.\"}", "{\"title\": \"Official Response by Authors (4/4)\", \"comment\": \"**Weakness \\\\#4: regarding experimental evaluation.\\na. Limited diversity in test environments (only classic control tasks) and no statistical significance is assessed\\nb. how to empirically verify the personalization achieved?\\nc. ablation on agent count N should be conducted.**\\n\\n**Response:** Thank you for your suggestions. \\n\\nFirst, we report the statistical significances in Tables 3 and 4 in Appendix H6. Specifically, we report the return average, variance average, return median and total running time for 10 environments for Cartpole and Acrobot environments. By comparing with DQN algorithm without personalization, we can validate the linear speedup in running time. 
Among all algorithms, our PFedDQN-Rep achieves the best return average and median, with top variance and running time, as summarized in Tables 3 and 4. We also provide a zoom-in shortened plot for both environments to show the quick adaptation speed when sharing representations as in Figure 16. All results and discussions are highlighted in blue.\\n\\n\\nSecond, to validate that the personalization is reached, we first compute the cosine similarity matrix of transition probabilities in both environments. After the algorithm converges, we compute the cosine similarity matrix of policy layer (last layer) in the neuron network in Appendix H5. We notice that while the nearby agents might share similarity in their policy, personalization is reached corresponds to their transition probabilities. The shared representation layer stays identical. All results and discussions are highlighted in blue.\"}", "{\"title\": \"Official Response by Authors (1/4)\", \"comment\": \"Thank you very much for your review and constructive comments. Here we would like to address the reviewer's concerns and hope that can help raise the rating of our paper.\\n\\n**Weakness \\\\#1a: Regarding the motivation on personalization:\\na. the paper lacks formal definition of what constitutes successful personalization. The authors should consider to design metrics to quantify personalization quality.**\\n\\n**Response:** Thank you for your comments. However, we believe there is a misunderstanding here; let us respectfully clarify. Our definition of personalization is given in Equation (2): \\n\\\\begin{align}\\\\nonumber \\n\\\\min_{\\\\pmb{\\\\Phi}} \\\\frac{1}{N}\\\\sum_{i=1}^N \\\\min_{\\\\pmb{\\\\theta}^i} \\\\mathbb{E}_{s\\\\sim \\\\mu^{i,\\\\pi^i}}\\\\left\\\\|f^i(\\\\pmb{\\\\theta}^i, \\\\pmb{\\\\Phi}(s))-V^{i,\\\\pi^i}(s)\\\\right\\\\|^2.\\n\\\\end{align}\\n\\nNote here that the \\u201cmin\\u201d over $\\\\pmb{\\\\theta}^i$ is inside of the \\u201csum\\u201d operator, which means that for each agent i, we are computing the error of its personalized estimate $f^i(\\\\pmb{\\\\theta}^i, \\\\pmb{\\\\Phi})$ to the target value $V^{i, \\\\pi^i}$. However, also note that the \\u201cmin\\u201d over $\\\\pmb{\\\\Phi}$ (the shared feature representation) is outside of the \\u201csum\\u201d, representing the fact that $\\\\pmb{\\\\Phi}$ should be a \\u201cgood\\u201d feature representation for all agents.\\nWe can contrast this with the non-personalized formulation in Equation (1):\\n\\\\begin{align}\\\\nonumber\\n \\\\min_{\\\\pmb{\\\\theta}}\\\\frac{1}{N}\\\\sum_{i=1}^N\\\\mathbb{E}_{s\\\\sim \\\\mu^{i, \\\\pi}} \\\\left\\\\|\\\\pmb{\\\\Phi}(s)\\\\,\\\\pmb{\\\\theta}-V^{i,\\\\pi}(s)\\\\right\\\\|^2,\\n \\\\end{align}\\nwhere the \\u201cmin\\u201d over $\\\\theta$ is taken outside of the \\u201csum\\u201d, meaning that this $\\\\theta$ is not personalized.\\nTherefore, \\u201csuccessful personalization\\u201d can be considered to be achieved when the expected value under a personalized estimate is better than a non-personalized estimate. \\n\\n\\nIn Fig 2 (a), we show this via a numerical example. Here, blue can be considered the ground truth value estimated by TD independently in each environment, while the orange bars represent the non-personalized estimates and green bars represent the personalized estimates. 
Note that the difference |personalized (green) - ground truth (blue)| is smaller than |non-personalized (orange) - ground truth (blue)|, indicating that personalization was successful.\\n\\nThank you for pointing this out, we believe we could have been more clear about these definitions in the paper.\\n\\n\\n**Weakness \\\\#1b: thus, no theoretical guarantees that learned personalization (via agent-specific parameters in the paper) captures meaningful environment-specific adaptations. What happens to personalization quality when environments are very different from each other?**\\n\\n**Response:** We believe this is a misunderstanding. Let us clarify a bit. Our definition of personalization does indeed capture environment-specific adaptations. We refer the reviewer to Equation (2), where the environment-specific value is given by $V^{i, \\\\pi^i}$ (where $i$ denotes the specific environment). Therefore, an agent\\u2019s ability to adapt to $V^{i, \\\\pi^i}$ via the agent-specific $\\\\pmb{\\\\theta}^i$ is precisely what we aim to measure. Our theoretical results directly build upon this definition, so we would argue that our theory is capturing a meaningful quantity. Please let us know if you have follow-ups and we would be happy to discuss more.\\n\\nCertainly, when environments are very different from each other, we expect that it will be harder to learn a good estimate of the environment value functions (since it will be harder to learn a useful feature representation $\\\\pmb{\\\\Phi}$). However, if this happens, it will be precisely measured by the quantity defined in Equation (2): in those situations, we expect to see an increase in the error defined by Equation (2).\\n\\nAt the same time, we would like to point out that when environments are very different, our personalized framework (Equation (2)) will still perform better than the non-personalized framework (Equation (1)) since it allows for agent-specific parameters.\"}", "{\"title\": \"Official Response by Authors (2/2)\", \"comment\": \"**Weakness \\\\#2: There is no comparison with other PFL methods, or parameter-sharing MARL methods.**\\n\\n**Response:** First, as highlighted in Table 1, the most recent works in federated reinforcement learning (FedRL) with some theoretical performance guarantees do not focus on personalization within the FedRL context. In our evaluations, in particular the applications to control problem (Section 5), we did include comparisons with two heuristic personalized federated reinforcement learning methods, PerDQNAvg (Jin et al., 2022) and FedAsynQ-ImAvg (Woo et al., 2023), which incorporate elements of personalization in their designs. Our experiments demonstrate the superior performance of our PFedRL-Rep framework, particularly in its ability to provide personalized learning while leveraging shared representations for heterogeneous environments. In addition, this paper considers the personalized federated RL (PFedRL) setting, rather than the personalized federated learning (PFL) setting (a supervised learning framework), and hence we did not compare with the large body of works in PFL. We discussed the related works in PFL in Appendix A. \\n\\nSecond, FedRL and MARL represent fundamentally different frameworks with distinct objectives and structures. FedRL focuses on collaborative learning across decentralized agents without sharing local trajectories, prioritizing privacy and decentralized data aggregation. 
In contrast, MARL typically involves agents interacting within a shared environment, emphasizing coordination and competition between agents. Given these divergent goals and settings, a direct comparison between FedRL and MARL methods would be inherently unfair and not reflective of their respective aims or challenges, and hence were not considered in the experiments.\\n\\n**Question \\\\#1: Is the problem definition of PFedRL includes \\\"shared common structure\\\", how does it affects learning? or anything else in the theory? In theory they can be completely different problem without any shared structure and the results still stand?**\\n\\n\\n**Response:** Thank you for this insightful question.\\n\\nNo, we don't make any explicit assumption that there's shared common structure between the environments, which makes our algorithm applicable in a range of settings. If there is shared common structure, then our learning algorithm can automatically discover it and exploit it. If there is absolutely no relationship between the environments, then our learning algorithm will resort to more personalization. In either case, we expect that our algorithm will work.\\nIn particular, the similarity level of environments will affect the mixing time (see Assumption 4.3, Definition 4.5, and Lemma 4.12) of the entire system, which has a significant impact on the convergence speed.\\n\\nIf we misunderstood this question, please let us know.\"}", "{\"title\": \"Additional Feedback?\", \"comment\": \"Dear reviewer,\\n\\nSince the discussion period is almost over, we would to politely check if our response has addressed your concerns & questions. If you have additional feedback, please let us know. We've also posted a new revision of the paper with additional experiments and summarized the changes in a comment at the top of OpenReview.\\n\\nThanks again for your valuable feedback.\\n\\nAuthors\"}", "{\"title\": \"Official Response by Authors (2/2)\", \"comment\": \"**Weakness \\\\#2: The applicability of PFEDRL-REP to all types of environmental heterogeneity is not fully guaranteed, as the combination of shared feature representations and personalized weight vectors may not capture all nuances of diverse environments.**\\n\\n\\n\\n\\n**Response:** Thank you for your thoughtful observation regarding the applicability of PFedRL-Rep to diverse forms of environmental heterogeneity. It is important to clarify that the motivation of this work is not to address scenarios where each agent's environment is significantly different but rather to target settings where agents share a substantial amount of common structure. This focus is well-aligned with the motivations described in lines 42-50 and the illustrative motivating examples provided in Figure 6 of our paper. Also note that this is also the motivation of the study of federated reinforcement learning (FedRL) as in many existing works in this area (e.g., some are summarized in Table 1 and discussed in Appendix A). The agents can face heterogeneous environments in FedRL but should benefit the learning from collaborations in FedRL. 
\\nWe can take the view that if the agent is unaware of the level of heterogeneity in the environment, the algorithm will automatically adjust itself: if there's dramatic differences, the algorithm will focus on personalization (reducing to the independent case); if the environments are similar, the algorithm will automatically learn a useful feature.\\nThe critical point is that we don't make any specific assumption on a shared structure in the environments.\\n\\n\\n\\n\\nFrom a theoretical perspective, some related works have discussed the necessity of a bounded divergence assumption for environments, both for FedRL setting (Jin et al., 2022) and the widely studied federated learning (a supervised learning framework) settings (Collins et al., 2022, Xiong et al, 2024). In our framework, we assume a bounded mixing time (Assumption 4.3, Lemma 4.12), which inherently limits the extent of heterogeneity among the agents' environments. This ensures that the environments are not arbitrarily different, allowing the shared feature representations and personalized weight vectors in our PFedRL-Rep framework to effectively capture both commonalities and individual nuances across agents.\"}", "{\"title\": \"Official Response by Authors (1/2)\", \"comment\": \"Thank you very much for your review and constructive comments, as well as giving the positive rating of our work. Here we would like to address the reviewer's concerns and hope that can help raise the rating of our paper.\\n\\n**Weakness \\\\#1: The experimental evaluation could be extended to include more complex environments, such as those with sparse rewards or high-dimensional state spaces, to better assess the scalability of PFEDRL-REP.**\\n\\n\\n**Response:** Thank you for your valuable feedback regarding the experimental evaluation. \\n\\nOur current experiments were designed to clearly illustrate the benefits of PFedRL-Rep in environments that highlight its ability to handle personalization and shared representations among heterogeneous agents. This includes showing improvements over baseline methods and validating theoretical findings through controlled setups. We agree that extending the experiments to more complex environments would provide deeper insights into the robustness and scalability of PFedRL-Rep. In environments with sparse rewards, for example, the ability to learn shared representations could potentially enhance exploration by pooling agent experiences. Similarly, for high-dimensional state spaces, our framework's ability to learn a common low-dimensional feature representation could mitigate the complexity and improve convergence. We are the first to present this novel framework for personalized federated reinforcement learning via leveraging shared representations, which has the potential to handle a wider range of scenarios.\\n\\n\\nPer the reviewer's suggestion, we conduct experiments on another enviroment named Hopper from gym, whose state and action space are both continuous. We vary the length of legs to be $0.02 + 0.001*i$, where $i$ is the i-th indexed agent, while keeping the same parameters such as healthy reward, forward reward and ctrl cost (l2 cost function to penalize large actions). We increase the number of agents to 20, and plot the return with respect to frames. We generate a new sampled transition to validate the generalization nature of the algorithms. In order to fit the algorithm to continuous setting, we modified the proposed algorithm to a DDPG based algorithm, similar to any DQN related benchmarks. 
For FedQ-K, LFRL and FedAsynQ-Imavg, we discretize the state and action space. Similar to the Cartpole and Acrobot environments, our proposed PFedDDPG-Rep achieves the best reward and generalizes to new environments quickly, as shown in Appendix H1.\"}" ] }
BfQNrKJMXq
MobileAgentBench: An Efficient and User-Friendly Benchmark for Mobile LLM Agents
[ "Luyuan Wang", "Yongyu Deng", "Yiwei Zha", "Guodong Mao", "Qinmin Wang", "Tianchen Min", "Wei Chen", "Shoufa Chen" ]
Large Language Model (LLM)-based mobile agents are increasingly popular due to their capability to interact directly with mobile phone Graphic User Interfaces (GUIs) and their potential to autonomously manage daily tasks. Despite their promising prospects in both academic and industrial sectors, little research has focused on benchmarking the performance of existing mobile agents, due to the inexhaustible states of apps and the vague definition of feasible action sequences. To address this challenge, we propose an efficient and user-friendly benchmark, MobileAgentBench, designed to alleviate the burden of extensive manual testing. We initially define 100 tasks across 10 open-source apps, categorized by multiple levels of difficulty. Subsequently, we evaluate several existing mobile agents, including AppAgent and MobileAgent, to thoroughly and systematically compare their performance. All materials will be accessible on our project webpage, contributing to the advancement of both academic and industrial fields.
[ "LLM", "Agent", "Benchmark" ]
https://openreview.net/pdf?id=BfQNrKJMXq
https://openreview.net/forum?id=BfQNrKJMXq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ubyizIv7Me", "j0wnUrysFw", "dYTHbNav58", "Yrc7Agfx9Q", "HMyOPwHXSl" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730644859783, 1730782534098, 1731132901454, 1732497416622, 1730941189882 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12213/Reviewer_NJf2" ], [ "ICLR.cc/2025/Conference/Submission12213/Reviewer_YVMa" ], [ "ICLR.cc/2025/Conference/Submission12213/Reviewer_boVc" ], [ "ICLR.cc/2025/Conference/Submission12213/Authors" ], [ "ICLR.cc/2025/Conference/Submission12213/Reviewer_VQ1A" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents MobileAgentBench, a benchmark for mobile LLM agents within the Android system, with a fully autonomous and reliable evaluation process. MobileAgentBench features itself for it can be run on real devices and needs no significant code changes to integrate agents into the framework.\\n\\nThe authors evaluate the performance of current SOTA mobile LLM agents on the new benchmark and find the success rates are relatively low, leaving space for further exploration.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Compared to the existing mobile LLM agent benchmarks, MobileAgentBench can be run on real Android devices to test mobile digital agents, making the evaluation process dynamic and realistic.\\n2. The benchmark can be extended and integrated with more task instances easily, with only several lines of Python codes needed. It is possible that the benchmark will attract more people in the community to contribute to scale and enrich it.\\n3. The evaluation process only checks the final state in the app system to detect if it is successfully completed, while allows the variance of trajectories/steps. It is effective and efficient way of evaluate digital tasks.\", \"weaknesses\": \"1. MobileAgentBench consists of 100 tasks totally, spanning across 10 simple use mobile apps with simple and straightforward user interfaces, which may damage the diversity and of the benchmark. The apps might need more careful selection and filtering, and the tasks could be more realistic as it would run into various situations in the wild environment.\\n2. The analysis part within the article is found to be inadequate and lacking in robustness. To enhance its credibility and depth, a more thorough comparative evaluation of agent performance is required. Furthermore, incorporating an examination of task types that span multiple applications could provide valuable insights.\", \"questions\": \"1. As the suceess rates of agents on MobileAgentBench are low, a detailed and thorough error analysis is needed to illustrate why they perform not well when doing the tasks and what is the bottleneck of improving on the benchmark. I recommend presenting some failure examples to better illustrate it.\\n2. How do you design and formulate the action space of MobileAgentBench? Could you explain more baout it?\\n3. In the part when you discuss about the digital assistant and human-computer interaction, I think one missing citation would be OSWorld[1], which is a benchmark of real-world computer tasks in a unified and interactive computer environment for multimodal agents. The task simulation and evaluation pipeline are quite related to your work.\\n\\n[1] Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments. CoRR, abs/2404.07972, 2024. doi: 10.48550/ARXIV.2404.07972. 
URL https://doi.org/10.48550/arXiv.2404.07972.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces MobileAgentBench, a benchmark specifically designed to evaluate mobile LLM agents. The present work is motivated by the growing popularity of LLM agents in the mobile setting and the difficulty of developing a common platform to evaluate these diverse agents. The authors propose to alleviate the burden of manual setting by allowing LLM agents to interact directly with GUIs. They define 100 tasks across 10 open-source applications, categorizing them by difficulty to facilitate thorough evaluation. The experimental results highlight the strengths and weaknesses of existing agents, including AppAgent and MobileAgent.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed benchmark addresses an important gap in evaluating mobile LLM agents, which is essential for the advancement of this field.\\n\\n2. The authors implement 100 benchmarking tasks categorized by different levels of difficulty, which could provide a more nuanced evaluation of LLM agents.\\n\\n3. The paper provides a systematic comparison of multiple agents, establishing a foundation for understanding their performance capabilities and limitations.\", \"weaknesses\": \"1. The scale and scope of the designed tasks are generally smaller than other existing benchmarks. For example, SPA-Bench [1] contains 340 tasks with both single-app and cross-app settings. A comprehensive comparison and discussion are needed to justify the unique advantage of the present work.\\n\\n2. The paper lacks a detailed explanation of the underlying protocol of implementing MobileAgentBench, which could hinder reproducibility and applicability.\\n\\n3. The evaluation metric for computation cost is limited. The authors only evaluate the token cost of using cloud-based LLMs, which does not account for the constrained computation resources of mobile devices. \\n\\n4. The experiments do not include agents running on locally deployed LLMs like Phi-3 [2]. This misses an important setting for mobile agents, which offers unique advantages such as privacy preservation and low latency. \\n\\n[1] Chen, Jingxuan, et al. \\\"SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation.\\\" NeurIPS 2024 Workshop on Open-World Agents. 2024.\\n\\n[2] Abdin, Marah, et al. \\\"Phi-3 technical report: A highly capable language model locally on your phone.\\\" arXiv preprint arXiv:2404.14219 (2024).\", \"questions\": \"Please refer to the Weaknesses above.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces MobileAgentBench, a new benchmark for evaluating Large Language Model (LLM)-based mobile agents on the Android platform. The authors argue that existing benchmarks suffer from limitations in scalability, robustness, flexibility, and realism. MobileAgentBench aims to address these issues by providing 100 built-in tasks across 10 open-source Android apps, facilitating automated evaluation on real devices, and incorporating a flexible task success judgment mechanism based on final UI state and app event signals. The benchmark also allows for easy customization and integration with existing agents, requiring minimal code modifications. 
The authors evaluate five popular mobile LLM agents (AndroidArena, AutoDroid, AppAgent, CogAgent, and MobileAgent) using their benchmark and provide baseline performance data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors convincingly identified important dimensions in current mobile agent benchmarks, particularly regarding scalability, robustness to diverse action sequences, realistic device-based testing, and ease of integration. Here are some strengths.\", \"automated_evaluation\": \"The framework automates the evaluation process, reducing manual effort and increasing reproducibility.\", \"accessibility_and_ease_of_integration\": \"The benchmark is designed to be easily integrated with existing mobile agent frameworks, requiring minimal code changes.\", \"open_source_and_reproducible\": \"The authors commit to making the benchmark open-source, promoting transparency and further development by the community.\", \"baseline_data_provided\": \"The evaluation of five existing agents offers valuable baseline data for future research.\", \"weaknesses\": \"I think the main motivation for building a new benchmark should ultimately be about \\\"can we evaluate better\\\" or \\\"can we evaluate more complex tasks\\\". While the suggested benchmark seems more \\\"user-friendly\\\" than the referenced ones, I'm not quite convinced that MobileAgentBench is moving us forward.\\n\\nLimited app diversity results in limited agent behaviors. The benchmark currently relies on 10 open-source apps from a single developer (SimpleMobileTools). While understandable for initial development, this limits the diversity of UI elements, interaction patterns, and complexities that agents face. The provided tasks, while covering basic functionalities, might not adequately capture the complexity of real-world mobile interactions. I would expect a new benchmark that encompasses the referenced ones to involve more intricate, multi-step tasks with data input, navigation across multiple apps, and error handling.\", \"limited_metric_depth\": \"While the proposed metrics (SR, SE, Latency, Tokens, FFR, OER) are relevant, they could be expanded to capture aspects like agent robustness to unexpected UI changes, error recovery, and efficiency in terms of actions taken.\", \"limited_explanation_of_the_agent_event_listener_app\": \"The functionality and implementation details of the Android Accessibility Service-based event listener app are not thoroughly explained. A more detailed description is crucial for understanding the robustness and reliability of the event capture mechanism.\", \"questions\": \"How does MobileAgentBench handle tasks that require interactions with system-level UI elements (e.g., notifications, permission requests)?\\n\\nWhat is the specific implementation of the \\\"hit test\\\" used by the benchmark to determine successful button clicks?\\n\\nHow does the framework handle cases where the agent crashes or the app under test becomes unresponsive?\\n\\nHow does the choice of UIAutomator as the backend for AndroidViewClient impact performance and reliability? 
Have other backends been considered?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces MobileAgentBench, an efficient and user-friendly benchmarking framework designed for evaluating large language model (LLM)-based mobile agents within the Android environment. Addressing limitations in existing benchmarks, MobileAgentBench allows for the autonomous testing of agents on real devices, with minimal code requirements for integration. The benchmark supports 100 predefined tasks across ten open-source applications, covering various difficulty levels. It evaluates agent performance based on metrics such as success rate, efficiency, and latency, and incorporates Android Accessibility Services to capture real-time app events. This design facilitates customizable and accurate testing, providing a robust platform for developing and evaluating intelligent mobile agents. The benchmark is interesting for mobile use cases. The writing needs to be further improved before acceptance consideration.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. It is a rather new benchmark designed for mobile use cases.\\n\\n2. The benchmark designs multiple tasks ranging from different difficulty levels.\\n\\n3. It is easy-to-use, and can be integrated within few lines\", \"weaknesses\": \"1. Writing can be further improved\\n\\n2. Large blank space should be fixed\", \"questions\": \"1. Writing of the paper can be further improved. For example, we do not need such detailed illustrations on related work as within this paper. Save the space for more benchmark analysis is preferable.\\n\\n2. Your limitations and further work does not sufficiently comprise a single section. Merging it into conclusion is better.\\n\\n3. Unnecessary blank space should be fixed such as from line 69 to line 74 and line 246 to line 249, etc.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BfNylgbDuy
Preference-Enhanced Instruction Tuning for Machine Translation
[ "Shuyun Yang", "Zhengmao Ye", "Yan Zhang", "Lei Duan", "Mingjie Tang" ]
Although Large Language Models (LLMs) like GPT-4 perform excellently in machine translation, their high costs and limited scalability make them impractical in many scenarios. Recently, there has been increased effort to build smaller LLMs that can achieve comparable performance. However, while typical instruction tuning methods tend to directly mimic reference translations, leading to less meaningful results, recent preference optimization methods have shown improvements. Despite this, they still fail to effectively utilize crucial preference information during inference. In this paper, we introduce Preference-Enhanced Instruction Tuning (PEIT), a novel method that explicitly incorporates preferences into both the instruction fine-tuning and the inference phase. Our extensive experiments show that PEIT not only improves translation quality but also significantly outperforms state-of-the-art preference optimization methods and instruction tuning baselines on multiple language benchmarks.
[ "machine translation", "preference alignment", "large language model" ]
Reject
https://openreview.net/pdf?id=BfNylgbDuy
https://openreview.net/forum?id=BfNylgbDuy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yaeJwoy53Z", "xFbpWRONVw", "vWnzpuoPUC", "rRHRLB6aXs", "qhO2VKY90g", "o99eWDDVeQ", "mmWna3goy8", "leNvuT6bcY", "gMH8FrdnVX", "cR3wPnt9nQ", "bFY5S0ng4p", "aEtv4kT8ol", "a546738iBz", "XjQQJKjlHE", "W40IzVnWHo", "SMd7aQiaSy", "MGSI8Leoo5", "JEy61VtrWp", "C0KnDSk8Ax", "AxWz7rG0z6", "7cL13Ps9kk", "78vpCF36q6", "01oSTx4yWE" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732862442837, 1732715624283, 1732416017779, 1732622324747, 1732726970317, 1734320488068, 1732510693111, 1732626686389, 1737523622236, 1732472726605, 1730031550517, 1732585284005, 1732600056502, 1732416083196, 1732416423828, 1732415794170, 1732416367542, 1730282698945, 1730683360748, 1732715577486, 1730452991754, 1732587058997, 1732417043871 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_LVy7" ], [ "ICLR.cc/2025/Conference/Submission4164/Area_Chair_5k6e" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_LVy7" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_uHW6" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_Xpb8" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_Xpb8" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_LVy7" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_YwvZ" ], [ "ICLR.cc/2025/Conference/Submission4164/Reviewer_uHW6" ], [ "ICLR.cc/2025/Conference/Submission4164/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Happy Thanksgiving, my dear reviewer LVy7. We hope you have a happy holiday.\\n\\nAs you can notice, we have tried our best to improve the points that you mentioned, and all these points are fixed. This indeed improve this work greatly, as you can notice, your review is the very important for this work, If you are satisfied with our work and responses, please consider giving us a higher score. \\n\\nWe also welcome your suggestions for our revised manuscripts at any time. Your support is very important to us, thank you! cc AC\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for reviewing our response! We hope our response addresses your concerns. If you have any further questions, please feel free to let us know. We look forward to your reply and further discussion!\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your thorough review and for providing many valuable suggestions. 
We will address and respond to each point.\\n\\n## For weakness 1\\n**W1.** The proof of ICL's effectiveness presented in the paper mainly derives from Dai et al. (2023), which raises concerns about the novelty of your paper.\\n\\n**A.** Our main contribution is to propose the **first framework that explicitly leverages preference information** for efficient preference optimization. \\nWe theoretically validated the feasibility of our method following the formula proposed by (Dai et al., 2023)[1]. Subsequently, we confirmed it from an experimental perspective.\\n\\n## For weakness 2.\\nL_ICFT is a typo, which should be L_ICL.\\n\\n## For weakness 3, question 1 and question 2.\\nWe believe these questions are related, so we have organized them together for a combined response.\\n\\n**Q1.** How does the paper categorize samples in the dataset into subsets that contain different preference intentions? \\n\\n**Q2.** During the inference process, is the retrieval corpus still derived from the training set? If so, how does this method's performance get affected when the preference intentions in the prompt are not sufficiently similar to those in the training data? \\n\\n**W3.** The definition of the concept of \\\"preference intention\\\" in the article is vague, affecting the clarity of the paper's arguments. Furthermore, the article does not provide detailed information on how to determine the preference intentions of samples in the dataset. How does the paper categorize samples in the dataset into subsets that contain different preference intentions? During the inference process, is the retrieval corpus still derived from the training set? If so, how does this method's performance get affected when the preference intentions in the prompt are not sufficiently similar to those in the training data?\\n\\n**A.** We mentioned the concept of preference intentions in the first section of the original paper (Line 51). Sentences with the same preference intentions have similar embeddings, allowing us to retrieve examples with the same preference intentions using cosine similarity.\\n\\nThe **retrieval space during testing remains consistent with that during training**. We have also supplemented the experiments to validate the impact of using in-context examples with different preference distributions on the output translation quality for the same test input. Please allow us to elaborate on the design and results of this experiment here: We retrieved the Top 3 examples most similar to the input and used each example individually as the in-context example for the input to evaluate the impact of different preference distributions on the results. Additionally, to assess the effect of completely unrelated preference distributions on the translation results, we also used a fixed example that was entirely unrelated to the input as the in-context example. \\n\\n| Base model | Dataset | Direction |\\n| --- | :---: | :---: |\\n| Llama3-8b | ALMA-R-Preference | xx->en |\\n\\n\\n| Model | PEIT+rank 1 example | PEIT+rank 2 example | PEIT+rank 3 example | PEIT+constant example | SFT | CPO |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| XCOMET | 95.25 | 94.67 | 94.59 | 93.60 | 92.13 | 93.62 |\\n\\n\\nThe results are shown in the table above. 
Even when using examples with preference distributions unrelated to the input as in-context examples, PEIT is able to maintain a certain level of performance, demonstrating its adaptability.\\n\\n## For weakness 4.\\n**W4.** The experiments appear to be conducted within a single domain or distribution, indicating that the similarity of preferences between the training and testing datasets is consistent. This seems insufficient to validate the scenario mentioned in the introduction, where the preferences in the inference prompts do not align with the training data.\\n\\n\\n\\n**A.** Please allow us to emphasize once again the relationship between \\\"prompt shift\\\" and \\\"multiple distributions\\\" here. \\nThe **prompt shift** issue mentioned in the introduction arises from **differences in prompt formats** between training and testing, and it is unrelated to the content. The content of data in translation tasks exhibits characteristics of multiple distributions (Line 113). PEIT ensures that the explicitly provided examples and the input belong to the same distribution through retrieval.\\n\\n[1] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers, 2023.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for reviewing our response!\\nIf you have any further questions, please feel free to let us know. We look forward to your response and further discussion!\"}", "{\"comment\": \"Thank you for your responses. I will maintain my current score.\"}", "{\"metareview\": \"This paper introduces Preference-Enhanced Instruction Tuning (PEIT), a framework for machine translation that integrates preference learning into both the fine-tuning and inference stages. By leveraging a combination of generation loss, a preference-based DPO-like loss, and hidden representation-level alignment, PEIT outperforms existing methods across multilingual benchmarks, delivering significant improvements in metrics such as BLEU and xCOMET.\", \"strengths\": [\"The use of preference information during fine-tuning and inference is intuitive, as user-defined preferences and consistency should play a key role in machine translation systems.\", \"Experimental evaluations, particularly on the FLORES dataset, demonstrate that PEIT outperforms baselines and state-of-the-art preference optimization methods, achieving notable gains in BLEU and xCOMET scores.\"], \"weaknesses\": [\"The primary issue with the paper is its clarity and reproducibility. All four reviewers raised at least one major concern about the paper\\u2019s presentation. One reviewer found the contribution unclear, another highlighted the lack of a \\\"clear definition or explanation\\\" along with several other presentation issues, and two reviewers pointed out a \\\"lack of details\\\" (or the absence of \\\"lots of the details.\\\") Additionally, two reviewers independently raised concerns about how the work differentiates itself from Dai et al. (2023), which may further underscore the lack of clarity regarding its contributions. While the authors\\u2019 responses addressed some points about the contributions and technical approach, the paper appears to need a significant rewrite to resolve these issues comprehensively.\", \"There were additional concerns, such as the assumption that preference automatically equates to translation quality, which the authors partially addressed this more experiments. 
Some other issues also point to clarity problems, even when not presented as such (e.g., when the authors had to explain the relationship between \\\"prompt shift\\\" and \\\"multiple distributions\\\" to resolve a separate issue during the discussion).\", \"Overall, while the work shows promise, I think it cannot be accepted in its current state due to significant issues in presentation, clarity, and reproducibility.\"], \"additional_comments_on_reviewer_discussion\": \"Three out of four reviewers participated in the discussions, but none were significantly swayed by the authors' responses and chose to maintain their original recommendations (all ratings below acceptance threshold.)\\n\\nOne reviewer did not participate in the discussion; however, their review was consistent with the others, particularly in highlighting significant clarity issues.\"}", "{\"title\": \"An example of calculating h_C.\", \"comment\": \"**Q.** I am still much confused by the writing on the contrastive loss bit. What does \\\"the first token\\\" mean here? During training, in a batch, what needs to be present? Could you give me an example?\\n\\n**A.** Thank you for your feedback! We realize that we may not have explained it clearly. Our manuscript and response convey the same concept.\\nHere, we provide a detailed calculation example of how $h_C$ is calculated as follows: \\n\\n<bos>[in-context example] [input] [**o**,u,t,p,u,t] <eos>\\n\\nWe use the probability distribution of the first token of the output (here, **o**) predicted by the model as the hidden representation after the model has read the [in-context example].\"}", "{\"title\": \"General response to reviewers\", \"comment\": \"Thank you to all the reviewers for their valuable suggestions. We have submitted a revised version based on the recommendations, primarily addressing **typo**, adding detailed version numbers for the XCOMET model, incorporating **relevant references suggested by the reviewers**, **initialization of PEIT** in the experiments, including **additional ablation studies** conducted during the rebuttal phase, and **clarifying our contributions**.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the response! I have also read other reviewers' comments and your response. I am still much confused by the writing on the contrastive loss bit. I see that this is also constantly raised by other reviewers.\\n\\n- I hope to clarify with you how $h^{i}_{C}$ is exactly obtained for $C$, $C^{+}$, and $C^{-}$? In the manuscript, it states *\\\"[these] denote the representations of the preferences intentions of the model for contextual information\\\"*. In your response, it is the *\\\"probability distribution of the model generating the first token\\\"*. I suppose the hidden size is $|1\\\\times |vocab|$? What does \\\"the first token\\\" mean here? During training, in a batch, what needs to be present? Could you give me an example?\"}", "{\"summary\": \"In this study, the authors present Preference-Enhanced Instruction Tuning (PEIT), a novel approach that explicitly integrates preferences into both the instruction fine-tuning and inference phases. Experimental results highlight the effectiveness of PEIT.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The empirical results clearly demonstrate the effectiveness of the proposed PEIT.\", \"weaknesses\": \"1. This paper is not well-written and is hard to follow. 
For instance, some concepts are not well-defined at their first use. $D$ is mentioned for the first time at line 126 but is defined at line 173, where $k$, referring to the number of distributions, is not explicitly defined in this paper, unless I missed it. Furthermore, in my opinion, sections 2.1 and 2.2 are unnecessary and disconnected from other parts of this work. They do not help in understanding the idea of this work.\\n\\n2. This paper is not self-contained. In the abstract, the authors mention that PEIT explicitly incorporates preferences into both the fine-tuning and inference phases. However, the authors did not explain how PEIT is used in the inference stage.\\n\\n3. This paper presents minimal novelty. If I understand this work correctly, there are three components in the training objective. $L_{ICL}$ is the standard training loss used for supervised fine-tuning, $L_{prefer}$ is the same as the CPO loss, and $L_{context}$ is highly similar to the contrastive loss as presented in SimCSE[1] and SimCLR[2].\\n\\n4. There are some presentation issues. Table 3, Figure 2, Figure 3, and Table 4 are not referred to in the text, which makes the paper hard to follow.\\n\\n[1] Gao, Tianyu, Xingcheng Yao, and Danqi Chen. \\\"Simcse: Simple contrastive learning of sentence embeddings.\\\" arXiv preprint arXiv:2104.08821 (2021). \\n[2] Chen, Ting, et al. \\\"A simple framework for contrastive learning of visual representations.\\\" International conference on machine learning. PMLR, 2020.\", \"questions\": \"1. What are $C^{+}$ and $C^{-}$? As shown in the equation at line 183, both $y_w$ and $y_l$ are conditioned on the same context $C_i$.\\n2. What is the similarity in $L_{context}$?\\n3. What is $L_{ICFT}$ at line 202?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for responding.\\nI'm still confused by the details of the retriever.\\n1. The training details of it, e.g, training data construction, training steps, batch size, etc; Any constrains during the llm training process? or the authors just train the llm and use cosine similarity? By ` cosine similarity to compare the embedding similarity of each example`, is the embedding an average across all words or average pooling, max pooling, etc? It's hard to reproduce without all these details (besides those I didn't mention here).\\n2. The retriever performance, e.g, p/r/f, to understand how important is this retriever.\\n3. Why a random constant example works compared to SFT? \\n4. it seems that K is not important according to the response.\"}", "{\"title\": \"Details of retriever\", \"comment\": \"**Q.** I'm still confused by the details of the retriever.\\n1. The training details of it, e.g, training data construction, training steps, batch size, etc; Any constrains during the llm training process? or the authors just train the llm and use cosine similarity? By cosine similarity to compare the embedding similarity of each example, is the embedding an average across all words or average pooling, max pooling, etc? It's hard to reproduce without all these details (besides those I didn't mention here).\\n2. The retriever performance, e.g, p/r/f, to understand how important is this retriever.\\n3. Why a random constant example works compared to SFT?\\n4. it seems that K is not important according to the response.\\n\\n**A.** Thank you for your feedback! Here, we provide a detailed description of how the retriever works. 
First, we explain the goal of the retriever, and then we describe the implementation details of our retriever. We hope this will be helpful to you!\\nThe purpose of designing the retriever is **to provide the MT model with examples that share the same preference intention** as the source text to be translated. By leveraging explicit examples, we aim to enhance the model's translation performance. Therefore, we intend to use a retriever to augment the current text to be translated. \\n\\nWe did not train our own embedding model; instead, **we used an open-source model**. Specifically, we used \\\"xlm-r-bert-base-nli-stsb-mean-tokens\\\" [1] as the tool for sentence embedding with the default configuration. Then, we provide the top k most similar preference examples for each sentence to be translated by comparing the cosine similarity of the embedding of the [source text], completing the preference augmentation as shown below:\\n\\n[source text] [pair-wise target text] ---> [in-context example] [source text] [pair-wise target text]\\n\\nDue to the lack of ground truth to label whether the two [source text] are relevant, we are unable to calculate the p/r/f of the retriever. Therefore, we demonstrated the importance of the retriever by designing experiments with different similarity ranks.\\n \\nAs for why using the constant example PEIT is superior to SFT, I believe this is inevitable because the preference-enhanced loss designed for **PEIT incorporates preference learning from the output perspective, whereas SFT only performs imitation learning** on the data.\\n\\nThe value of K is important for the retriever, or more specifically, we believe that the examples retrieved are important. Please allow us to list the ablation experiments we conducted for the retriever again here.\\n\\nDifferent numbers of examples (**different K values**)\\n| Base model | Dataset | Direction |\\n| --- | :---: | :---: |\\n| Llama3-8b | ALMA-R-Preference | xx->en |\\n\\n\\n| PEIT | k=1 | k=2 | k=3 |\\n| :---: | :---: | :---: | :---: |\\n| XCOMET | 95.25 | 95.29 | 95.36 |\\n\\nWhen K=1, examples with different ranks (**different retrieval qualities**)\\n\\n| Base model | Dataset | Direction |\\n| --- | :---: | :---: |\\n| Llama3-8b | ALMA-R-Preference | xx->en |\\n\\n\\n| Model | PEIT+rank 1 example | PEIT+rank 2 example | PEIT+rank 3 example | PEIT+constant example | SFT | CPO |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| XCOMET | 95.25 | 94.67 | 94.59 | 93.60 | 92.13 | 93.62 |\\n\\nAs can be seen from the above experiments, the more complete the retrieval (using higher quality relevant examples or increasing the number of relevant examples), the better the final result.\\n\\n[1] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019\"}", "{\"title\": \"Response (question 3 and 4)\", \"comment\": \"## For question 3 and question 4.\\n**Q3.** In the inference process, what is the specific value of top-k ? Does the value of k impact performance? Additionally, more comprehensive details on other experimental settings, such as learning rate, are needed. \\n\\n**Q4.** In the section 4.1 titled \\u201cDifferent preferences representations of context,\\u201d could you provide a more detailed description of the experimental setup? 
Did you select similar samples at different levels of contextual similarity for each of the 100 sample points?\\n\\n**A.**\\n\\n**We have added experiments related to the value of k.**\\n\\n| Base model | Dataset | Direction |\\n| --- | :---: | :---: |\\n| Llama3-8b | ALMA-R-Preference | xx->en |\\n\\n\\n| PEIT | k=1 | k=2 | k=3 |\\n| :---: | :---: | :---: | :---: |\\n| XCOMET | 95.25 | 95.29 | 95.36 |\\n\\n\\nIn our experiments, we set k = 1 by default. Additionally, we evaluated the impact of different k values on performance, as shown in the table above. The larger the k value, the better the performance. However, since larger k values result in higher training and inference costs, we chose k = 1 as the default.\\n\\nThank you for your feedback. In the revised version, we will provide detailed experimental settings. Please allow us to reiterate here: the learning rate is set to 2e-5, the LoRA rank is set to 32, and the LoRA adapter is enabled on the QKVO layer.\\n\\n**A.** We assume your understanding of Section 4.1, titled \\\"Different preference representations of context,\\\" is correct. \\n\\nThank you again for carefully reviewing these responses. In the revised version, we will address the issues mentioned above by refining the paper with added details, accurate and additional experiments.\"}", "{\"title\": \"Response(weakness 3 and 4)\", \"comment\": \"## For weakness 3 and weakness 4.\\n**W3.** Lack of baselines such as [2]. \\n\\n**W4.** I would appreciate if various sized models/ training data could be involved.\\n\\n**A.** We have carefully reviewed [2] and found that their work is orthogonal to ours. Therefore, we think it is unnecessary to compare the two.\\n\\nThank you for your feedback. We additionally used a 1.1B model to validate the performance of PEIT. The experimental results are as follows, demonstrating that PEIT can achieve relatively optimal performance across models of different scales.\\n\\n| TinyLlama-1.1b | SFT | CPO | PEIT |\\n| :---: | :---: | :---: | :---: |\\n| XCOMET | 74.86 | 75.89 | 76.65 |\\n\\nThank you again for carefully reviewing these responses. In the revised version, we will address the issues mentioned above by refining the paper with added details, accurate and additional experiments.\\n\\n[1] Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation, 2024b.\\n\\n[2] Luong, Trung Quoc, et al. \\\"Reft: Reasoning with reinforced fine-tuning.\\\" arXiv preprint arXiv:2401.08967 (2024).\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your constructive comments. We will respond to each point.\\n\\n## For weakness 1.\\n**W1.** I think the paper should make it clear about this paper's contribution. The proof in Sec 2.2 is the same as in (Dai et al., 2023) which is cited in the paragraph. I am not sure about the novelty of using in-context representations (even just for MT) and applying as these are neither defined nor cited. Please also see Question 1. Perhaps some clarifications are needed? \\n\\n**A.** Our main contribution is to propose the **first framework that explicitly leverages preference information** for efficient preference optimization. \\nWe theoretically validated the feasibility of our method following the formula proposed by (Dai et al., 2023)[1]. 
Subsequently, we confirmed it from an experimental perspective.\\n\\n## For weakness 2.\\n**W2.** Being able to leverage the preference information in the context/demonstration (both at training and inference) is definitely a selling point. However, I think this is not proven through the experiment design because \\\"preference\\\" is not equivalent to translation quality. I feel that more ablation studies are needed in addition to Sec 4.1. For example, it would be nice to see inference with the same test input but using in-context examples with a different preference distribution and understand how that qualitatively affects the output translation. In addition to general-domain or news translation test sets, one simple and reasonable setup could be terminology translation.\\n\\n**A.** Following the perspective advocated in [3], we ranked translation results based on their quality (i.e., the quality metric) and considered this ranking as a \\\"preference\\\" (i.e., the higher the quality metric, the more preferred the translation).\\n\\nWe have supplemented the experiments to validate the impact of using in-context examples with different preference distributions on the output translation quality for the same test input. Please allow us to elaborate on the design and results of this experiment here: We retrieved the Top 3 examples most similar to the input and used each example individually as the in-context example for the input to evaluate the impact of different preference distributions on the results. Additionally, to assess the effect of completely unrelated preference distributions on the translation results, we also used a fixed example that was entirely unrelated to the input as the in-context example. \\n\\n| Base model | Dataset | Direction |\\n| --- | :---: | :---: |\\n| Llama3-8b | ALMA-R-Preference | xx->en |\\n\\n\\n| Model | PEIT+rank 1 example | PEIT+rank 2 example | PEIT+rank 3 example | PEIT+constant example | SFT | CPO |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| XCOMET | 95.25 | 94.67 | 94.59 | 93.60 | 92.13 | 93.62 |\\n\\n\\nThe results are shown in the table above. Even when using examples with preference distributions unrelated to the input as in-context examples, PEIT is able to maintain a certain level of performance, demonstrating its adaptability.\\n\\n## For question 1.\\n**Q1.** Experiment details.\", \"line_190_to_line_207\": \"this part looks a bit disconnected.\\n\\n+ Iines 190-196, how are those hCi context terms obtained? What exactly is the similarity function sim()?\\n+ Also given that these are fed as context in a prompt, I am unsure how these can be disentangled.\\n+ Near line 201, LICFT is not defined.\\n\\n**A.** h_C is the **probability distribution** of the model generating the first token. sim() is the cosine similarity function. The **L_ICFT in line 201 is a typo**; it should be L_ICL.\\n\\n\\n\\n## For question 2.\\n**Q2.** What does \\\"incomplete ablation\\\" mean in \\\"Compared with incomplete ablation\\\" under Sec 3.4 Line 312?\\n\\n**A.** Thank you for your feedback. We found that this was a typo. \\\"incomplete ablation\\\" should be \\\"ablation.\\\" \\n\\n## For question 3.\\n**Q3.** Citations and software version\\n\\n+ related work on preference learning for MT [2].\\n+ for the xCOMET and \\\"KIWI\\\" metrics, it would be better to provide the correct citation and perhaps footnote the exact model version.\\n\\n**A.** Thank you again for carefully reviewing this feedback. 
In the revised version, we have cited this work [2] and clarified the model version in the appropriate sections.\\n\\n[1] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers, 2023.\\n\\n[2] Dawei Zhu, Sony Trenous, Xiaoyu Shen, Dietrich Klakow, Bill Byrne, and Eva Hasler. A preference-driven paradigm for enhanced translation with large language models, 2024. \\n\\n[3] Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation, 2024b.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review comments; they are extremely valuable. We will address and respond to each point individually.\\n\\n## For weakness 1 and question 1.\\n**Q1.** Lacks lots of the details, which makes the results not convincing enough.\\n\\n**A.** Due to space limitations, we were unable to include the complete baseline setup in the main text. We have added these details in the appendix of the revised version. Please allow us to reiterate the setup of these baselines here:\\n\\n| Baseline | lr | Lora rank | Lora target | initialization | random_seed |\\n| :---: | :---: | :---: | :---: | :---: | :---: |\\n| SFT | 2e-5 | 32 | QKVO | Gaussian distribution | 42 |\\n| CPO | 2e-5 | 32 | QKVO | Gaussian distribution | 42 |\\n| DPO | 2e-5 | 32 | QKVO | Adapter weights trained with SFT | 42 |\\n| ICFT | 2e-5 | 32 | QKVO | Gaussian distribution | 42 |\\n| ICPFT | 2e-5 | 32 | QKVO | Gaussian distribution | 42 |\\n| PECPO | 2e-5 | 32 | QKVO | Gaussian distribution | 42 |\\n| PEIT | 2e-5 | 32 | QKVO | Gaussian distribution | 42 |\\n\\n\\nThe retriever plays a crucial role in our framework. We use cosine similarity to compare the embedding similarity of each example, ensuring that the retrieved examples are the top-k most similar ones. Additionally, our experiments have demonstrated that even when the retrieved examples are not the most similar, performance still improves. Please allow us to elaborate on the design and results of this experiment here: We retrieved the Top 3 examples most similar to the input and used each example individually as the in-context example for the input to evaluate the impact of different preference distributions on the results. Additionally, to assess the effect of completely unrelated preference distributions on the translation results, we also used a fixed example that was entirely unrelated to the input as the in-context example. \\n\\n| Base model | Dataset | Direction |\\n| --- | :---: | :---: |\\n| Llama3-8b | ALMA-R-Preference | xx->en |\\n\\n\\n| Model | PEIT+rank 1 example | PEIT+rank 2 example | PEIT+rank 3 example | PEIT+constant example | SFT | CPO |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| XCOMET | 95.25 | 94.67 | 94.59 | 93.60 | 92.13 | 93.62 |\\n\\n\\nThe results are shown in the table above. Even when using examples with preference distributions unrelated to the input as in-context examples, PEIT is able to maintain a certain level of performance, demonstrating its adaptability.\\n\\n## For question 2 and weakness 2.\\n**W2.** Performance. Actually, I tried to calculate the average BLEU and XCOMET score across languages, for example Table 1. There's no significant differences of BLEU and xCOMET score between methods. This might not be considered significant. 
\\n\\n**Q2.** Why the performance gain is not consistent across languages.\\n\\n**A.** Thank you for your feedback. The base language model was pre-trained on varying amounts of multilingual corpora, which inherently leads to an imbalance in multilingual capabilities. Without additional adjustments to the quantity of training data, different improvements in performance across languages are inevitable in post-training. \\n\\nSuch improvements in the translation domain can be considered significant. For example, in the CPO work (accepted by ICML24), they observed a 3.67% decrease in the BLEU metric but a 1.51% improvement in the XCOMET metric. In comparison, our work achieved a 0.28% improvement in the BLEU metric and a 2.19% improvement in the XCOMET metric.\\n\\n| ALMA-13B-LoRA | BLEU | XCOMET |\\n| ---: | :---: | :---: |\\n| +SFT on prefer data | 30.90 | 92.54 |\\n| +CPO | 27.03 | 94.05 |\\n\\n\\n| Llama2-13B | BLEU | XCOMET |\\n| ---: | :---: | :---: |\\n| +SFT | 27.46 | 88.33|\\n| +PECPO | 27.06 | 89.89 |\\n| +PEIT | 27.74| 90.52|\\n\\n\\n[1] Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation, 2024b.\\n\\n[2] Luong, Trung Quoc, et al. \\\"Reft: Reasoning with reinforced fine-tuning.\\\" arXiv preprint arXiv:2401.08967 (2024).\"}", "{\"summary\": \"This paper proposes PEIT, which improves the quality of machine translation by incorporating preference learning into instruction tuning and inference. The authors suggest that PEIT can do better than existing tuning methods such as Contrastive Preference Optimization (CPO). PEIT demonstrates improved BLEU and XCOMET scores.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Theoretically demonstrated the proposed method can help the paradox of fitting, allowing a single model to achieve the same loss lower bound as a multi-model setup.\\n2. The experimental results demonstrated superior performance than other competitors, such as CPO, DPO, etc.\", \"weaknesses\": \"1. Lacks lots of the details, which makes the results not convincing enough. For example:\\n - training details of other baselines, such as line 268, by \\\"we have made a specific design for the DPO training process ...\\\", the details are not clear to readers.\\n - training details and performance of the retriever, I believe this would greatly impact the final translation performance.\\n2. Performance.\\nActually, I tried to calculate the average BLEU and XCOMET score across languages, for example Table 1. There's no significant differences of BLEU and xCOMET score between methods. This might not be considered significant. [1]\\n3. Lack of baselines such as [2]\\n4. I would appreciate if various sized models/ training data could be involved.\\n\\n||BLEU|XCOMET|\\n|:---:|:---:|:---:|\\n|SFT|27.46| 88.33|\\n|PE-CPO| 27.06| 89.89|\\n|XCOMET|27.74| 90.52|\\n\\n[1] Kocmi, Tom, et al. \\\"Navigating the metrics maze: Reconciling score magnitudes and accuracies.\\\" arXiv preprint arXiv:2401.06760 (2024).\\n\\n[2] Luong, Trung Quoc, et al. \\\"Reft: Reasoning with reinforced fine-tuning.\\\" arXiv preprint arXiv:2401.08967 (2024).\", \"questions\": \"Please see the weaknesses.\\n1. The details\\n2. 
Why the performance gain is not consistent across languages.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes preference-enhanced instruction tuning (PEIT) for machine translation which consists of three parts:\\n 1. a typical generation loss function that considers in-context demonstrations during training;\\n 2. a DPO like loss function to consider a win-loss pair of outputs;\\n 3. a hidden representation-level loss that aligns a model's preference representation of the in-context example.\\n\\nThe paper compares the proposed method with quite a few baselines and some ablation cases by removing some part of the overall loss. Evaluated on a few languages in FLORES, the proposed method performs well especially when evaluated by the neural metric xCOMET.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The idea of making in-context preference information during fine-tuning and inference is intuitive for the approached task, machine translation, where sometimes there is no single gold translation and user-defined preference and consistency should be taken into consideration.\\n2. The paper experimented with a good number of baselines which are strong in the current research landscape. Overall, the results show that the PEIT method with all three losses incorporated performs pretty well.\", \"weaknesses\": \"1. I think the paper should make it clear about this paper's contribution. The proof in Sec 2.2 is the same as in (Dai et al., 2023) which is cited in the paragraph. I am not sure about the novelty of using in-context representations (even just for MT) and applying $\\\\mathcal{L}_{ICFT}$ as these are neither defined nor cited. Please also see Question 1. Perhaps some clarifications are needed?\\n2. Being able to leverage the preference information in the context/demonstration (both at training and inference) is definitely a selling point. However, I think this is not proven through the experiment design because \\\"preference\\\" is not equivalent to translation quality. I feel that more ablation studies are needed in addition to Sec 4.1.\\n - For example, it would be nice to see inference with the same test input but using in-context examples with a different preference distribution and understand how that qualitatively affects the output translation. In addition to general-domain or news translation test sets, one simple and reasonable setup could be terminology translation.\", \"questions\": \"1. Line 190 to line 207: this part looks a bit disconnected.\\n - Iines 190-196, how are those $h^{i}_{C}$ context terms obtained? What exactly is the similarity function $sim()$?\\n - Also given that these are fed as context in a prompt, I am unsure how these can be disentangled.\\n - Near line 201, $\\\\mathcal{L}_{ICFT}$ is not defined.\\n\\n2. What does \\\"incomplete ablation\\\" mean in \\\"Compared with incomplete ablation\\\" under Sec 3.4 Line 312?\\n\\n3. 
Citations and software version:\\n - related work on preference learning for MT [1].\\n - for the xCOMET [2] and \\\"KIWI\\\" metrics, it would be better to provide the correct citation and perhaps footnote the exact model version.\\n\\n[1] Zhu et al., 2024, https://arxiv.org/abs/2404.11288\\n\\n[2] Guerreiro et al., 2023, https://arxiv.org/abs/2310.10482\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for reviewing our response! We hope our response addresses your concerns. If you have any further questions, please feel free to let us know. We look forward to your reply and further discussion!\"}", "{\"summary\": \"During the inference process, the preference intentions in the prompt may not align with the training data of the model, which will affect the overall effectiveness of the model. To address this issue, this paper introduces Preference-Enhanced Instruction Tuning, a method that integrates preferences into both the fine-tuning and inference stages. This approach improves translation quality and outperforms existing preference optimization methods on multilingual benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The experimental results presented in the paper demonstrate that PEIT achieves better translation quality and alignment preformence.\\n2. The paper theoretically establishes the effectiveness of ICL in addressing the identification of preference intentions.\", \"weaknesses\": \"1. The proof of ICL's effectiveness presented in the paper mainly derives from Dai et al. (2023), which raises concerns about the novelty of your paper.\\n2. The term $L_{ICFT}$ on page 4 lacks clear definition or explanation, which may hinder understanding of the proposed method.\\n3. The definition of the concept of \\\"preference intention\\\" in the article is vague, affecting the clarity of the paper's arguments. Furthermore, the article does not provide detailed information on how to determine the preference intentions of samples in the dataset.\\n4. The experiments appear to be conducted within a single domain or distribution, indicating that the similarity of preferences between the training and testing datasets is consistent. This seems insufficient to validate the scenario mentioned in the introduction, where the preferences in the inference prompts do not align with the training data.\\n5. There is a writing error: in the seventh line of the first paragraph of the introduction, a citation is incorrectly marked as \\u201c?\\u201d.\\n\\n**Reference**:\\n\\n[1] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers, 2023.\", \"questions\": \"1. How does the paper categorize samples in the dataset into subsets that contain different preference intentions?\\n2. During the inference process, is the retrieval corpus still derived from the training set? If so, how does this method's performance get affected when the preference intentions in the prompt are not sufficiently similar to those in the training data? \\n3. In the inference process, what is the specific $k$ value of top-$k$? Does the value of $k$ impact performance? Additionally, more comprehensive details on other experimental settings, such as learning rate, are needed. \\n4. 
In the section 4.1 titled \\u201cDifferent preferences representations of context,\\u201d could you provide a more detailed description of the experimental setup? Did you select similar samples at different levels of contextual similarity for each of the 100 sample points?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your efforts in addressing my concerns. After carefully reviewing your response, I decide to keep my score as it is.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your thorough review, and we will respond to each of your questions.\\n\\n## For weakness 1 and weakness 3.\\n\\n**W1.** This paper is not well-written and is hard to follow. For instance, some concepts are not well-defined at their first use. D is mentioned for the first time at line 126 but is defined at line 173, where k, referring to the number of distributions, is not explicitly defined in this paper, unless I missed it. Furthermore, in my opinion, sections 2.1 and 2.2 are unnecessary and disconnected from other parts of this work. They do not help in understanding the idea of this work.\\n\\n**W3.** This paper presents minimal novelty.\\n\\n**A.** Thank you for your feedback. We have revised and refined our article in the updated version.\\nOur main contribution is to propose the **first framework that explicitly leverages preference information** for efficient preference optimization. We theoretically validated the feasibility of our method following the formula proposed by (Dai et al., 2023)[1]. Subsequently, we confirmed it from an experimental perspective.\\n\\nSince we only use the notation from its definition after line 173 and not before, we did not provide a complete definition earlier. \\nWe introduce this notation k merely to represent the multi-distribution characteristic of translation data. However, during the training and inference stages, we do not rely on a specific k value. This is because we can simply use similarity retrieval (cosine similarity) to obtain samples that belong to the same preference distribution as the current input.\\n\\n## For weakness 2.\\n**W2.** This paper is not self-contained. In the abstract, the authors mention that PEIT explicitly incorporates preferences into both the fine-tuning and inference phases. However, the authors did not explain how PEIT is used in the inference stage.\\n\\n**A.** In Line 089 and Fig. 1, we described the inference process in detail, and in Appendix C, we provided an even more comprehensive explanation of the inference process.\\n\\n## For weakness 4 and questions.\\n**W4.** There are some presentation issues. Table 3, Figure 2, Figure 3, and Table 4 are not referred to in the text, which makes the paper hard to follow.\\n\\n**Q1.** What are C+ ,C-? As shown in the equation at line 183, both yw and yl are conditioned on the same context C.\\n\\n**Q2.** What is the similarity in L_context ?\\n\\n**Q3.** What is L_ICFT at line 202?\\n\\n**A.** We sincerely apologize for any confusion caused by the presentation in Section 4. We placed each figure and table directly below the corresponding textual description and included supplementary explanations in their captions. 
In the revised version, we will explicitly reference the figures and tables to improve the clarity of the paper.\\n\\n$L_{ICFT}$ in line 202 should be $L_{ICL}$, as calculated earlier, and we use cosine similarity as sim() for $L_{context}$.\\n\\nWe retrieve the top-k examples using cosine similarity and sort these examples based on their similarity to the input. These examples are then evenly divided into three subsets according to their similarity scores: the most similar subset is denoted as $C^+$, followed by $C$, and finally $C^-$. Apologies for the confusion caused by this notation. Here, $C_i$ should represent the set of the most similar examples used during training. However, during inference, we cannot guarantee that the retrieved examples are always the most similar. Therefore, we designed $L_{context}$ to enhance the model's robustness to C.\\n\\nThank you again for thorough reviewing these responses.\\n\\n[1] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers, 2023.\"}" ] }
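Editorial note: the author replies in this record describe two mechanisms only in prose — (a) retrieving in-context examples whose source sentences are most similar to the input via cosine similarity over sentence embeddings from "xlm-r-bert-base-nli-stsb-mean-tokens", and (b) taking the probability distribution of the first output token, after the model has read the in-context example, as the representation h_C used with cosine similarity in L_context over C, C+ and C-. The sketch below is an illustrative reconstruction of those steps, not the authors' code; the prompt format, function and variable names, the layout of train_pairs, the temperature tau, and the use of a Hugging Face causal-LM interface are all assumptions.

```python
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# (a) Preference retrieval: top-k training examples whose source side is most similar
# to the input, by cosine similarity of sentence embeddings (model named in the reply).
embedder = SentenceTransformer("xlm-r-bert-base-nli-stsb-mean-tokens")

def build_index(train_sources):
    emb = torch.tensor(embedder.encode(train_sources))
    return F.normalize(emb, dim=-1)                      # one normalized row per training source

def retrieve_examples(src, index, train_pairs, k=1):
    q = F.normalize(torch.tensor(embedder.encode([src])), dim=-1)
    sims = (q @ index.T).squeeze(0)                      # cosine similarity to every training source
    top = torch.topk(sims, k).indices.tolist()
    return [train_pairs[i] for i in top]                 # assumed (source, preferred, dispreferred) tuples

# (b) h_C: distribution over the vocabulary for the first token of the output, after the
# model has read "[in-context example] [input]" (the exact prompt format is assumed).
def first_token_distribution(model, tokenizer, context, source):
    prompt = f"{context}\n{source}\n"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    logits = model(input_ids=ids).logits[:, -1, :]       # next-token logits = first output token
    return F.softmax(logits, dim=-1)                     # shape (1, vocab_size)

# (c) A contrastive term over C, C+ and C-, with cosine similarity as sim() (tau is assumed).
def context_loss(h_c, h_pos, h_neg, tau=0.1):
    pos = F.cosine_similarity(h_c, h_pos) / tau
    neg = F.cosine_similarity(h_c, h_neg) / tau
    return -torch.log(torch.exp(pos) / (torch.exp(pos) + torch.exp(neg))).mean()
```

Under these assumptions, retrieve_examples() supplies the in-context example sharing the input's preference intention, and first_token_distribution() yields the h_C, h_{C+}, h_{C-} vectors that enter context_loss().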
BfI0D1ci9r
Physics-informed GNN for non-linear constrained optimization: PINCO, a solver for the AC-optimal power flow
[ "Anna Varbella", "Damien Briens", "Giuseppe Alessio D'Inverno", "Blazhe Gjorgiev", "Giovanni Sansavini" ]
The energy transition is driving the integration of large shares of intermittent power sources in the electric power grid. Therefore, addressing the AC optimal power flow (AC-OPF) effectively becomes increasingly essential. The AC-OPF, which is a fundamental optimization problem in power systems, must be solved more frequently to ensure the safe and cost-effective operation of power systems. Due to its non-linear nature, AC-OPF is often solved in its linearized form, despite inherent inaccuracies. Non-linear solvers, such as the interior point method, are typically employed to solve the full OPF problem. However, these iterative methods may not converge for large systems and do not guarantee global optimality. This work explores a physics-informed graph neural network, PINCO, to solve the AC-OPF. We demonstrate that this method provides accurate solutions in a fraction of the computational time when compared to the established non-linear programming solvers. Remarkably, PINCO generalizes effectively across a diverse set of loading conditions in the power system. We show that our method can solve the AC-OPF without violating inequality constraints. Furthermore, it can function both as a solver and as a hybrid universal function approximator. Moreover, the approach can be easily adapted to different power systems with minimal adjustments to the hyperparameters, including systems with multiple generators at each bus. Overall, this work demonstrates an advancement in the field of power system optimization to tackle the challenges of the energy transition. The code and data utilized in this paper are available at https://anonymous.4open.science/r/opf_pinn_iclr-B83E/.
[ "Power systems optimization", "Non-linear optimization", "Graph neural networks (GNNs)", "Physics-informed neural networks (PINN)" ]
Reject
https://openreview.net/pdf?id=BfI0D1ci9r
https://openreview.net/forum?id=BfI0D1ci9r
ICLR.cc/2025/Conference
2025
{ "note_id": [ "urDRN3pmJD", "uJIMAxBdWg", "qNv76O00cR", "lpQ0HBmVRa", "ipmpADDFWJ", "cYy8Qr5jOl", "NghNoZhTD8", "Na4XTSSPqH", "MPDgYOTerP", "KDtLErurO3", "DT0rLxNqoN", "A9vzFXqbNd", "6M9M3DZEq0", "3JMpFx9kPt", "2PzYAfG3Wo", "237Pu6fAue" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732269825680, 1732644427713, 1732284241685, 1730700513881, 1734652817573, 1730695567358, 1732279661859, 1732634894884, 1732280533059, 1737523820052, 1732270804659, 1730345342760, 1729031171988, 1732679733842, 1732516785229, 1730691746556 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7149/Authors" ], [ "ICLR.cc/2025/Conference/Submission7149/Authors" ], [ "ICLR.cc/2025/Conference/Submission7149/Authors" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_JaWa" ], [ "ICLR.cc/2025/Conference/Submission7149/Area_Chair_wh9D" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_CnJU" ], [ "ICLR.cc/2025/Conference/Submission7149/Authors" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_UGxj" ], [ "ICLR.cc/2025/Conference/Submission7149/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7149/Authors" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_gXzX" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_UGxj" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_JaWa" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_JaWa" ], [ "ICLR.cc/2025/Conference/Submission7149/Reviewer_Nm3Q" ] ], "structured_content_str": [ "{\"title\": \"Reply to reviewer JaWa\", \"comment\": \"We thank the reviewer for their thoughtful comments and feedback. While we have decided not to proceed further with the submission, we would still like to take this opportunity to clarify and address the points raised.\\n\\n1. We appreciate your feedback on this point. To clarify, we do not claim to be the first to use Physics-Informed Neural Networks (PINNs) for solving ACOPF problems. Instead, our contribution lies in successfully achieving zero inequality violation using a PINN in a fully unsupervised framework, which has not been demonstrated in the works you reference.\\n\\n2. We recognize the importance of a comprehensive literature review and aim to clearly differentiate our work from existing studies. Below, we provide specific comparisons and highlight the distinctions:\\n[1] is a review article that broadly covers end-to-end learning methods for constrained optimization. While it includes methods cited in our work and those mentioned in your comments, our focus is on unsupervised PINN approaches for ACOPF, not general constrained optimization.\\n[2] addresses constrained optimization using a Deep Neural Network (DNN) but does not generalize to topological differences in power systems, exhibits inequality violations (unlike our approach), and is tested on only two simple systems with limited configurations (e.g., one generator per node and a single demand).\\n[3] was reviewed in our study but not cited because the method is not fully unsupervised, a key focus of our work.\\n[4] primarily targets general constrained optimization problems and is tested on a single power system (IEEE57) with 1000 instances, experiencing constraint violations in the test set. 
This limited scope does not demonstrate the robustness or generality needed for broader ACOPF applications.\\n\\n3. You raise an important point about comparisons with existing ML methods. However, our method's unsupervised nature makes direct comparisons with supervised ML approaches less relevant, as those methods typically map traditional ACOPF solutions (obtained via solvers) and do not solve the problem directly. Instead, it will be more relevant to compare our method to traditional optimization solvers, including MIPS and IPOPT, as both solve the problem directly.\\n\\n4. Our choice of using only 500 demand samples stems from the principle of avoiding data-intensive methodologies, a primary advantage of unsupervised learning approaches over supervised techniques. However, we acknowledge that increasing the number of samples could provide additional insights. Training with larger datasets would extend training times but does not fundamentally alter the method or its conclusions. The reference [4] reported similarly uses only 1000 instances.\\n\\n5. The experiments were conducted on ETH Euler Clusters, which comprise multiple nodes. Detailed specifications can be found on the official Euler webpage. Training times vary depending on the grid size, ranging from several hours to a few days. However, it is important to note that training is a one-time computational cost, after which inference becomes significantly faster than traditional solvers. This advantage is demonstrated in Figure 4. \\n\\n6. The model depicted in Figure 1 was custom-developed for this work. To the best of our knowledge, no other studies use the same architecture combined with our PINN loss function. \\n\\n7. You correctly note that achieving global optimality is a limitation of nonlinear optimization problems, including ACOPF. Our method does not claim to resolve this inherent challenge. Instead, we aim to provide an alternative to traditional nonlinear solvers for ACOPF problems, demonstrating competitive performance and practical advantages, such as zero inequality violations.\\n\\n8. The equality loss of up to 20 MW observed for the 118-bus system does not imply that MIPS fails to converge. Rather, it reflects the fact that equality constraints are treated as hard constraints and minor violations can occur within the solver's tolerance. Importantly, MIPS would not converge if inequality constraints were violated beyond permissible limits or if an initial condition was infeasible.\"}", "{\"title\": \"Reply to jaWa\", \"comment\": \"We thank the reviewer for their reply and for raising this important question.\\n\\n1.\\tRegarding inequality constraint satisfaction, we acknowledge that supervised methods like DC3 and some unsupervised methods (e.g., Park & Pascal, 2023) report negligible inequality gaps for test systems. Both references cited provide results for limited cases and report their findings in brief, without delving into theoretical guarantees of achieving zero inequality gaps. To address the reviewer\\u2019s point, we emphasize that the DC3 paper does not explicitly provide a theoretical guarantee for zero inequality violations. Instead, its results are empirically validated. Similarly, our approach experimentally achieves zero inequality violations (~10\\u22126) across the test systems considered, comparable to DC3. 
If the reviewer could point us to a section in DC3 or related works where such a theoretical guarantee is described, we would be glad to explore it further and potentially adapt similar reasoning to our framework. Lastly, we note that while experimentally achieving zero inequality violations is a valuable outcome, the challenges of extending this to a broader range of systems and ensuring robustness under diverse conditions remain areas for future investigation in this field.\\n\\n2.\\tRegarding the training times of DC3 and Park & Pascal, we kindly ask if the reviewer could point us to where the training time and dataset creation time are explicitly reported in these methods. From our review of the DC3 paper, only inference times are reported, and Park & Pascal state that solving (training) the instance takes approximately 120 minutes. While we acknowledge the longer training times, this tradeoff is an expected aspect of developing an unsupervised framework. To address concerns about scalability and training time reductions, we note that using GPUs instead of CPUs (as in our experiments) reduces training time by approximately fivefold, depending on the system. Further, while the training time scales with the system size, the inference time of our model remains constant, enabling rapid generalization to unseen conditions. Reducing training time may be critical for real-world deployment, and exploring hybrid supervised-unsupervised approaches or other efficiency improvements could be an avenue for future work, though it is not the focus of this paper. Finally, we appreciate the reviewer\\u2019s suggestion to analyze how training time scales with system size, we agree that it is a relevant question to explore.\\n\\n3.\\tWe respectfully maintain that comparing our unsupervised method to supervised models is not the most appropriate benchmark, as the two approaches have fundamentally different objectives. Supervised models, such as DC3, aim to replicate the solutions of a solver, inheriting the solver\\u2019s tendencies and local minima. Our unsupervised method, by contrast, directly solves the AC-OPF problem without relying on pre-computed solutions, potentially identifying different valid local minima. Why should we compare our unsupervised method to supervised models that aim to replicate the solution of a solver, when we can directly compare our method to the solver itself?\\n\\n4.\\tWe acknowledge that the equality gaps reported in our current submission, when using MIPS as the baseline solver, may appear larger than what is expected for optimization solvers. To address this concern, we will recompute these results using IPOPT, as employed in Park & Pascal (2023). \\n\\n5.\\tCurrently, our work primarily establishes the potential of the method without presenting explicit empirical evidence on varying grid topologies. However, we recognize the necessity of demonstrating this capability, both theoretically and empirically. We plan to include additional experiments in future iterations of our work. These experiments will test the method's performance on grids with varying topologies. These results will provide empirical evidence to support our claim about the method's generalization capabilities. Theoretically, the use of a graph neural network (GNN) as part of the architecture inherently allows for flexibility across different grid topologies. 
The GNN leverages the graph structure of the grid, making it topology-agnostic in terms of its operations.\\n\\n6.\\tThe choice to limit the dataset to 500 samples was deliberate and aimed at highlighting the generalization capabilities of our approach. Our method requires a more computationally intensive process in training rather than the input data generation (which is rather trivial and inexpensive, as rightfully pointed out). Our focus was to demonstrate that the method could generalize even with a relatively small training dataset. That said, we agree that increasing the dataset size could improve the performance and provide additional insights. This could be useful for larger and more complex systems. We will explore this aspect in future work to assess the trade-off between dataset size, training time, and performance.\"}", "{\"title\": \"Reply to reviewer UGxj\", \"comment\": \"We appreciate the detailed feedback provided; while some concerns highlight valid areas for clarification and improvement, we believe other critiques may stem from misunderstandings or misalignment of expectations regarding the goals of our work.\\nOur focus is on solving the AC-OPF problem in a fully unsupervised manner using a combined GNN+PINN framework, differentiating this work from others relying on supervised learning techniques. We believe this addresses many of the comments regarding comparisons with prior works. While we have decided not to proceed further with the submission, we would still like to take this opportunity to clarify the points raised.\\n\\n1. While the referenced works address multiple generators per bus, they rely on supervised learning approaches or tackle entirely different problems (e.g., unit commitment). Our approach, by contrast, solves the AC-OPF problem in a fully unsupervised manner. Additionally, we stress that aggregating and disaggregating generators, as suggested by the reviewer, introduce inaccuracies. In practical settings, operators require detailed generator-level outputs\\u2014not aggregated approximations. Particularly, how do we deal with generators but with different costs?\\n\\n2. We disagree with this claim and invite the reviewer to refer to the references [1] [2]. Both references provide theoretical backing for the Augmented Lagrangian approach employed in our work. We want to ask if you could elaborate on why achieving inequality constraints equal to zero is not theoretically sound.\\n[1] Lu et Al., \\\"Physics-Informed Neural Networks with Hard Constraints for Inverse Design\\\"\\n[2] Toussaint M., \\\"Introduction to Optimization\\\"\\n\\n3. We acknowledge the lack of experiments on this aspect. In the next version of the manuscript, we will include a case study demonstrating the proposed model's ability to handle topology variations.\\n\\n4. While we agree that enforcing inequality constraints can sometimes be straightforward, we emphasize that prior fully unsupervised approaches have not achieved this in the AC-OPF context. Our work demonstrates zero inequality violations in a fully unsupervised setting, a non-trivial achievement. Additionally, the results on equality constraint violations are comparable to traditional solvers, as reported in Tables 1 and 2.\\n\\n5. 
We disagree for several reasons:\", \"realism_of_test_systems\": [\"Many real-world transmission systems, such as the Swiss transmission grid, comprise fewer than 200 nodes (e.g., 186 nodes in the Swiss system, 3657 substations for the European transmission systems https://arxiv.org/abs/1806.01613).\", \"A more extensive test case like the one presented in https://arxiv.org/abs/2301.08840 is synthetic and does not reflect the structure of actual grids.\"], \"comparison_with_the_references_reported\": \"- Unlike the supervised methods reported, our approach does not require labeled data from traditional solvers.\\n\\n- While we acknowledge the importance of scalability, this work focused on validating the framework on standard benchmarks.\\n\\n6. Training times are a standard limitation of any deep learning framework. Training is a one-time process; the benefit always lies in the fast inference times once the model is trained, which is crucial for real-time applications. \\n\\n7. Direct comparisons with supervised ML methodologies are not meaningful because our approach is fully unsupervised. Only one work was found in [3], which presented inequality constraint violations. Supervised approaches approximate solutions from traditional solvers, while our method directly solves the problem, finding alternative feasible solutions.\\n[3] https://arxiv.org/abs/2210.09277\\n\\n8. We respectfully disagree, as the parameter is defined in the text immediately preceding Eq. (5). \\n\\n9. We acknowledge that using a more realistic data distribution could improve the model's generalizability. However, our current approach (uniform distribution) is explicitly stated in Section 4.2. This method is commonly used in literature [3],[4].\\n[4] \\\"PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow.\\n\\n10. We agree that the phrasing could be clearer. Our claim specifically refers to inequality violations, and we will revise the future text to reflect this distinction explicitly.\\n \\n11. We appreciate the final suggestions and offer the following responses:\\n- 10,000-Bus Systems: Testing on such large systems is a valuable future direction, but we note that real-world grids often contain fewer nodes (e.g., Swiss grid, European Transmission System).\\n\\n- Realistic Data Distributions: This will be explored further in future work.\\n\\n- Topology Changes: Experiments will be included in the next version accounting for topology variations.\\n\\n- Training Times: While deep learning frameworks inherently require significant training times, we will explore optimizations.\\n\\n- Comparisons: Direct comparisons with ML methods are challenging due to different problem formulations (unsupervised vs. supervised).\"}", "{\"summary\": \"The paper presents a GNN-based approach to address the ACOPF problem, with the primary aim of reducing the computation time required by traditional interior-point solvers to enable faster ACOPF solutions. It leverages a physics-informed loss function incorporating penalties for both equality and inequality constraints. The grid model is represented as a graph, having real and reactive power demand, along with node type as inputs. 
Experimental results are provided, comparing the proposed approach with the MIPS solver across both single and multiple load scenarios.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"-- Attempting to solve a very relevant problem in the form of ACOPF.\\n-- Idea of using PINNs and GNN together is a strength.\", \"weaknesses\": \"\\u2014 The authors\\u2019 claim of being the first to use a PINN for solving the ACOPF problem must be clarified against the various existing works [1]-[2]. There are several works already using PINNs, in both supervised and unsupervised learning settings. There exists a substantial body of work under end-to-end learning that relates to this area, for example: [1]-[2].\\n\\n\\u2014 The issue mentioned above seems to stem from an incomplete literature review. For example, [1] presents a survey on end-to-end learning methods for constrained optimization, the general class of optimization problems to which ACOPF belongs. Many works listed there use PINNs in either supervised or unsupervised fashions. Additionally, several key references such as [2]-[4] are not discussed in the paper. Furthermore, the use of GNNs for the ACOPF problem is also not a unique contribution, as noted by the authors themselves. Authors should clearly discuss the limitations of these works and highlight how their work differs from them and provide advantages. \\n\\n\\u2014 Experimental studies: The results presented are limited in terms of the number of experiments and comparative analysis. No comparisons are made to existing ML methods. Authors must compare the results with various ML methods. For example, the cost difference with the proposed method is significantly higher than that achieved by various ML methods for ACOPF.\\n\\n\\n\\n[1] Kotary, James, Ferdinando Fioretto, Pascal van Hentenryck, and Bryan Wilder. \\\"End-to-End Constrained Optimization Learning: A Survey.\\\" In 30th International Joint Conference on Artificial Intelligence, IJCAI 2021, pp. 4475-4482. International Joint Conferences on Artificial Intelligence, 2021.\\n[2] Seonho Park and Pascal Van Hentenryck. Self-supervised primal-dual learning for constrained optimization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 4052\\u20134060, 2023.\\n[3] Ferdinando Fioretto, Terrence WK Mak, and Pascal Van Hentenryck. Predicting ac optimal power flows: Combining deep learning and lagrangian dual methods. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 630\\u2013637, 2020.\\n[4] Priya Donti, David Rolnick, and J Zico Kolter. Dc3: A learning method for optimization with hard constraints. In International Conference on Learning Representations, 2021.\", \"questions\": \"\\u2014 If the model is completely unsupervised, why are only 500 demand samples used?\\n\\n\\u2014 What are the specifications of the computing hardware cluster (ETH Euler Clusters), and how long are the models trained?\\n\\n\\u2014 How does the model shown in Figure 1 differ from existing PINN models? If it does not differ, appropriate citations should be provided.\\n\\n\\u2014 In the abstract, the authors highlight the limitation of interior-point methods in achieving global optimality. While this is correct due to the NP-hard nature of ACOPF, how does the current method address this limitation?\\n\\n--The authors mention that the MIPS equality loss reaches up to 20 MW in case of 118-Bus system Table 2. What does this imply? 
Does it mean that MIPS is unable to find a feasible solution? In general, interior point methods yield feasible but potentially suboptimal solutions. Authors should clarify and explain the reasons behind high loss via MIPS, and under what conditions MIPS is failing to converge to any feasible solution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper addresses the challenges of integrating intermittent renewable energy sources, by enhancing the Alternating Current Optimal Power Flow (AC-OPF) process. AC-OPF is a fundamental optimization problem in power systems that ensures safe and cost-effective grid operation. While traditionally solved using linearized approximations or non-linear solvers like the interior point method, these approaches face limitations, such as inaccuracies, convergence issues for large systems, and lack of global optimality.\\n\\nThe paper introduces PINCO, a physics-informed graph neural network (GNN) framework designed to solve AC-OPF efficiently. PINCO models the power grid as a graph, incorporating real and reactive power demand along with node types as inputs. It employs a physics-informed loss function that enforces equality and inequality constraints, enabling the method to produce accurate and feasible solutions. Unlike traditional methods, PINCO achieves results in a fraction of the computational time, generalizes effectively across diverse loading conditions, and handles multiple generators per bus with minimal adjustments to hyperparameters.\\n\\nExperimental results demonstrate that PINCO outperforms traditional solvers, such as the MIPS solver, in both single and multiple load scenarios. It functions as both a solver and a hybrid universal function approximator, ensuring compliance with inequality constraints while maintaining adaptability to various power systems. Overall, PINCO addresses the critical need for scalable, fast, and accurate solutions to AC-OPF, paving the way for better management of power grids in the context of the energy transition. It highlights the potential of machine learning, particularly GNNs, in optimizing complex, non-linear power system challenges.\\n\\nThe reviewers raise several major issues, the most important of which is the lack of a thorough and fair review of the existing literature. This leaves the paper's contribution unclear, given the existing literature. Furthermore, the reviewers criticize the novelty, the level of empirical evidence, and the actual improvements brought about by the proposed PINCO approach. Collectively, the concerns significantly outweigh the merits of the paper. \\n\\nThe authors are encouraged to consider the reviewers' comments to improve the contribution and presentation of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers have pointed out issues about the relevance to the existing literature, the contribution, interpretation of the observations, and the adequacy of the empirical studies. The authors, for the most part, have acknowledged these issues. While I can believe that the authors can revise the manuscript to address some of the issues, I believe the level of revision required exceeds what is customary and needs a thorough overhaul. 
This is in addition to the concerns about the novelty of the approach.\"}", "{\"summary\": [\"This paper introduces PINCO, a physics-informed graph neural network (GNN) designed to approximate solutions for the frequently employed Alternating Current Optimal Power Flow (AC-OPF) problem in power transmission networks. By leveraging existing independent tools like physics-informed NNs and GNNs, PINCO distinguishes itself from prior approaches by better adhering to the inequality constraints of the AC-OPF problem, thereby enhancing solution feasibility in a data-driven framework.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper is clear in its exposition and is well-structured that makes the contributions and methodology easy to follow.\", \"Prior work is clearly acknowledged, and the authors effectively situate their approach within the existing body of research\", \"The authors make effective use of established methodologies to tackle the AC-OPF problem through an unsupervised learning approach with a focus on ensuring feasible AC-OPF solutions\", \"Experimental results do highlight feasible AC-OPF solutions quantified via the Equality Loss at the expense of slightly higher cost difference (which is inevitable given the data-driven model).\", \"The authors also showcase the additional benefits such as faster inference times\"], \"weaknesses\": [\"*Novelty*: The novelty of this approach appears limited, as it primarily relies on well-established tools like GNNs and PINNs. The work could further benefit from clearer differentiation that goes beyond simply combining existing frameworks. For e.g., did the authors explore architectural modifications to the GNN tailored to the characteristics of power transmission networks, or a customized optimization technique designed for AC-OPF? This could significantly enhance the paper\\u2019s originality within the ML domain.\", \"*Scope*: The current approach also presents practical concerns in real-world applications. In practice, power system operators frequently adjust network topology due to maintenance, unplanned outages, and other operational needs, leading to variations in the grid adjacency matrix for a *given* power system test case. Based on the paper\\u2019s description, the proposed model would need re-training from scratch for each such new grid adjacency matrix, which presents a notable limitation. Did the authors explore methods to make their model more adaptable to changing topologies for a given test case (even if the cost of the OPF solution is relatively high)? Developing a more flexible framework that can accommodate such variations within a single model would make the approach far more practical than showcasing the performance results across multiple IEEE benchmark systems.\"], \"questions\": [\"Could the authors elaborate on the ML-specific challenges encountered when modeling more than one generator per electrical bus? Is there a fundamental challenge that has prevented past approaches from addressing such cases, or is this simply a modeling choice that the authors address by introducing artificial nodes?\", \"When solving the IEEE 118-bus problem, any node can be designated as the reference node by including the reference bus angle as an equality constraint in the OPF optimization problem. Could the authors clarify why phase angle comparisons are not included in their evaluation?\", \"The comparisons in Figure 3 could benefit from additional context. 
While the differences between the MIPS solver and the proposed solution are shown, it\\u2019s not immediately clear how these absolute differences inform the effectiveness of the model, given that the OPF objective is primarily centered on minimizing generator operating costs. Could the authors clarify the purpose of Figure 3 and what specific insights they intended to convey through these comparisons?\", \"To enhance interpretability, it might be useful to include visualizations that track the evolution of equality loss and relative cost differences throughout training. This could provide more relevant insights into how well the model aligns with the OPF objective over time.\", \"As noted in the weaknesses, the current formulation focuses on a *single* fixed network topology for each power system case. However, in real-world applications, system operators often encounter dynamic topologies due to changes in operational conditions. Formulating the OPF learning problem to accommodate time-varying grid topologies is more relevant and such a formulation would also give rise to ML modeling challenges that could inspire innovative techniques tailored to real-world needs, potentially making a stronger contribution to both the ML and power systems communities.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to reviewer CnJU\", \"comment\": \"We thank the reviewer for their thoughtful comments and feedback. While we have decided not to proceed further with the submission, we would still like to take this opportunity to clarify and address the points raised.\", \"novelty\": \"We appreciate the reviewer\\u2019s feedback regarding the novelty of the work. We acknowledge that GNNs and PINNs are well-established frameworks. However, our work is novel in the following ways:\\n\\n1.\\tApplication of the H-PINN framework to AC-OPF: To the best of our knowledge, we are the first to successfully apply the H-PINN framework to the AC-OPF problem and achieve zero constraint violations in an unsupervised setting. This type of PINN is designed for problems with hard constraints, such as the OPF, and to our knowledge, we are the first to apply this method successfully in this field.\\n\\n2.\\tInvestigation of GNN architecture: While we utilize GNNs, we carefully evaluated different architectures, concluding that the Graph Transformer architecture is the most effective for power systems. This choice stems from prior observations and empirical results, which we highlight in our work.\", \"scope\": \"We agree with the reviewer that handling topology variations is a critical consideration for real-world applications. While our current work does not explicitly address topology variations, we acknowledge this limitation and plan to address it in future work.\", \"specifically\": \"\\u2022\\tPlanned evaluation on topology changes: We aim to test our framework on datasets that include N-1 and N-k contingencies. This will help evaluate the ability of the GNN to generalize across different topologies.\\n\\n\\u2022\\tAdaptability of the GNN model: Incorporating such contingencies will allow us to explore methods to enhance the GNN's adaptability without requiring retraining for each new adjacency matrix.\", \"ml_specific_challenges_in_modeling_multiple_generators_per_electrical_bus\": \"The use of artificial nodes to represent multiple generators per bus is a modeling choice rather than a fundamental limitation. 
As noted by the reviewer, this adds complexity and requires the framework to learn a more granular solution. We observed this in prior work, where results for grids with multiple generators at the bus level were not observed.\", \"phase_angle_comparisons_in_the_evaluation\": \"We clarify that the absence of phase angle comparisons stems from the use of the MATPOWER data structure and the MIPS solver, which do not designate a slack node or assign a reference angle for the IEEE 118-bus system.\", \"context_in_figure_3\": \"The purpose of Figure 3 is to compare the predicted physical variables (e.g., voltages, powers) of the proposed framework (PINCO) with those of the MIPS solver. While generator operating cost is a central focus in OPF, respecting physical constraints such as grid properties is equally important. The figure primarily aims to illustrate how well the proposed method aligns with these physical constraints. We thank the reviewer for pointing out the need for additional context. We will revise the discussion of Figure 3 to clarify its intent and significance.\", \"visualizations_tracking_equality_loss_and_cost_differences_during_training\": \"We agree with the reviewer that including plots tracking the evolution of equality loss and relative cost differences throughout training would enhance interpretability. We will include these visualizations in the revised version of the paper to provide deeper insights into the model\\u2019s alignment with the OPF objective over time.\", \"formulating_opf_for_time_varying_topologies\": \"We acknowledge the importance of addressing dynamic topologies in real-world power systems and agree with the reviewer that this represents an exciting direction for future research. While our current work focuses on fixed topologies, we plan to extend the framework to accommodate time-varying grid topologies. This will involve evaluating the method on datasets that incorporate real-world operational scenarios, including topology changes and contingencies.\"}", "{\"comment\": \"1. My point is that i) other works have proposed architectures that consider multiple generators per bus, and ii) in the OPF setting, it is straightforward to aggregate/disaggregate multiple generators per bus.\", \"for_instance\": \"consider two generators with output $p_{1}, p_{2}$, minimum output $0$ and maximum output $p_{1}^{max}, p_{2}^{max}$ and cost $c_{1}, c_{2}$. An aggregate model will predict the aggregate power $\\\\bar{p} = p_{1} + p_{2}$, with corresponding maximum limit $\\\\bar{p}^{max} = p_{1}^{max} + p_{2}^{max}$. Given a predicted $\\\\bar{p}$, and assuming without loss of generality that $c_{1} < c_{2}$ it is then immediate to recover $p_{1} = min(\\\\bar{p}, p_{1}^{max})$ and $p_{2} = \\\\bar{p} - p_{1}$. More generally, the disaggregation step can be performed in parallel for every bus, and can support more general (e.g. quadratic of piece-wise linear) cost functions and arbitrary number of generators.\\n\\n2. Given inequality constraints of the form $g(x) \\\\leq 0$, and ignoring equality constraints for simplicity here, the augmented Lagrangian as stated in the paper has the form $L_{\\\\mu}(x, \\\\lambda) = J(x) + \\\\lambda \\\\max(0, g(x)) + \\\\mu g(x)^{2}$ where $\\\\mu > 0$ and $\\\\lambda$ is the Lagrange multiplier. Now consider the problem $\\\\min_{x \\\\leq 0}$, which yields $L_{\\\\mu}(x, \\\\lambda) = x + \\\\lambda \\\\max(0, x) + \\\\mu x^{2}$. 
The original problem is unbounded, yet the (augmented) Lagrangian problem always has a finite solution; this is contradictory because $\\\\min_{x} L_{\\\\mu}(x, \\\\lambda)$ should always be a lower bound on the optimal value of the original problem.\\n I think the original form of Eq. (5) was a typo, but I would encourage the authors to consider the augmented Lagrangian formulation used in, e.g., Lancelot.\\n (Note: I chose an unbounded linear problem for simplicity, other examples can be constructed that are bounded and have strictly convex objective).\\n\\n3. N/A\\n4. The point is that, for AC-OPF, what is hard is to satisfy _both_ equality and inequality constraints. The inequality constraints in AC-OPF involve only i) variable bounds on active/reactive generation and voltage magnitude, which can be enforced using a scaled/shifted sigmoid activation and ii) convex quadratic constraints $p_{ij}^{2} + q_{ij}^{2} \\\\leq S_{ij}$, which are also easy to enforce using, e.g., a re-parametrization or a closed-form re-scaling (see, e.g., papers like RAYEN).\\n As far as I can tell, the proposed methodology does not guarantee both equality and inequality constraint satisfaction, which I think is a substantial limitation of the method for AC-OPF problems. If the authors want to showcase their methodology on general, non-trivial inequality constraints, then I would recommend presenting experiments on multiple classes of problems (where inequality constraint satisfaction should be non-trivial).\\n\\n5. The PyPSA European grid is synthetically reconstructed (the original paper reports, for instance, that all line impedance information is artificial). The PEGASE project released snapshots of the european grid, and these snapshots have up to 13,000 buses (see https://arxiv.org/abs/1603.01533). In the same paper, several snapshots of the RTE system were released, all of which have over 6,000 buses.\\n\\n6. I agree that training time is usually considered a \\\"sink cost\\\" in the ML literature. However, if one can spend arbitrary time training a model, then it should be OK to also spend time generating data for supervised learning. Therefore, besides the final performance of the model, the comparison between supervised vs self-supervised approaches should include [data-generation time + training time] as a metric. As an example: for similar accuracy levels, spending {10hrs data generation + 2 hours supervised training} is better than {0hrs data generation + 16hrs self-supervised training}.\\n\\n7. Supervised and self-supervised methods both aim to address the same problem, and should therefore be compared. As mentioned above, the comparison should include not only overall accuracy but also data-generation, training and inference time.\"}", "{\"title\": \"Reply to reviewer gXzX\", \"comment\": \"We appreciate the time and effort the reviewer has dedicated to evaluating our work. However, we find it challenging to reconcile the overall tone of the feedback, which acknowledges the paper\\u2019s soundness, presentation, and contributions, with a relatively low rating of 3. We kindly ask the reviewer to clarify this discrepancy. While we have decided not to proceed further with the submission, we would still like to take this opportunity to clarify and address the points raised.\\n\\n1. The architecture employed in our work was carefully customized for the AC-OPF problem by leveraging graph transformer layers, as detailed in the Appendix. 
While we acknowledge that the individual components (GNNs and PINNs) are established techniques, their combination in this context is novel. Importantly, we emphasize that our contribution lies not in introducing unnecessary architectural complexity but in demonstrating that the proposed framework, built on graph-transformer-based GNNs and a PINN-based unsupervised learning approach, can achieve zero inequality violations and generalize to unseen scenarios. Moreover, to the best of our knowledge, this work is the first to successfully apply the H-PINN framework to the AC-OPF problem, a key innovation that we believe adds significant value. This type of PINN is designed for problems with hard constraints, such as the OPF, and to our knowledge, we are the first to apply this method successfully in this field.\\n\\n2. We thank the reviewer for this comment. While we agree that a comparison with another IP solver (e.g., IPOPT in addition to MIPS) could be added, our work is not aimed at addressing every modeling approach to the OPF problem. Instead, we focus on demonstrating that an unsupervised ML approach can solve the full OPF formulation without relying on relaxations or simplifications.\\n\\nQ1. Yes, our GNN+PINN architecture was specifically tailored for the AC-OPF problem by integrating graph transformer layers to effectively model the power grid's topology. This choice was motivated by prior work indicating the superior performance of transformer-based architectures in power systems. Furthermore, we are the first to apply the H-PINN method to solve the ACOPF problem. \\n\\nQ2. While we acknowledge the importance of comparisons, methods using GNNs or PINNs in isolation are fundamentally different from our combined approach. Specifically: A GNN-only solver would require supervised learning and depend on labeled datasets generated by traditional solvers, which our method aims to eliminate. A PINN-only solver would not benefit from the structural insights provided by the GNN architecture. The strength of our framework lies in the combination of these, enabling an unsupervised solution that generalizes well. Therefore, while we agree our work could benefit from more comparison with other AC-OPF solvers, it would not be possible to compare it fairly with other ML techniques. \\n\\nQ3. We believe the reviewer is requesting clarification on the necessity of using PINNs with hard constraints in the AC-OPF problem. To address this:\\n\\n- Necessity of Hard Constraints: The AC-OPF problem involves both equality and inequality constraints that are critical for ensuring physical feasibility. Using a PINN with hard constraints ensures that these conditions are met during optimization, making it well-suited for this application.\\n\\n- Improvement: Our method improves upon standard PINN approaches by integrating a graph-based architecture, which captures the underlying power grid topology more effectively. This, combined with the H-PINN framework, allows us to achieve zero inequality violations, a significant improvement over previous works.\\n\\nQ4. We appreciate this suggestion and acknowledge the importance of prior work on physics-informed GNN-based state estimation. However, it is important to note that the focus of this paper is on AC-OPF, not state estimation. While these domains are related, they address different challenges within power systems. 
If the reviewer could provide specific references for the PSSE work and elaborate on why they should be discussed in a work only focused on OPF, we would be happy to review and consider including them as part of our motivation in the revised manuscript.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply to Reviewer Nm3Q\", \"comment\": \"We appreciate the time and effort the reviewer has dedicated to evaluating our work. However, we find it difficult to reconcile the detailed evaluation\\u2014characterizing the paper as having fair soundness, good presentation, and a fair contribution\\u2014with a final rating of 1. We would kindly ask for clarification regarding this rating, as the provided feedback appears to reflect a more moderate assessment of the paper. While we have decided not to proceed further with the submission, we would still like to take this opportunity to clarify and address the points raised.\\n\\n1. All results presented in the paper explicitly demonstrate zero inequality constraint violations, which is a key indicator of feasibility. Additionally, the equality constraint violations achieved by our method are comparable to, and often lower than, those of traditional solvers like MIPS. This is evidenced in the numerical results presented in the manuscript, particularly in Table 2. The reviewer does not provide specific arguments or examples to support this statement. We kindly ask the reviewer to elaborate further so that we can better address their concerns.\\n\\n2. We believe the reviewer has misunderstood the intent of Figure 3. This figure does not show constraint violations but rather a comparison of results between our method and the traditional solver (MIPS). The ACOPF problem does not have a unique solution; different solvers may converge to different feasible solutions that represent distinct local minima. These differences often reflect trade-offs between objectives, such as operating cost and strict adherence to power flow equations. Figure 3 highlights such trade-offs. To avoid further confusion, we will revise the figure's description and include additional context in the next version of the manuscript to make its purpose clearer.\\n\\n3. The values reported in Table 2 represent the maximum equality constraint violations observed across a test set of 50 grids under varying load scenarios. These values do not imply that the problem is infeasible or that operators would reject the solutions outright.\\nFor comparison, the equality constraint violations of our method (16 MW) are smaller than those of MIPS (20 MW). Additionally, the violations observed are within acceptable tolerances for large-scale nonlinear optimization problems like ACOPF. To provide additional validation, we plan to include results obtained with other solvers (e.g., IPOPT) in the next version of the manuscript.\\n\\n4. As stated in the introduction, the ACOPF problem is of significant importance in power system operations and analysis due to its complexity and practical relevance. Our focus is on applying the proposed framework to this domain. While the method is general and could be extended to other optimization problems, these applications are beyond the scope of the current work.\\n\\n5. You raise an important point about comparisons with existing ML methods. 
However, our method's unsupervised nature makes direct comparisons with most ML approaches (which are supervised) not relevant, as those methods typically map traditional ACOPF solutions (obtained via solvers) and do not solve the problem directly. Instead, it will be more relevant to compare our method to traditional optimization solvers, including MIPS and IPOPT, as both solve the problem directly.\\n\\n6. Indeed, the neural network will always output a solution, even when the problem is infeasible. However, infeasibility can be detected by monitoring the violation of inequality constraints or if numerical instability during training is observed. If such violations exceed acceptable thresholds, it indicates that no feasible solution exists. In our experiments, we did not encounter cases of infeasibility, as all presented results respect inequality constraints. We will include infeasibility detection mechanisms in the future version of the manuscript.\"}", "{\"summary\": \"This paper presents PINCO, a neural architecture for solving the AC-OPF problem. PINCO is a combination of GNN + PINNs with hard constraints. Numerical tests show the usefulness of the proposed model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"A new model combining the GNNs and PINNs for solving AC-OPF\", \"Promising numerical performance relative to MIPS solver\"], \"weaknesses\": [\"Both GNNs and PINNs have been used for solving AC-OPF, and the paper directly combines the two without much novel design in the architecture.\", \"Insufficient comparison against other numerical solvers for AC-OPF of distribution systems including e.g., using GNNs, PINNs, SDP relaxation and SOCP relaxation, and linear-OPF initialized Newton-Raphson method in terms of performance.\"], \"questions\": \"Q1. Any specific design in the GNN+PINN architecture?\\nQ2. Solvers using GNNs or PINNs shall be included as baselines to show the usefulness of having both in a single architecture.\\nQ3. It is not clear why the proposed PINCO would improve upon the PINNs with hard constraints approach? This shall be clarified as well as evidenced using numerical comparison against a number of benchmark test systems.\\nQ4. 
Physics-informed GNN based power system state estimation (PSSE) was among the first use of NN architecure for power systems and they shall be discussed as a motivation of using deep learning for power systems.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a graph neural network architecture for solving AC Optimal Power Flow models.\\nThe paper describes a physics-informed loss for training the model in a self-supervised manner, and conducts numerical experiments on systems with up to 118 buses.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The proposed methodologies has two positive aspects\\n* it uses a graph-based architecture, which supports changes in grid topology (however this capability was not demonstrated)\\n* it uses a self-supervised training scheme that does not require the (expensive) use of an optimization solver to generate data\\n\\nHowever, neither of these are new, and the paper suffers from multiple limitations (see below).\", \"weaknesses\": [\"Limitations of the methodologies / experiments:\", \"The paper incorrectly states that previous works do not handle multiple generators per bus.\", \"_Confidence-Aware Graph Neural Networks for Learning Reliability Assessment Commitments_ (https://doi.org/10.1109/TPWRS.2023.3298735) consider graph neural network architectures for unit-commitment problems, and present an encoder-based mechanism for handling multiple generators per bus\", \"_CANOS: A Fast and Scalable Neural AC-OPF Solver Robust To N-1 Perturbations_ (https://arxiv.org/abs/2403.17660) does support such features:\", \"_We also use artificial edges to connect the subnodes to their respective bus (these do not model any physical equipment)._\", \"It should be noted that handling multiple generators at a single bus is not a hard task, as they can easily be aggregated into a single generator; an aggregated solution can then be disaggregated in closed-form.\", \"I believe the augmented Lagrangian in Eq. (5) is incorrect, specifically the term $\\\\mu_{k} g_{j}(w_{u}^{k})^2$.\", \"Assuming inequality constraints are of the form $g(w) \\\\leq 0$, this quadratic term effectively drives all inequality constraints towards being binding (i.e. $g(w) = 0$), which is not theoretically sound.\", \"The paper argues that the use of a graph neural network supports changes in the grid topology, but does not present experiments that corroborate this claim\", \"The paper claims that the proposed model can achieve \\\"zero violation of inequality constraints.\\\" This is not a substantial achievement, given that said inequality constraints are either simple variable bounds (which can be enforced via a bounded activation function) or simple l2 norm constraint (which can also be enforced by a simple scheme). The complexity of AC-OPF is in satisfying _both_ equality and inequality constraints. It is straightforward to enforced either set of constraints separately (i.e. only equality or only inequality constraints).\", \"Numerical experiments consider AC-OPF instances on systems with up to 118 buses. 
This is 100x smaller than real-life instances, which comprise (at least) in the order of 10,000 buses.\"], \"it_should_be_noted_that_several_works_have_trained_ml_models_to_predict_ac_opf_with_systems_of_that_scale\": [\"_Spatial Network Decomposition for Fast and Scalable AC-OPF Learning_ (https://doi.org/10.1109/TPWRS.2021.3124726)\", \"_Compact optimization learning for AC Optimal Power Flow_ (https://arxiv.org/abs/2301.08840): up to 30,000 buses\", \"_CANOS: A Fast and Scalable Neural AC-OPF Solver Robust To N-1 Perturbations_ (https://arxiv.org/abs/2403.17660): up to 10,000 buses, and also uses graph neural network architectures\", \"Training times appear to be very large (10 to 24 hours as reported in Section 5). Given that the paper only considers very small artificial systems, it is not clear that the proposed scheme would scale to real-life systems. A reasonable target would be at most 6-8hrs of training time on a systems with about 10,000 buses.\", \"Numerical experiments do not compare against any existing ML methodology for AC-OPF problems.\"], \"issues_about_the_paper\": [\"Not all notations are properly defined. For instance, the formalism of equality and inequality constraints is not defined before the Augmented Lagrangian presented in Eq. (5). Similarly, this equation uses a parameter $w^{k}_{u}$ that is not defined.\", \"the paper does not present the data distribution used to train the model. It should be noted that generating data by perturbing individual loads independently is not realistic, as it leads to a very narrow variability in total demand.\", \"Note that there are open-source datasets and data generators for AC-OPF problems, e.g.:\", \"OPFData (https://arxiv.org/html/2406.07234v1) is an open-source dataset of AC-OPF instances and their solutions\", \"OPFGenerator (https://github.com/AI4OPT/OPFGenerator) is an open-source instance generator for various OPF formulations\", \"In addition to the comments, above, the paper sometimes makes misleading claims. For instance, at lines 88-89, it is stated that \\\"INCO allows for solving the AC-OPF **without violations**.\\\" This claim is not substantiated by the results reported in Section 4 (Tables 1 and 2).\"], \"questions\": [\"In the absence of a fundamentally new architecture or training scheme, to be accepted, this paper should:\", \"conduct numerical experiments on systems with at least 10,000 buses...\", \"... with a more realistic data distribution than independently perturbing individual loads;\", \"demonstrate the proposed architecture's ability to handle varying topologies;\", \"achieve training times of at most 8hrs (this is the kind of timeline that would be needed for practical applications);\", \"achieve low constraint violation (in the order of $10^{-4}$ relative violation) and optimality gap (no more than $0.1$% worse than Ipopt);\", \"It should be noted that each of the above bullet has been addressed in the literature, at least individually.\", \"For instance, there exist works that scale to 10,000-bus systems, works that consider more realistic data distributions, works that achieve low constraint violations, etc...\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"It is interesting that the authors argue that benchmarking directly with an optimization problem solver is more appropriate than comparing with other ML methods. 
I would like to raise a few points for clarification and discussion:\\n\\n1. **Theoretical Guarantee of Feasibility**: \\n The authors claim that their method **directly solves the AC-OPF problem without relying on pre-computed solutions**, distinguishing it from supervised approaches that mimic a solver. If this is the case, I strongly believe that the method should provide **theoretical guarantees** on the feasibility of the AC-OPF problem concerning both equality and inequality constraints. A key advantage of solvers is their ability to provide feasible solutions consistently. If the claim is to compete directly with solvers, then a guarantee of feasibility must be explicitly demonstrated. \\n\\n2. **Feasibility Gap in MIPS Results**: \\n The authors acknowledge that the MIPS results exhibit unusually high feasibility gaps. This raises concerns about the interpretation of the results. Specifically, I find a 20 MW gap in equality constraints to be excessively large. For comparison, consider an approximation of the AC-OPF problem, such as polynomial regression, which achieves a gap of 16 MW. Should this be considered superior to MIPS or other solvers under the authors\\u2019 framework? The authors need to clarify how such discrepancies are addressed in their analysis. \\n\\n3. **Comparison with Self-Supervised Methods**: \\n While the authors argue that comparing with supervised methods is unfair, they should still present a comparison with self-supervised approaches like Park & Pascal. Additionally, Park & Pascal provide a detailed table on data generation times, and their DC3 code is publicly available. I suggest the authors include a runtime comparison using the same computational setup to provide a fair benchmark for evaluating results. \\n\\n4. **Test Case Size and Scope**: \\n The authors\\u2019 point about limited test cases in prior work requires further scrutiny. Both this paper and Park & Pascal evaluate up to 118-bus systems. However, numerous works in the power systems literature test much larger systems. For example, works like **DeepOPF** and others listed in [this wiki on ML-OPF](https://energy.hosting.acm.org/wiki/index.php/ML_OPF_wiki) evaluate significantly larger systems. The authors should clarify how their method scales to larger test cases compared to these existing works.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"I appreciate the authors' point-by-point response. However, I have the following reservations regarding the motivation and focus:\\n\\n1. In existing unsupervised or supervised learning methods, inequality constraint satisfaction has not been identified as a major limitation. For example, the supervised method DC3 achieves zero inequality gaps, and unsupervised methods, such as Park & Pascal (2023), report maximum inequality gaps of $10^{-3}$ for the 118-bus system. What are the numerical values of inequality gaps in the proposed method? Furthermore, do the authors have any theoretical guarantees on achieving zero inequality gaps? This clarification is necessary.\\n\\n2. The authors state in their rebuttal that training time ranges from **several hours to a few days**, depending on the system. The authors must clarify: **Why would one train an unsupervised model for days when similar performance could be achieved using some supervised data, and smaller total time?** For instance, generating 1,000 data points and training a DC3-type model would be much cheaper than training for several days. 
Additionally, it would be important to understand how the training times of the proposed method scale with system size.\\n\\n3. I respectfully disagree with the authors' point that comparisons with supervised models are not useful. State-of-the-art (SOTA) supervised models achieve similar levels of performance with much less total time (i.e., data generation time + training time). Thus, they must be included as benchmarks for a fair comparison.\\n\\n4. The authors' point regarding MIPS convergence is unclear. A 20 MW equality gap is not a minor violation within the solver's tolerance. Even when expressed in per-unit terms (on a 100 MVA base), a 0.2 per-unit equality gap is substantial. By contrast, most methods (both supervised and unsupervised) listed in Table 3 of Park & Pascal (2023) achieve equality gaps that are one order of magnitude smaller than 0.2. For optimization solvers, equality gaps should ideally be less than 0.0001 to ensure convergence.\\n\\n5. How does proposed method generalize over grid topologies: theoretically and empirically? \\n\\n6. When unlabeled data is inexpensive (e.g., simple load sampling), limiting the approach to 500 samples to avoid data-intensive methodologies seems counterintuitive. The primary limitation of supervised models arises when generating large amounts of labeled data by solving ACOPF instances, which is computationally expensive. However, when only unlabeled data (e.g., load points) is needed, why restrict the dataset to just 500 samples? The example cited from reference [4], which uses 1,000 data points, appears misplaced in this context because those are labeled data points generated by solving ACOPF, not sampled load points.\"}", "{\"summary\": \"This paper considers a physics-informed graph neural network, PINCO, to solve the AC-OPF problem. Using standard test cases, it shows that proposed method is faster than standard nonlinear solvers. Some generalization properties are discussed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"ACOPF is an important problem in grid operations.\", \"The paper is easy to read, and the results seem correct.\"], \"weaknesses\": [\"In my opinion the main weakness of the paper is that the result isn't that good. In the sense that the main goal of the paper is to find feasible solutions, and I don't think that goal is accomplished.\", \"In Figure 3, the violations of $P, \\\\theta, Q,$ and $V$ are shown. But the violations can be pretty large, especially for $Q$ and $V$. One of the reasons ACOPF is ran is to handle the V/Q constraints, and having a 10% error is not great.\", \"Table 2 reports the \\\"equality constraint\\\" violations when the input load scenarios changes. But this violation can be 16 MW, which is again not a small number. I don't think operators would be likely to accept these types of violations. Although the nonlinear solver can be slow, but the problem can be resolved when the load changes.\", \"The approaches in the paper is based on generic methods, and can be applied to any constrained optimization problem. It would be good to see if there is anything special about the ACOPF when applying the method.\"], \"questions\": \"There has been a lot of work on ACOPF (some are cited in the paper). The authors should compare against some of these, in addition to standard solvers.\\nWhat happens when the problem is infeasible? 
Presumably the neural network would still output something, but would one be able to tell that there is actually no solution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
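For reference on the augmented-Lagrangian point debated in the thread above (reviewer AoQn's counterexample and the authors' citation of [1], [2]): the standard treatment of inequality constraints $g_j(w) \le 0$ in methods such as LANCELOT is

$$
\mathcal{L}_{\mu}(w, \lambda) = J(w) + \frac{1}{2\mu} \sum_{j} \Big[ \max\big(0,\ \lambda_j + \mu\, g_j(w)\big)^{2} - \lambda_j^{2} \Big],
\qquad
\lambda_j \leftarrow \max\big(0,\ \lambda_j + \mu\, g_j(w)\big),
$$

with equality constraints omitted for brevity, as in the reviewer's example. Under this form the penalty on a strictly satisfied constraint vanishes once its multiplier reaches zero, rather than driving the constraint to be binding. The symbols $J$, $g_j$, $w$, $\mu$, $\lambda$ follow the notation used in the thread; this is a generic textbook form, not the paper's Eq. (5).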
BfH7rtJe1L
Can a Single Tree Outperform an Entire Forest?
[ "Qiangqiang Mao", "Jiayang Ren", "Yixiu Wang", "Chenxuanyin Zou", "Jingjing Zheng", "Yankai Cao" ]
The prevailing mindset is that a single decision tree underperforms random forests in testing accuracy, despite its advantages in interpretability and lightweight structure. This study challenges such a mindset by significantly improving the testing accuracy of an oblique regression tree through our gradient-based entire tree optimization framework, making its performance comparable to random forests. Our approach reformulates tree training as a differentiable unconstrained optimization task, employing a scaled sigmoid approximation strategy. To ameliorate numerical instability, we propose an algorithmic scheme that solves a sequence of increasingly accurate approximations. Additionally, a subtree polish strategy is implemented to reduce approximation errors accumulated across the tree. Extensive experiments on 16 datasets demonstrate that our optimized tree outperforms random forests by an average of 2.03% in testing accuracy.
[ "Differentiable decision tree", "Oblique decision tree", "Subtree-polish strategy", "Gradient-based optimization" ]
https://openreview.net/pdf?id=BfH7rtJe1L
https://openreview.net/forum?id=BfH7rtJe1L
ICLR.cc/2025/Conference
2025
{ "note_id": [ "hCrC2DoGIY", "c0isGywvRf", "5TmHgHKj1F", "5NoEkSWb52", "2c2jshhqUn" ], "note_type": [ "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1730530977419, 1730606000170, 1730691283205, 1731472938620, 1731868834755 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8371/Reviewer_tMMM" ], [ "ICLR.cc/2025/Conference/Submission8371/Reviewer_TDNM" ], [ "ICLR.cc/2025/Conference/Submission8371/Reviewer_FRUv" ], [ "ICLR.cc/2025/Conference/Submission8371/Area_Chair_wYFM" ], [ "ICLR.cc/2025/Conference/Submission8371/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a variation of soft (a.k.a.) probabilistic trees that are annealed to obtain \\\"close-to\\\" hard DTs. Additional heuristics, such as subtree polishing is proposed to further enhance the empirical performance. Experiments on small-to-medium scale datasets show competitive accuracy w.r.t. SOTA methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Clear and easy to follow algorithm with practical implementation;\", \"Experiments show convincing results across several baselines (including, surprisingly, random forests). Although the scale of datasets is a concern (see below);\"], \"weaknesses\": \"1. Novelty: I believe the paper combines together several ideas explored in soft DT literature:\\n- Gradients based learning via sigmoid approximation is well-known approach to train soft trees;\\n- Based on my understanding, iterative scaled sigmoid approximation is similar to annealing mechanism, which has been previously explored, e.g. in [1] (although this is not the earliest work). Note that [1] also discuss alternative function to sigmoid.\\n- Subtree polishing reminds weaker version of Tree Alternating Optimization [2,3]. \\nIt's good to know that combing these all works quite well, but authors are encouraged to expand on this context and specify differences, if any.\\n\\n2. Experiments are conducted on small-to-medium datasets where CART is already performing quite well and using RFs are not bringing significant benefits. Authors are encouraged to use more practical high-dimensional (and possibly large-scale) datasets. Moreover, I'd suggest adding [2,3] as additional non-greedy baseline. Note that [3] claim SOTA performance on some datasets used in this paper.\\n\\n\\n[1] Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, and Rahul Mazumder. ICML (2020). The Tree Ensemble Layer: Differentiability meets Conditional Computation.\\n\\n[2] M. Carreira-Perpinan and P. Tavallali, 2018. Alternating optimization of decision trees, with application to learning sparse oblique trees.\\n\\n[3] A. Zharmagambetov and M. Carreira-Perpinan, 2020. \\\"Smaller, More Accurate Regression Forests Using Tree Alternating Optimization\\\".\", \"questions\": [\"What kind methodology authors use to get the accuracy (%) for regression task? Wasn't able to find that in the paper.\", \"Author emphasize interpretability as important feature of their methods. Are there any evidences supporting this (other than reporting the structural characteristics, e.g. number of nodes, depth, etc.)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a simulated annealing technique to train a soft decision tree, which is ultimately converted into a hard decision tree. 
The proposed method optimizes soft trees with constant and linear leaves (i.e., linear models within the leaves). Additionally, the method uses several heuristics, such as multiple restarts, randomly selecting simulated annealing steps within a specified range, and optimizing subtrees of all decision nodes of the tree.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The authors attempt to address the important problem of optimizing hard decision trees, which is an NP-hard problem.\\n2. The method is evaluated across 16 datasets.\", \"weaknesses\": \"1. The use of simulated annealing for training soft decision trees lacks novelty (e.g., [1], [2])\\n2. The \\\"accuracy\\\" metric reported throughout the paper is not formally defined\\n3. The training time appears exponential in the number of parameters, due to the soft branches routing the entire dataset across all decision nodes ($2^{D + 1} - 1$). This is further exacerbated by the Polish Strategy, which applies the algorithm to all subtrees.\\n4. It is unclear how a complete binary oblique decision tree can be considered interpretable without incorporating regularization terms for sparsity in decision node weights.\\n\\n[1] Thomas M. Hehn, Julian F. P. Kooij, and Fred A. Hamprecht. 2020. End-to-End Learning of Decision Trees and Forests. Int. J. Computer Vision 128 (April 2020), 997\\u20131011.\\n\\n[2] Ajaykrishna Karthikeyan, Naman Jain, Nagarajan Natarajan, and Prateek Jain. 2023. Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent. Trans. Machine Learning Research (Sept. 2023).\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper challenges the common belief that single decision trees cannot match the testing accuracy of random forests, despite the former's advantages in interpretability and lightweight structure. The authors introduce a gradient-based entire tree optimization framework aimed at significantly improving the testing accuracy of oblique regression trees, bringing their performance level close to or even surpassing that of random forests.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tYour paper is commendably clear and easy to understand.\", \"weaknesses\": \"1.\\tThe title of the paper is ambiguous. The comparison between trees and random forests requires conditional constraints. Random forest is essentially an ensemble learning framework, in which decision trees are the base learners. Does your title mean that ensemble learning frameworks cannot work on the tree model you proposed? If you are comparing the proposed tree model with the original version of RF, the significance of this comparison is not significant.\\n2.\\tIn my opinion, this article should focus on the comparison with different oblique decision tree algorithms, especially adding the latest pruning techniques, as this method includes a pruning mechanism.\\n3.\\tThe method presented in this article lacks a theoretical guarantee. I believe that in a structured model such as a tree model, theoretical explanations would be more convincing than experimental results after parameter tuning.\\n4.\\tTree models still have different structures at the same depth, and Tree depth is not very convincing. 
It is recommended that the number of nodes in the tree model be displayed.\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"authors - reviewers discussion open until November 26 at 11:59pm AoE\", \"comment\": \"Dear authors & reviewers,\\n\\nThe reviews for the paper should be now visible to both authors and reviewers. The discussion is open until November 26 at 11:59pm AoE.\\n\\nYour AC\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
BehxaBcSML
CDQuant: Accurate Post-training Weight Quantization of LLMs using Greedy Coordinate Descent
[ "Pranav Ajit Nair", "Arun Suggala" ]
Large language models (LLMs) have recently demonstrated remarkable performance across diverse language tasks. But their deployment is often constrained by their substantial computational and storage requirements. Quantization has emerged as a key technique for addressing this challenge, enabling the compression of large models with minimal impact on performance. The recent GPTQ algorithm, a post-training quantization (PTQ) method, has proven highly effective for compressing LLMs, sparking a wave of research that leverages GPTQ as a core component. Recognizing the pivotal role of GPTQ in the PTQ landscape, we introduce CDQuant, a simple and scalable alternative to GPTQ with improved performance. CDQuant uses greedy coordinate descent to minimize the layer-wise reconstruction loss to achieve high-quality quantized weights. Our algorithm is easy to implement and scales efficiently to models with hundreds of billions of parameters. We perform extensive evaluation on Gemma, and PaLM2 model families, and demonstrate that CDQuant consistently outperforms GPTQ in 2-4 bit weight quantization. Moreover, CDQuant improves the performance of state-of-the-art PTQ techniques such as QuIP and FrameQuant when used as a replacement for their GPTQ component, resulting in further gains in quality.
[ "quantization", "large pre-trained models", "post-training", "coordinate descent" ]
Reject
https://openreview.net/pdf?id=BehxaBcSML
https://openreview.net/forum?id=BehxaBcSML
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vLkPQvs4je", "oU8ebIwZtP", "lT6g8bUoII", "kkZSsYqGbp", "jAHVvbcNeU", "iF7y3pR7ge", "cw7qHq4oFQ", "avRJDb4dx9", "a64duKzZ5v", "YX8XgJ2auu", "Xo3IUZV7vQ", "WyHnBdR3bO", "SqfhPUixSZ", "QYLuAMMtoH", "MiCuVaukfO", "HoKvpOC6qL", "E999jjXas0", "8BcLPlc4Se" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731825114058, 1729175255640, 1731825176955, 1731825001328, 1730676353796, 1737524008619, 1732564900494, 1732793584878, 1732695559750, 1731824832114, 1732597671076, 1732618428754, 1734916631878, 1731824818446, 1730135926618, 1732792705409, 1733067236530, 1732696242748 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_AoQn" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_8oR7" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_8oR7" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_AoQn" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_AoQn" ], [ "ICLR.cc/2025/Conference/Submission9834/Area_Chair_FkZL" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_9L8s" ], [ "ICLR.cc/2025/Conference/Submission9834/Authors" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_AoQn" ], [ "ICLR.cc/2025/Conference/Submission9834/Reviewer_AoQn" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their feedback. Below, we address some of the key concerns raised.\\n\\n**Comparisons to GPTQ:** As pointed out in our paper, the arbitrary order used by GPTQ usually leads to drop in quality, especially in extreme bit quantization. Our work tries to address this drawback of GPTQ via efficient greedy coordinate descent strategies. As pointed out by the reviewer, while OBQ also performs CD, its update rule is drastically different from ours and is very inefficient. This is because OBQ involves updating all the un-quantized coordinates in each iteration, which is very inefficient on accelerators such as GPUs, TPUs. As a result of this, the OBQ paper only managed to show experiments on ResNet models with a few million parameters. Our work addresses this drawback by providing efficient coordinate descent strategies which can scale to models with 100B parameters. Our algorithm is simple and only involves updating a single coordinate in each step. \\n\\n**Computational cost:** We would like to first note that on a multiple GPU setting, GPTQ and the CD variant of CDQuant have comparable runtimes. For FFN1 (FFN2) quantization, CD is 5x (2x) slower than GPTQ on 8 H100 GPUs. For the single GPU setting, the gap between their runtimes is more pronounced. However, as shown in Table 22 in Appendix F.4, CDQuant, even with 1/8th of the iterations, achieves better perplexity than GPTQ. 
Also, in Figure 1 in Appendix F.4 (please look at the updated paper), CD\\u2019s L2 activation reconstruction error converges in roughly 1/8th of the iterations across several settings. This reduction in iterations makes the runtime of CDQuant comparable to GPTQ. Moreover, in Table 21, in Appendix F.4, we show that replacing BCD with CD for FFN2 quantization does not lead to a significant performance drop. Since quantizing FFN2 is a bottleneck for BCD, this substitution significantly speeds it up. With the above two modifications, BCD can be considerably sped up, and CD could even run faster than GPTQ. As future work, we will work on writing kernels to implement the gathers and scatters in our algorithm efficiently.\\n\\n**Gains on model sizes:** We would like to highlight that our technique indeed shows significant gains on extreme quantization of large models (INT2 INT3 quantization). For instance, for INT2 quantization of PaLM2-Otter, we see a 10% improvement over GPTQ. Similarly, for INT2 quantization of Gemma2-27B and PaLM2\\u2013Bison, we see 3-5% gains. \\n\\nFurthermore, as we showed in our paper, CDQuant acts as an excellent replacement for GPTQ in algorithms that use GPTQ as a subroutine (see Table 3). For instance, replacing GPTQ with CDQuant in QuIP, FrameQuant and AWQ showed 3-5% performance gains even on larger models such as Gemma2-27B. We would like to note that these results are highly non-trivial as (a) we are working with SOTA algorithms, and (b) providing a simple general purpose, plug-and-play approach to boost any SOTA technique that relies on GPTQ.\\n\\n**Experiments on Llama:** Unfortunately, due to certain organizational restrictions, we cannot use Llama models for research as that would violate the terms of use. To compensate for that, we run experiments with the PaLM2, Gemma-1 and Gemma-2 family of models. In Riviere et al., 2024 [1], Gemma-2 27B has been shown to have comparable performance to Llama-3 70B. Furthermore, PaLM2 family models are production quality models which are usually very hard to compress. We thus believe PaLM2, Gemma-1 and Gemma-2 to be reasonable replacements for Llama-2 and Llama-2\\n\\n**OWC initialization:** We would like to highlight that, for a fair comparison, we used OWC initialization for both GPTQ and CDQuant. Thus, the performance gains observed in our experiments are due to the proposed coordinate descent algorithms.\\n\\n**Using 1280 data samples instead of 128:** We would like to note that our method works perfectly fine with 128 samples. We used 1280 samples for both GPTQ and CDQuant as it increased the Hessian approximation and improved the quantization performance for both CDQuant and GPTQ (this is especially the case with larger models with large FFN dimensions). CDQuant and GPTQ both leverage second-order statistics to quantize the LLMs and thus benefit from more examples. Furthermore, using 1280 data samples doesn\\u2019t add any significant computational overhead over using 128 samples and we do not see a strong reason for restricting ourselves to 128 samples. Ultimately, we care more about quality and thus went with 1280 samples in our work.\\n\\n[1] Riviere et al., 2024, Gemma 2: Improving Open Language Models at a Practical Size.\"}", "{\"summary\": \"The authors present CDQuant, an alternative to GPTQ that offers better performance in LLM quantization. CDQuant uses greedy coordinate descent to minimize layer-wise reconstruction loss, resulting in high-quality quantized weights. 
Evaluations on models such as Gemma and PaLM2 show that CDQuant outperforms GPTQ in 2-4 bit quantization. Furthermore, when integrated into other PTQ techniques like QuIP and FrameQuant, CDQuant enhances their performance as well.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents a set of improvements to GPTQ and introduces CDQuant, a straightforward and efficient alternative.\\n\\n2. CDQuant demonstrates versatility and effectiveness when integrated with other PTQ techniques, such as AWQ, QuIP, and FrameQuant.\", \"weaknesses\": \"1. Actually, GPTQ has already identified the issue that OBQ updates coordinates in a greedy manner and introduced the Arbitrary Order Insight, which significantly reduces quantization time. Although CDQuant makes several efforts to accelerate quantization speed, it remains noticeably slower than GPTQ.\\n\\n2. The experimental results suggest that, compared to GPTQ, CDQuant demonstrates certain advantages only on smaller models with relatively weaker capabilities, such as Gemma-1 7B and PaLM2-Gecko. However, it does not show clear advantages on models like Gemma-2, PaLM2-Otter & Bison, and it also lacks experiments on the LLama2 & 3 families, which are more commonly used in mainstream quantization research (e.g., GPTQ, QuIP, AWQ, etc.). Given that CDQuant uses more calibration data (1280 V.S. 128) and adopts OWC method, the benefits of using the greedy coordinate descent method remain uncertain.\", \"questions\": \"1. Since the LLama family models are more commonly used in LLM quantization research, could you provide CDQuant's results on LLama 2 and LLama 3 for a more intuitive comparison?\\n\\n2. Currently, rotation-based quantization methods, such as QuaRot and SpinQuant, effectively eliminate outliers and demonstrate better performance than previous methods like AWQ and QuIP. Given that both QuaRot and SpinQuant use GPTQ for weight quantization, could you assess the versatility and effectiveness of CDQuant when integrated with these methods?\\n\\n3. CDQuant uses 1280 samples for calibration, while 128 is more commonly used in other methods (e.g., GPTQ, AWQ, etc.). However, the authors did not explain this choice. Could it be because CDQuant leverages second-order information for error compensation and may overfit the calibration set during reconstruction?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"**Comparison with recent PTQ techniques:** As we already demonstrated in our paper, CDQuant significantly outperforms GPTQ when used along with two other SOTA rotations algorithms, namely QuIP and FrameQuant. We expect the same behaviour to hold with both SpinQuant and QuaRot and other weight+activation quantization techniques as well. This is mainly because, once the appropriate rotation is applied to weights and activations, quantization of weights and activations is treated independently and performed using minmax or GPTQ or other popular techniques. We can simply replace these components with CDQuant for weight quantization.\\n\\nTo demonstrate this, we provide W4A4 and W2A4 quantization results with QuaRot. Here, we replace the GPTQ component for weight quantization with CDQuant. We observe that CDQuant consistently outperforms GPTQ. The results become more noticeable for 2-bits. We expect a similar trend to hold for SpinQuant as well. 
\\n\\n| Gemma-2 9B | | C4 perplexity |\\n|------------------|-----|---------------|\\n| W16A16 | | 10.683 |\\n| W16A4 | | 11.01 |\\n| W2A4 | QuaRoT + GPTQ | 13.571 |\\n| | QuaRoT + CD | 13.311 |\\n| | QuaRoT + BCD | 13.196 |\\n| W4A4 | QuaRoT + GPTQ | 11.147 |\\n| | QuaRoT + CD | 11.139 |\\n| | QuaRoT + BCD | 11.122 |\\n\\nWe hope we addressed the reviewer\\u2019s concerns, and are happy to answer any further questions. We would really appreciate it if the reviewer reevaluates our work.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"**Comparison with weight+activation quantization techniques:** As we already demonstrated in our paper, CDQuant significantly outperforms GPTQ when used along with two other SOTA rotations algorithms, namely QuIP and FrameQuant. We expect the same behaviour to hold with both SpinQuant and QuaRot and other weight+activation quantization techniques as well. This is mainly because, once the appropriate rotation is applied to weights and activations, quantization of weights and activations is treated independently and performed using minmax or GPTQ or other popular techniques. We can simply replace these components with CDQuant for weight quantization.\\n\\nTo demonstrate this, we provide W4A4 and W2A4 quantization results with QuaRot. Here, we replace the GPTQ component for weight quantization with CDQuant. We observe that CDQuant consistently outperforms GPTQ. The results become more noticeable for 2-bits. We expect a similar trend to hold for SpinQuant as well. \\n\\n| Gemma-2 9B | | C4 perplexity |\\n|------------------|-----|---------------|\\n| W16A16 | | 10.683 |\\n| W16A4 | | 11.01 |\\n| W2A4 | QuaRoT + GPTQ | 13.571 |\\n| | QuaRoT + CD | 13.311 |\\n| | QuaRoT + BCD | 13.196 |\\n| W4A4 | QuaRoT + GPTQ | 11.147 |\\n| | QuaRoT + CD | 11.139 |\\n| | QuaRoT + BCD | 11.122 |\\n\\n**Comparison with other weight+activation quantization techniques:** CDQuant can be used to improve AffineQuant as it is again orthogonal to CDQuant. This technique uses Asymmetric MinMax quantization to quantize the rescaled weights. We expect CDQuant to work as a plug-and-play replacement for Asymmetric MinMax quantization and improve AffineQuant\\u2019s performance. Similarly, OmniQuant can be improved by replacing the Asymmetric MinMax quantizer with CDQuant.\\n\\nWe hope we addressed the reviewer\\u2019s concerns, and are happy to answer any further questions. We would really appreciate it if the reviewer reevaluates our work.\"}", "{\"summary\": \"The paper presents CDQuant, a quantization algorithm that improves the efficiency LLM through a greedy coordinate descent approach. CDQuant is an incremental work following GPTQ, a post-training quantization method by minimizing layer-wise reconstruction loss. The authors evaluate CDQuant across multiple model families, including Gemma and PaLM2, where it shows slight performance improvements over GPTQ in 2-4 bit weight-only quantization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"CDQuant proposes a unique greedy coordinate descent approach for LLM quantization, offering an alternative to the cyclic method used in GPTQ.\\n\\nThe experiments are comprehensive, covering multiple models and quantization settings.\\n\\nThe algorithmic description of CDQuant, as well as the variants, are clearly explained.\", \"weaknesses\": \"High computational cost: CDQuant\\u2019s runtime is much higher than GPTQ, especially on larger layers (e.g., FFN2). 
While the authors suggest mitigation strategies, further optimizations could enhance practicality.\", \"incremental_improvement\": \"the idea is an incremental change to GPTQ, the improvement is also marginal compared with GPTQ, especially in the W4A16 setting which is the most practical use case.\", \"questions\": \"How does CDQuant perform in layers with extreme outliers compared to other approaches, like SqueezeLLM, which address outlier weights?\\n\\nWhat modifications would be necessary to apply CDQuant to Quantization-Aware Training (QAT)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"the high computational cost is still concerning to me. I would keep the original score.\"}", "{\"comment\": \"We also request the reviewer to take a look at some of the recent papers on PTQ techniques. For instance, Table 1 in AffineQuant paper (https://openreview.net/pdf?id=of2rhALq8l). The improvements over baselines diminishes as we increase the weight precision. This is expected because the gap between bf16 and baseline INT4 performance diminishes. The real value of these techniques shows up at extreme quantization (INT2, iNT3) and/or smaller models.\\n\\nSimilarly, please look at Table 1 in the SqueezeLLM paper: https://arxiv.org/pdf/2306.07629, and Table 3 in MagR paper: https://arxiv.org/pdf/2406.00800.\"}", "{\"comment\": \"\\\"But we are unsure why this should make our technique less novel compared to GPTQ.\\\"\\n\\n---\\n\\nGPTQ, introduced two years ago, demonstrated the ability to quantize GPT models with 175 billion parameters in approximately four GPU hours, reducing the bitwidth of weights to as low as 3 or 4 bits with negligible accuracy degradation compared to the uncompressed baseline. This breakthrough significantly advanced the quantization accuracy and efficiency of LLMs, enabling\\u2014for the first time\\u2014the execution of a 175 billion-parameter model on a single GPU for generative inference.\\n\\nIn comparison, I do not observe a comparable level of improvement and contribution from CDQuant.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their feedback. Below, we address some of the key concerns raised.\\n\\n**Computational cost:** We would like to first note that on a multiple GPU setting, GPTQ and the CD variant of CDQuant have comparable runtimes. For FFN1 (FFN2) quantization, CD is 5x (2x) slower than GPTQ on 8 H100 GPUs. For the single GPU setting, the gap between their runtimes is more pronounced. However, as shown in Table 22 in Appendix F.4, CDQuant, even with 1/8th of the iterations, achieves better perplexity than GPTQ. Also, in Figure 1 in Appendix F.4 (please look at the updated paper), CD\\u2019s L2 activation reconstruction error converges in roughly 1/8th of the iterations across several settings. This reduction in iterations makes the runtime of CDQuant comparable to GPTQ. Moreover, in Table 21, in Appendix F.4, we show that replacing BCD with CD for FFN2 quantization does not lead to a significant performance drop. Since quantizing FFN2 is a bottleneck for BCD, this substitution significantly speeds it up. With the above two modifications, BCD can be considerably sped up, and CD could even run faster than GPTQ. 
As future work, we will work on writing kernels to implement the gathers and scatters in our algorithm efficiently.\\n\\n**Experiments on Llama:** Unfortunately, due to certain organizational restrictions, we cannot use Llama models for research as that would violate the terms of use. To compensate for that, we run experiments with the PaLM2, Gemma-1 and Gemma-2 family of models. In Riviere et al., 2024 [1], Gemma-2 27B has been shown to have comparable performance to Llama-3 70B. Furthermore, PaLM2 family models are production quality models which are usually very hard to compress. We thus believe PaLM2, Gemma-1 and Gemma-2 to be reasonable replacements for Llama-2 and Llama-2\\n\\n**Convergence guarantees:** We would like to note that the least squares minimization problem that we are solving at each layer is NP hard (see line 156 in our paper). This is a combinatorial optimization problem, and takes exponential (in dimension) time to converge to global optimum. However, one can easily show that our algorithm converges to a special form of local optimum where modifying a single coordinate doesn\\u2019t reduce the loss value. The same can not be said of GPTQ which cycles through each coordinate only once. This is also the reason why we see an improvement in performance over GPTQ.\\n\\n**Performance on models with trillions of parameters:** Unfortunately, we do not have access to open source models with trillions of parameters or resources to run evals on them (getting perplexity evals and few shot evals requires a lot of resources). That being said, we believe similar performance trends observed in our paper will hold even for models with trillions of parameters. We base this claim on two main reasons: (1) SOTA trillion parameter models are MoE models, with each expert roughly the size of the largest model experimented in our paper, (2) the core problem we are solving (quantized least squares problem) remains the same across models. Consequently, we believe the gains we see here translate to bigger models.\\n\\n**Comparison between AWQ and OWC:** Thanks for the suggestion. Below we compare OWC and OWC-CD (the two initialization strategies introduced in our paper) and find them to outperform AWQ. The following table shows the perplexity numbers for Gemma-2 9B quantization using AWQ, OWC, OWC-CD. As can be seen, OWC, OWC-CD provide better initializations than AWQ (OWC-CD has blanks because it is designed only for sub-channel quantization).\\n\\n| Gemma-2 9B | AWQ | OWC | OWC + CD |\\n|------------------|-------|-------|----------|\\n| W3A16 | 14.02 | 11.666| - |\\n| W3g128A16 | 11.449| 11.338| 11.22 |\\n| W4A16 | 11.101| 10.929| - |\\n| W4g128A16 | 10.815| 10.82 | 10.786 |\\n\\n[1] Riviere et al., 2024, Gemma 2: Improving Open Language Models at a Practical Size.\"}", "{\"comment\": \"We thank the reviewer for their prompt reply. As mentioned in our paper (as well as in the rebuttal), we provide two approaches to reduce the computational cost of CDQuant:\\n\\n - **running CD and BCD for fewer iterations**: As shown in Figure 1 in the updated draft, both these algorithms converge in very few iterations. Relying on this insight already gives us 8x reduction in computational cost without effecting the quality (please refer to table 22 for quality numbers). This makes CD as computationally efficient as GPTQ.\\n - **replacing BCD with CD for FFN2 quantization**: BCD spends most of its time quantizing FFN2. To speedup BCD, one can rely on CD for quantizing FFN2. 
Please refer to Table 21 for quality numbers using this approach.\\n\\nThe empirical results clearly show that both these approaches significantly speedup our algorithms without hurting the quality. Please let us know, if you have any further concerns or questions or if the above arguments are not convincing. We will be happy to allay your concerns.\"}", "{\"comment\": \"Thanks for the authors' response.\\n\\nGiven the novelty of the method, which extends GPTQ by incorporating greedy search, and the limited improvement in accuracy, I will keep the original score.\"}", "{\"metareview\": \"It received mixed ratings of 6,5,5.\", \"the_reviewers_pointed_out_several_weaknesses_including\": \"CDQuant suffers from high computational cost, particularly on larger models like Gemma-2 27B, where its runtime can be up to 10\\u00d7 slower than GPTQ, especially for layers like FFN2. Despite efforts to optimize with Block Coordinate Descent (BCD), it remains significantly slower than GPTQ, even for smaller models. The improvements over GPTQ are marginal, particularly in the practical W4A16 setting. Additionally, CDQuant's advantage is mostly seen in smaller models with weaker capabilities, and it lacks experiments on more commonly used models like LLaMA 2 and 3, which limits its broader applicability. The reliance on more calibration data and the OWC method further raises questions about the efficiency and benefits of its approach compared to existing methods.\\n\\nThey also mentioned some positive points including the positive results compared to the baselines. \\n\\nIn the end, after the discussion, the reviewers still have the following concerns: \\nThe main weaknesses of CDQuant lie in its limited theoretical innovation and insufficient improvements over GPTQ. While it incorporates a greedy search approach, this integration does not address key challenges in model quantization, such as the generalization issues caused by the Hessian matrix. Additionally, CDQuant's performance improvements are modest, and it fails to offer advantages over GPTQ in practical settings, particularly under the w4a16g128 configuration. The algorithm's high computational cost and slower efficiency further diminish its potential to replace GPTQ.\\n\\nSince in the end, none of the reviewers is sufficiently positive about the paper, it will not be accepted.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers interacted in the discussion. We agree with the authors that the interaction was very limited and short for 2 of the reviewers, but unfortunately, we cannot ignore the negative points those reviewers have raised.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their feedback. Below, we address some of the key concerns raised.\\n\\n**Computational cost:** We would like to first note that on a multiple GPU setting, GPTQ and the CD variant of CDQuant have comparable runtimes. For FFN1 (FFN2) quantization, CD is 5x (2x) slower than GPTQ on 8 H100 GPUs. For the single GPU setting, the gap between their runtimes is more pronounced. However, as shown in Table 22 in Appendix F.4, CDQuant, even with 1/8th of the iterations, achieves better perplexity than GPTQ. Also, in Figure 1 in Appendix F.4 (please look at the updated paper), CD\\u2019s L2 activation reconstruction error converges in roughly 1/8th of the iterations across several settings. This reduction in iterations makes the runtime of CDQuant comparable to GPTQ. 
Moreover, in Table 21, in Appendix F.4, we show that replacing BCD with CD for FFN2 quantization does not lead to a significant performance drop. Since quantizing FFN2 is a bottleneck for BCD, this substitution significantly speeds it up. With the above two modifications, BCD can be considerably sped up, and CD could even run faster than GPTQ. As future work, we will work on writing kernels to implement the gathers and scatters in our algorithm efficiently.\\n \\n**Improvements from CDQuant:** While we agree with the reviewer that improvements are less pronounced compared to GPTQ for W4A16, the gains are significant for extreme quantization (2 or 3 bit quantization; 5-10% gains in a number of settings). These extreme quantization settings have received significant attention of late and are crucial for widespread deployment of LLMs. We believe our technique can provide value in such settings. Additionally, as demonstrated in the paper, CDQuant can be used as a plug-and-play replacement for GPTQ in numerous state-of-the-art PTQ methods like QuIP, FrameQuant, QuaRot, and AWQ, and provides an easy way to boost their performance. We believe this last property makes our technique especially valuable in practice. \\n\\n**Using CDQuant on layers with extreme outliers:** In the presence of extreme outliers, we observed Hessian eigenvalue clipping helped us effectively handle the outliers (it helped boost GPTQ performance as well). The Hessian eigenvalue clipping is reminiscent of rescaling done in AWQ. This is also similar in spirit to methods like QuIP and FrameQuant that use rotation to handle extreme outliers. SqueezeLLM uses non-uniform quantization and a sparse-and-dense quantization scheme, both orthogonal to CDQuant. We expect CDQuant to benefit from these methods as well.\\n\\n**Applying CDQuant to QAT:** This is an interesting suggestion that we are planning to explore in the future. We believe coordinate descent based techniques could be a very good alternative to Straight Through Estimation (STE). There are various ways in which one could use CD for QAT. For instance, after each step of SGD (in full precision), one may take the full precision weights and use our technique to obtain quantized weights. On a related note, we would like to highlight that a recent NeurIPS 2024 oral paper by Malinovskii et al., [1] explores the use of greedy coordinate descent as an alternative to STE for vector quantization.\\n\\nWe hope we addressed the reviewer\\u2019s concerns, and are happy to answer any further questions. We would really appreciate it if the reviewer reevaluates our work.\\n\\n[1] Malinovskii et al., 2024, PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression.\"}", "{\"summary\": \"The paper presents CDQuant, a new quantization method for large language models that improves on GPTQ by using greedy coordinate descent to achieve better accuracy in low-bit quantization. It outperforms GPTQ in experiments on models like Gemma and PaLM2, providing lower perplexity and enhanced performance. CDQuant can also seamlessly replace GPTQ in other quantization techniques, making it a versatile and effective tool for compressing large models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. CDQuant consistently outperforms GPTQ in low-bit (2-4 bit) quantization, leading to better quantization quality and lower perplexity across various models.\\n2. 
The greedy coordinate descent approach in CDQuant provides better optimization of the layer-wise objective compared to GPTQ, leading to more efficient quantization. \\n3. CDQuant demonstrates significant performance improvements, especially on smaller models like PaLM2-Gecko and Gemma-1 7B, where it reduces perplexity by up to 10%.\", \"weaknesses\": \"1. CDQuant, particularly with Block Coordinate Descent (BCD), is significantly slower than GPTQ, especially for large models. The paper presents runtime comparisons (Table 5) showing that CDQuant is about 5\\u00d7 slower than GPTQ for FFN1 quantization and up to 10\\u00d7 slower for FFN2 quantization on models like Gemma-2 27B. BCD is even slower, with runtimes an order of magnitude higher than GPTQ in some cases.\\n2. For larger models, such as Gemma-2 27B, the computational cost of CDQuant becomes prohibitive. The time required to quantize the FFN2 layer, which has a larger quantization axis, is significantly higher than for other layers. This is demonstrated in Table 20, where FFN2 quantization takes up to 10\\u00d7 longer than FFN1 quantization.\\n3. Experiments based on a series of new models should be included in the paper. Would the llama series models, such as llama3, also be suitable for CDQuant?\", \"questions\": \"1. What are the theoretical guarantees of CDQuant's convergence? Does the greedy coordinate descent method guarantee convergence to a global minimum, or is it more prone to local minima, especially in high-dimensional spaces like those of LLMs?\\n2. How does CDQuant perform on models larger than those tested (e.g., models with trillions of parameters)? The paper demonstrates results on models with up to tens of billions of parameters (like Gemma-2 27B). How would CDQuant scale to models with trillions of parameters?\\n3. Why was MinMax quantization chosen as the baseline for comparison? The paper mentions that CDQuant uses Optimal Weight Clipping (OWC) for initialization, which performs better than MinMax quantization. Why not compare CDQuant's initialization with other advanced techniques like SmoothQuant or AWQ?\\n4. How does CDQuant perform when both weights and activations are quantized? The paper primarily focuses on weight quantization. How does CDQuant perform when both weights and activations are quantized, especially in scenarios like W4A4 quantization? How do the results compare with those of some recent quantization papers [1-3]?\\n\\n[1] OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models. ICLR 2024.\\n\\n[2] AffineQuant: Affine Transformation Quantization for Large Language Models. ICLR 2024.\\n\\n[3] QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs. Arxiv 2024.\\n\\n[4] SpinQuant: LLM quantization with learned rotations. Arxiv 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"\\\"As shown in Table 7 and Table 8, under the configurations of w3a16g128 and w4a16g128, the accuracy of GPTQ, CD, and BCD is comparable.\\\"\\n\\n--------------------------------------------------------------------------------------\\n\\nIn Tables 7 and 8, the headroom for improvement was very little, because bf16 has similar performance as w4a16g128. For instance, for Gemma-2 27B, the avg bf16 accuracy is 67.55 and the avg accuracy with w4a16g128 is 67.45. So it's impossible to expect any technique to show improvements in such settings. 
\\n\\nThat being said, for w3a16g128, we do show some improvements. For instance, for Gemma-2 27B, GPTQ has 66.4 avg accuracy, our technique has 66.87 and topline bf16 is 67.55. So, our technique bridges the gap between GPTQ and baseline. We believe, the results should be interpreted with topline in consideration. \\n\\nFinally, we would like to note that for settings where topline is significantly better than GPTQ, we do see significant gains. For instance, for Gemma-1 7B quantization using w3a16, GPTQ has avg 40.87 accuracy, CD has 50.51, and topline bf16 has 58.35. Similarly, for PaLM2-Gecko quantization using w3a16, GPTQ has avg 39.39 accuracy, CD has 41.12, and topline bf16 has 43.84 accuracy.\\n\\nPlease also take a look at our INT2 quantization results where we have significant gains over GPTQ.\"}", "{\"comment\": \"The authors highlight that CDQuant is intended as a replacement for GPTQ. While CDQuant demonstrates certain performance improvements over GPTQ in the w2 and w3 setting, the results have a considerable gap from floating-point accuracy and practical requirements.\\n\\nOn the other hand, as they pointed out, under the w4a16g128 configuration (where CDQuant offers no advantage) , GPTQ achieves results very close to floating-point accuracy and satisfies the requirements of practical applications.\\n\\nI have always believed that there is room for improvement in GPTQ, and further advancements are necessary\\u2014such as addressing the model quantization generalization issues caused by the Hessian matrix. For this reason, I am always supportive of related research efforts.\\n\\nHowever, CDQuant provides limited theoretical innovation, particularly in terms of its integration of GPTQ and Greedy Search, and it falls short of fully addressing the core challenges associated with GPTQ. Moreover, the greedy search approach undermines its quantization efficiency. Overall, CDQuant fails to demonstrate sufficient potential to replace GPTQ. \\n\\nSo the score (5) is appropriate, and I will maintain it.\"}", "{\"comment\": \"As shown in Table 7 and Table 8, under the configurations of w3a16g128 and w4a16g128, the accuracy of GPTQ, CD, and BCD is comparable.\"}" ] }
BegT6Y00Rm
PREDICTING THE BEHAVIOR OF AI AGENTS USING TRANSFER OPERATORS
[ "Shiqi Zhang", "Darshan Gadginmath", "Fabio Pasqualetti" ]
Predicting the behavior of AI-driven agents is particularly challenging without a preexisting model. In our paper, we address this by treating AI agents as stochastic nonlinear dynamical systems and adopting a probabilistic perspective to predict their statistical behavior using the Fokker-Planck equation. We formulate the approximation of the density transfer operator as an entropy minimization problem, which can be solved by leveraging the Markovian property and decomposing its spectrum. Our data-driven methodology simultaneously approximates the Markov operator to perform prediction of the evolution of the agents and also predicts the terminal probability density of AI agents, such as robotic systems and generative models. We demonstrate the effectiveness of our prediction model through extensive experiments on practical systems driven by AI algorithms.
[ "stochastic differential equations", "markov process", "operator theory" ]
Reject
https://openreview.net/pdf?id=BegT6Y00Rm
https://openreview.net/forum?id=BegT6Y00Rm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sTr4tRjMhr", "hGcAaxTGbn", "dI4MmQVhVz", "NCRDdTyRcU", "Hej0FokJbK", "424aFIwJZU" ], "note_type": [ "decision", "official_review", "meta_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1737524173257, 1731035975031, 1735456587037, 1730642990283, 1732481280998, 1729759033110 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12209/Reviewer_nU6Y" ], [ "ICLR.cc/2025/Conference/Submission12209/Area_Chair_vtHe" ], [ "ICLR.cc/2025/Conference/Submission12209/Reviewer_s1m7" ], [ "ICLR.cc/2025/Conference/Submission12209/Reviewer_nU6Y" ], [ "ICLR.cc/2025/Conference/Submission12209/Reviewer_h16B" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper addresses the challenge of predicting AI agents' behavior, treating these agents as stochastic nonlinear dynamical systems. Using a probabilistic approach, the authors propose a framework based on the Fokker-Planck equation to predict statistical behaviors via an entropy minimization strategy. Their primary contribution is the PISA algorithm, which enables accurate predictions of agents' behavioral density evolution, particularly over long horizons. PISA leverages the spectral decomposition theorem to simultaneously approximate the Markov operator from agent trajectory data and predict asymptotic behavior. The authors demonstrate PISA's effectiveness in diverse applications, including robot trajectory prediction, generative model behavior, and pedestrian movement forecasting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a unique probabilistic perspective on AI agent behavior, combining concepts from stochastic processes with the Fokker-Planck equation. The originality stems from adapting a statistical density-based approach for complex, high-dimensional AI-driven environments. The spectral decomposition-based formulation for behavioral density evolution is novel in predicting long-term agent alignment.\", \"The mathematical rigor is evident, with clear derivations of the density evolution framework and detailed algorithmic steps. The PISA algorithm\\u2019s grounding in spectral decomposition provides robust theoretical backing. Although, it might be possible that I have not completely understood some parts of the proof.\", \"I feel that this research tries to address the need to understand and predict the behavior of complex AI agents, which has critical implications for fields requiring safety and reliability in autonomous systems. Applications like reinforcement learning, generative modeling, and pedestrian prediction show the method's versatility.\"], \"weaknesses\": [\"While PISA demonstrates strong performance theoretically and in small-scale applications, its practicality in real-time, high-dimensional systems may be limited. The algorithm's scalability with respect to density estimation (e.g., kernel density estimation) needs clearer justification or further exploration in high-dimensional environments.\", \"The paper's assumption of Markov properties in agent dynamics may not always hold in certain AI-driven systems, such as those influenced by long-term dependencies or non-stationary environments. 
Additionally, relying on a fixed Gaussian kernel could introduce estimation bias, potentially underestimating density variations in non-Gaussian distributions, especially for high-variance AI behaviors.\"], \"questions\": \"1. Could the authors elaborate on the feasibility of adapting PISA for real-time, high-dimensional AI systems? Additionally, has the choice of Gaussian kernel in KDE been optimized for different applications? Would adaptive kernel techniques enhance density estimation accuracy?\\n2. The authors mention the computational resources used for experiments but do not discuss performance time or computational trade-offs explicitly. Can the authors quantify PISA's computational efficiency compared to DPDD and Meng et al., especially in scenarios requiring frequent density updates?\\n3. How robust is PISA if the Markov assumption for AI agent dynamics is slightly violated? Could incorporating non-Markovian extensions or memory-enhanced operators enhance prediction accuracy for more complex behaviors?\\n4. The paper evaluates PISA primarily through KL divergence. Have other evaluation metrics been considered (e.g., likelihood estimation or out-of-sample testing)? It would be insightful to understand the robustness of PISA across various performance metrics.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a behavior prediction method that treats agents as stochastic nonlinear dynamical systems and uses the Fokker-Planck equation to predict the statistical behavior. The data-driven approach, named PISA, simultaneously approximates the Markov operator for predicting the evolution of agents and their terminal probability density. The method's effectiveness is demonstrated across various applications, including robot trajectory prediction, generative model behavior, and pedestrian movement forecasting.\\n\\nThe reviewers acknowledge the paper's novel probabilistic perspective. The use of spectral decomposition for behavioral density evolution is also seen as a strength. However, there are concerns regarding the clarity and accessibility of the paper, particularly for those not well-versed in statistical mechanics. Reviewers also noted that the initial literature review was incomplete, and that the paper did not engage sufficiently with related work. While these concerns were largely addressed in revisions, the paper in its current state is 20% over the maximum length (12 out 10 maximum pages).\", \"additional_comments_on_reviewer_discussion\": \"The authors significantly improved the paper during the rebuttal, by providing a more comprehensive literature review, clarifying technical details, adding mathematical background, and including an appendix with hyperparameters. They also provided code for the experiments. Reviewers acknowledged that the authors had improved the paper by addressing some of the concerns regarding the completeness of the literature review and the clarity of the method. However, some reviewers felt that some points were not completely addressed, such as the discussion of alternative methods, the role of hyperparameters in the cost function, and the overall presentation.\\n\\nAt the end, the paper in the current state, still requires a significant update, given that it is over the allowed page limit. 
The scope of the change required to shorten it would call for another round of peer reviews.\"}", "{\"summary\": \"This paper introduces a method for producing the statistical behavior (terminal distribution) of agents using the Fokker-Planck equation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and well-motivated.\\nThe statistical analysis appears to be sound.\\nTo the best of this reviewer's knowledge, this exact method for agent modeling has not been previously defined.\", \"weaknesses\": \"The paper seems to suggest that the idea of agent modeling originated in the 2020s. All related work is from that time or later.\\nThere is a survey entitled \\\"Autonomous Agents Modelling Other Agents\\\" that was published in 2017 and covers research from at least the two decades prior to that. To properly assess the novelty of this approach, the authors should relate to the prior research and identify the closest methods for direct comparison.\\n\\nFinding the \\\"terminal distribution\\\" of the agent behavior appears to amount to finding the stationary distribution of a Markov process. Is that the case? If so, there have been prior methods for doing so that ought to be compared.\\n\\nIf I understand correctly, the approach is designed for a purely single agent context, without any strategic interactions among agents. Nonetheless, it is assessed in a pedestrian domain, which is an inherently multiagent setting. Methods such as replicator dynamics (e.g. see the work of Karl Tuyls et al.) could be brought to bear for finding the terminal distribution.\", \"questions\": \"Please see the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I agree with reviewer h16B's points that some parts of the paper were hard to understand and were not easily accessible.\"}", "{\"summary\": \"The paper aims to predict the behavior of AI systems using density estimation and models of the stochastic dynamics of the systems. It introduces a method based on learning the evolution of probability densities from trajectory data. The probability densities at each time step are first non-parametrically approximated using kernel density estimates. A transfer operator on the densities is then learned assuming a particular proposed functional form involving neural networks. The method is evaluated on three examples: a reinforcement learning agent in a continuous control domain, a score-based generative model, and a dataset of pedestrian walking. In comparison with two baseline methods, the predicted densities are closer to the true densities in terms of the KL divergence.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"In general, the paper tackles an interesting problem and the results are clearly presented. The key strength of the method seems to be the ability to directly predict the stationary distribution (eq. 9), which would not be straightforward to obtain for many other approaches.\", \"weaknesses\": \"Because the paper does not situate the proposed method in a broader context (neither by discussing other work on the problem of predicting AI systems' behavior, nor by discussing how the results compare to alternative approaches), I had a hard time judging the contribution of this paper.\\n\\n1. The introduction section contains basically no citations. 
Has nobody else worked on the problem of predicting the behavior of AI systems? There are several places where the text hints at relevant work, but does not cite any. Here are some examples (although this is not an exhaustive list):\n - \"The integration of artificial intelligence (AI) models within autonomous agents has transformed many fields\" (l. 26 - 27)\n - \"there has been a notable increase in modeling these behaviors as nonlinear dynamical systems\" (l. 46 - 47)\n - \"techniques such as Dynamic Mode Decomposition (DMD) and its generalizations have demonstrated significant capability in revealing the underlying evolutionary laws of AI agents\" (l. 47 - 48)\n - \"Although the application of probabilistic models to learn and predict the statistical behavior of complex AI agents has increasingly attracted interest in areas such as autonomous driving, motion planning, and human-robot interaction\" (l. 53 - 54)\n2. The paper does very little to be accessible to a broader audience that might be interested in predicting behavior but is not well-versed in statistical mechanics. In particular, a section providing a bit more mathematical background and intuition about the Perron-Frobenius operator (transfer operator / Markov operator) would be useful.\n3. The part where the reasoning behind the model and loss function is explained (Section 4) would be more useful in the methods section instead of after the results.\n4. The description of the methods is not very detailed, so that reproducing the results would be difficult just from the paper. For example, there is no indication of how the hyperparameters of the training were set, and little detail is provided on the neural network architectures and the training procedure. I understand that there are page limits, but there is also no code provided and no supplementary material.\n5. No code is provided as supplementary material, which might have been helpful to address the shortcoming mentioned in the previous point and to get an intuitive understanding about how the abstract mathematical concepts are represented in concrete code.\n6. As someone who was not familiar with this kind of modeling of the evolution of non-parametrically estimated densities, it was hard for me to judge how the approach compares to alternative approaches, such as directly modeling the stochastic dynamics of the state (e.g. using a parametric model). This narrowness might be fine if the paper wants to provide a specific technical contribution in the context of these approaches. But the way the introduction is set up with the quite general goal of predicting the behavior of AI systems, I expected at least some comparison to alternative approaches. This applies to several sections of the paper\n a. The literature review is quite narrowly focused on methods for estimating the transfer operator. What about other approaches to reachability analysis, trajectory prediction etc.?\n b. The results only include a comparison with two other methods. Are these the only applicable baseline methods? Please justify.\n c. The discussion section also does not set the method in a broader context. It hints at the limitations resulting from using KDE to approximate the densities. How does this compare to possible alternative approaches?\n7. Formatting errors\n - Section 2.1 still contains parts of the instructions for using ICLR's Latex template (l. 
180 - 181)\\n - The citation style often makes no distinction between in-text citations and citations that should be in parentheses (e.g. l.60, l. 105 - 106, l. 153)\", \"questions\": [\"Is the method specific to the use case of predicting the behavior of AI systems or is it applicable in general to stochastic dynamical systems? The introduction suggests the former, but the rest of the paper the latter.\", \"What is the role of the two hyperparameters of the loss function ($\\\\lambda$ and $\\\\mu$)? How were they set? How does changing these hyperparameters affect the performance of the method?\", \"Algorithm 1: how are the constraints on the functions $G_\\\\gamma^i$ and $A_\\\\theta^i$ enforced? Which optimization algorithm is used to train the neural networks?\", \"Non-parametric density estimation techniques are known to be particularly prone to the curse of dimensionality. How does the method scale to higher-dimensional spaces?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
BefqqrgdZ1
UltraLightUNet: Rethinking U-shaped Network with Multi-kernel Lightweight Convolutions for Medical Image Segmentation
[ "Md Mostafijur Rahman", "Radu Marculescu" ]
In this paper, we introduce UltraLightUNet (2D and 3D), an ultra-lightweight, multi-kernel U-shaped network for medical image segmentation. The core of UltraLightUNet consists of a new Multi-kernel Inverted Residual (MKIR) block, which can efficiently process images through multiple kernels while capturing complex spatial relationships. Additionally, our Multi-kernel Inverted Residual Attention (MKIRA) block refines and emphasizes image salient features via sophisticated convolutional multi-focal attention mechanisms. UltraLightUNet strategically employs the MKIR block in the encoder for feature extraction and the MKIRA block in the decoder for feature refinement, thus ensuring targeted feature enhancement at each stage. With only 0.316M #Params and 0.314G #FLOPs, UltraLightUNet offers an ultra-lightweight yet powerful segmentation solution that outperforms state-of-the-art (SOTA) methods across twelve medical imaging benchmarks. Notably, UltraLightUNet surpasses TransUNet on DICE score while using 333$\times$ fewer #Params and 123$\times$ fewer #FLOPs. Compared to the lightweight model, UNeXt, UltraLightUNet improves DICE scores by up to 6.7% with 4.7$\times$ fewer parameters. UltraLightUNet also outperforms recent lightweight models such as MedT, CMUNeXt, EGE-UNet, Rolling-UNet, and UltraLight_VM_UNet, while using significantly fewer #Params and #FLOPs. Furthermore, our 3D version, UltraLightUNet3D-M (1.42M #Params and 7.1G #FLOPs), outperforms SwinUNETR (62.19M #Params, 328.6G #FLOPs) and nn-UNet (31.2M #Params, 110.4G #FLOPs) on the FETA, MSD Brain Tumor, Prostate, and Lung Cancer segmentation benchmarks. This remarkable performance, combined with substantial computational gains, makes UltraLightUNet an ideal solution for real-time and point-of-care services in resource-constrained environments. We will make the code publicly available upon paper acceptance.
[ "Ultra Lightweight CNN", "Medical Imaging", "Semantic Segmentation", "3D Segmentation" ]
https://openreview.net/pdf?id=BefqqrgdZ1
https://openreview.net/forum?id=BefqqrgdZ1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zHtjcjrsCM", "y2bpcikahN", "xhSYkJ9mHS", "xD9YHf6U6u", "wE5YrmZyAI", "vQ4dpubX0m", "qrjZd7R3ZU", "nT0SddbmDV", "nQJGvkAWNr", "mO680ZuXU6", "kTwD4QYrQK", "kHENseYWdk", "jXRYTkbfKs", "j5q0DHKL2f", "igrciVZueO", "eIxJIothDE", "cGdReEkeO6", "aFpUgfjGSL", "Zk34XvBjKN", "Za8YEu6GU6", "YBysrFc4YP", "XN1ujzywh6", "X2jgI6Tz7z", "VqlmB4feRg", "U1G0NO9vkj", "SAxvf4su2z", "SAlMIiDh4Z", "S3Iua60sYO", "Ry0hnMuce2", "R2yhRNwCEX", "PaZLVxTVYc", "OP5xpBHySo", "J0aduEKZmK", "Ha6Dnr7tUN", "GAllhzBxBK", "DqpulCJoml", "BtJYoxvjhF", "Bb1DGzpZDE", "B92K3DvvKm", "9V2Nx8AzsN", "6PUB3BNmLn", "5kS66iYdrF", "5eIOqdb3nE", "5YbrLjCrPB", "4GPRQTk8D8", "2i7iDDhe9W", "2GIExLWmWL", "1Sbp7LClHI", "0XJlZ2KxOG" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732623850424, 1730721938942, 1732518796744, 1733111807716, 1732516981360, 1733187311159, 1732518523612, 1733110711499, 1732330652959, 1733110030273, 1733110806823, 1732559415685, 1729959651987, 1733109654896, 1732330334135, 1730086616344, 1738120979626, 1732521017462, 1732589965571, 1732518081424, 1733110314317, 1733111576622, 1732559527674, 1732330143701, 1732518696033, 1732330528448, 1733111290909, 1732558974505, 1732576545034, 1732630308712, 1733110486002, 1732631107281, 1733156397146, 1732589719918, 1732558488382, 1732588721554, 1732589829113, 1732330003631, 1729151135005, 1732588596249, 1733161975513, 1733110174394, 1732517136772, 1733167390358, 1732588829624, 1733114777898, 1733111453883, 1732518195739, 1732521082012 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_SWMn" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_jeKK" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_SWMn" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_jk4q" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_jeKK" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_SWMn" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Reviewer_zJLj" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ], [ "ICLR.cc/2025/Conference/Submission8307/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your effort and response. While some of my concerns (e.g., ablation results) have been addressed, I believe more focus is needed on providing theoretical support for the proposed model. The Learning Society should prioritize demonstrating technical insights and theoretical rationale over merely emphasizing performance improvements and simple interpretations. In current state, my score remains unchanged.\"}", "{\"summary\": \"The authors introduce UltraLightUNet, an ultra-lightweight 2D and 3D U-shaped network designed for medical image segmentation. This network features novel Multi-kernel Inverted Residual (MKIR) and Multi-kernel Inverted Residual Attention (MKIRA) blocks, aiming to effectively balance computational efficiency with high segmentation performance across multiple medical imaging benchmarks. The architecture is motivated by the need to reduce computational demands in point-of-care diagnostics, particularly in resource-constrained environments.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"The experimental results are through, with diverse datasets and settings.\", \"In-depth ablation study and parameters consideration are presented\"], \"weaknesses\": [\"Despite the paper presents diverse experiments and ablation studies, I see a very close similarity to a CVPR 2024 paper, named EMCAD [1], which I detail in the next parts (Note that the EMCAD paper is not cited!)\", \"The proposed method (Figure 2) is clearly the same as last year CVPR paper, with approximately no change in any of the modules both in encoder and decoder of the network. 
Therefore, I see no novelty and contribution in the paper submitted.\", \"How is it possible for your method to just have 27000 parameters, while last year paper with the same architecture has at-least 3M parameters (its base version).\", \"The proposed method is not compared to other SOTA networks (such as [2]) which perform much better over some of the datasets used in the paper (such as ISIC)\", \"[1] Rahman MM, Munir M, Marculescu R. Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024 (pp. 11769-11779).\", \"[2]Azad R, Niggemeier L, H\\u00fcttemann M, Kazerouni A, Aghdam EK, Velichko Y, Bagci U, Merhof D. Beyond self-attention: Deformable large kernel attention for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2024 (pp. 1287-1297).\"], \"questions\": \"Please refer to weaknesses section.\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"I have significant concerns regarding research integrity as the submitted manuscript appears to exhibit considerable overlap with another paper published in CVPR 2024 [1]. Specifically, the methodology described in this paper\\u2014including key architectural elements and experimental setup\\u2014seems to be identical to the aforementioned publication.\\n\\nGiven the extent of these similarities, I suspect plagiarism or a potential dual submission. I recommend that the conference organizers investigate this matter further to ensure that proper academic standards are upheld.\\n\\n[1] Rahman MM, Munir M, Marculescu R. Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024 (pp. 11769-11779).\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the comments of Reviewer zJLj: Theoretical development\", \"comment\": \"### **Q4.1. The theoretical development is not solid. Authors reviewed Vision Transformers in the related work, but it is not related to the work in this manuscript. The network in the manuscript employs a convolutional neural network architecture and attention mechanisms.**\\n\\nWe thank the reviewer for their thoughtful feedback and for pointing out areas for clarification regarding the theoretical development and relevance of Vision Transformers in our work. Below, we address this comment:\\n\\nWe included a discussion of Vision Transformers in the Related Work section to provide context on **computationally expensive approaches**, such as **TransUNet** and **SwinUNet**, which are widely used and popular in medical image segmentation. These methods demonstrate strong performance, but come with high computational demands due to their reliance on transformer-based architectures.\\n\\nOur proposed **multi-kernel design with depth-wise convolutions** directly addresses these limitations by ensuring **lightweight efficiency** while maintaining high performance. This aligns with the motivation of our work, which emphasizes computational efficiency for resource-constrained environments. The inclusion of Vision Transformers in the Related Work highlights this contrast and establishes the relevance of our lightweight design in comparison to transformer-based approaches.\\n\\n### **Q4.2. 
Additionally, insufficient theoretical development in the Method section, and it is unclear how this design improved the segmentation performance.**\\n\\n**UltraLightUNet\\u2019s design is grounded in clear theoretical concepts**, which are explained in the Method section and validated in the Ablation Study section:\\n\\n- **Theoretical Basis**: Most existing architectures in computer vision rely on theoretical concepts to justify their design choices (e.g., Vision Transformers focus on self-attention). Similarly, our approach employs the **multi-kernel trick** to improve segmentation performance and **depth-wise convolutions** for lightweight computation. While our contribution is not theoretical in nature, these foundational concepts ensure that UltraLightUNet achieves both high performance and extreme efficiency.\\n\\n- **Method Section**: We elaborated on how **Multi-Kernel Inverted Residual (MKIR)** and **Multi-Kernel Inverted Residual Attention (MKIRA)** blocks work together to balance performance and efficiency. The use of **multi-kernel depth-wise convolutions (MKDC)** enables adaptable feature extraction, while **Convolutional Multi-Focal Attention (CMFA)** enhances critical features.\\n\\n- **Empirical Validation**: The **Ablation Study** (Tables 4 and 5 in our initial submission) demonstrates the impact of each module on segmentation performance. These results show how the theoretical design improves segmentation accuracy while maintaining computational efficiency.\\n\\nTo address this reviewer\\u2019s concern, we plan to further emphasize the connection between the theoretical design and its performance improvements in the revised manuscript, explicitly linking the multi-kernel design to segmentation accuracy.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"### **C1. Thank you for your effort and response. While some of my concerns (e.g., ablation results) have been addressed, I believe more focus is needed on providing theoretical support for the proposed model. The Learning Society should prioritize demonstrating technical insights and theoretical rationale over merely emphasizing performance improvements and simple interpretations. In current state, my score remains unchanged.**\\n\\n**Response:** \\n\\nThank you for your thoughtful feedback. While we acknowledge the importance of theoretical developments in advancing machine learning, it is also evident that empirical innovations, even without proposing new learning theories, have historically made impactful contributions to the field and have been well-recognized by the ICLR community. A few examples are given below. \\n\\n### **Historical Context of Empirical Papers at ICLR** \\n\\nSeveral seminal architecture-focused papers accepted at ICLR have made substantial contributions without introducing any new learning theory, but by innovating through design principles and pushing the SOTA results: \\n\\n- **VGG** (Simonyan et al., ICLR 2015, 133516 citations): Introduced deeper networks by stacking convolutional layers in a straightforward architecture, becoming a cornerstone for CNN research without any new theoretical insights. \\n\\n- **3D UX-Net** (Lee et al., ICLR 2023, 141 citations): Extends the ConvNeXt block to volumetric data by designing a lightweight encoder but relies on a computationally expensive existing decoder. Its novelty lies in adapting an existing module for 3D processing rather than introducing any new theoretical concepts. 
\\n\\n- **MobileViT** (Mehta et al., ICLR 2022, 1454 citations): A hybrid CNN-Transformer model designed for resource-constrained settings, emphasizing practical deployment over theoretical learning innovations. \\n\\n- **CycleMLP** (Chen et al., ICLR 2022, 262 citations): Proposes local window-based MLPs to achieve computational efficiency, focusing on practical adaptability rather than novel theoretical foundations. \\n\\nThese examples illustrate that impactful empirical contributions do not necessarily require new learning theories in order to be relevant contributions to ICLR, but rather can advance the field by improving the SOTA in performance, efficiency, and adaptability. \\n\\n### **Our Contributions** \\n\\nUltraLightUNet aligns with this tradition of empirical innovation, contributing to the growing body of lightweight, resource-efficient models for real-time medical imaging. Our contributions include: \\n\\n**1. End-to-End Lightweight Design:** A novel encoder-decoder architecture built entirely from scratch for extreme efficiency in both 2D and 3D tasks. \\n\\n**2. Conceptually New Modules:** \\n- **Multi-Kernel Inverted Residual (MKIR)** and **MKIRA** blocks introduce a flexible multi-kernel approach which provides a new mathematical basis to address various segmentation challenges. \\n\\n- New **3D extensions** of all modules for volumetric medical imaging, a contribution not present in prior works like ConvNeXt. \\n\\n**3. Experimental Validation:** Comprehensive evaluations across 12 datasets demonstrate competitive accuracy with significantly lower computational costs, a critical need in medical imaging. \\n\\n### **Perspective on Learning Theories vs. Empirical Innovations** \\n\\nWe would like to start by stating that we definitely share the same desire as this Reviewer to see the entire field of ML for computer vision (and beyond) entirely build on solid (theoretical) principles. But given that the deep learning research is such a fluid and fast evolving field, this remains for now an aspirational objective, at best. A lot of empirical contributions remain for now (and perhaps the foreseeable future) very relevant if they can improve the SOTA and can stimulate more research in the area. \\n\\nFrom this perspective, while our work does not propose a new learning theory, it provides substantial architectural advancements and redefines the SOTA in image segmentation with limited resources, similar to many influential works cited (or not even mentioned) above. We believe that these contributions align well with the ICLR community\\u2019s history of recognizing impactful architectural innovations and their relevance to practical problems in the field. \\n\\nWe hope this response provides clarity on the significance of our work and its alignment with ICLR\\u2019s standards. Thank you again for your constructive feedback.\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback and for pointing out areas for improvement. Below, we address all the comments and clarify our approach:\\n\\n### **Q1. The overall novelty is low. The author mainly proposed two modules, CMFA and MKDC modules. MKDC just applied several depth-wise convolutional layers to extract features from different channels. This idea has already been proposed in the Scale-Aware Modulation Meet Transformer [1]. MKDC module employed average and max pooling, and convolution layers, which are the channel-wise and spatial-wise attention. 
Many various attention-based modules have been proposed between 2018 and 2020 (like the CBAM [2]). The overall network is similar with [3] [4].**\\n\\n---\\n\\n**Response:**\\n\\n**UltraLightUNet** introduces significant contributions both at **architecture** and **module** levels. At the **architecture level**, UltraLightUNet offers a fully integrated, lightweight encoder-decoder design, specifically tailored for resource-constrained scenarios, thus achieving competitive segmentation performance across both 2D and 3D tasks. At the **module level**, UltraLightUNet incorporates innovative designs like the **Multi-Kernel Inverted Residual (MKIR)** and **Multi-Kernel Inverted Residual Attention (MKIRA)** blocks, enabling efficient feature extraction and refinement with minimal computational cost. Below, we provide details on these contributions.\\n\\n---\\n\\n### **1. New End-to-End Lightweight Design**\\n\\n**UltraLightUNet** is a novel encoder-decoder architecture built entirely from scratch to ensure extreme lightweight efficiency. It is specifically designed for resource-constrained scenarios, including real-time medical diagnostics and point-of-care applications. Both the encoder and decoder are specifically optimized with novel lightweight modules, eliminating the need for pre-trained components while maintaining competitive performance across diverse tasks.\\n\\n- **Encoder Design**: The encoder is built using our proposed **Multi-Kernel Inverted Residual (MKIR) Block**, which performs efficient feature extraction through a combination of **multi-kernel depth-wise convolutions** and an inverted residual structure. This ensures adaptable feature extraction with minimal computational overhead, supporting both local (small kernels) and global (large kernels) context extraction.\\n\\n- **Decoder Design**: The decoder is constructed with our novel **Multi-Kernel Inverted Residual Attention (MKIRA) Block**, which combines **Convolutional Multi-Focal Attention (CMFA)** for local attention and **MKIR** for multi-kernel refinement. Additionally, the decoder employs a **Grouped Attention Gate (GAG)** for efficient skip connection aggregation and the **simple bilinear upsampling**, thus ensuring lightweight refinement and reconstruction of segmentation outputs.\\n\\nWith this integrated design, **UltraLightUNet** achieves unmatched efficiency with only **0.316M parameters and 0.314 GFLOPs** for its 2D base model, and **0.453M parameters and 3.42 GFLOPs** for its 3D base model. This is a significant advancement compared to related methods like **SAMT** the reviewer mentions (32M parameters, 7.7 GFLOPs), **CASCADE** (34.12M parameters, 7.62 GFLOPs), and **EMCAD** (26.76M parameters, 5.6 GFLOPs), all of which depend on pre-trained encoders or computationally expensive modules. In contrast, UltraLightUNet\\u2019s lightweight design specifically addresses the computational constraints of real-world applications in point-of-care scenarios without sacrificing performance.\", \"title\": \"Response to the comments of Reviewer zJLj: Overall novelty (Part1)\"}", "{\"comment\": \"Thank you for providing such a relevant and comprehensive overview. I agree with some of the authors' responses; however, I still believe that top-tier AI conference papers should offer explicit or at least implicit insights. 
Considering that AI development is no longer in its early stages, it is crucial to focus on a deeper understanding of the field rather than merely pursuing performance improvements.\"}", "{\"comment\": \"### **Q3.1. The motivation is unclear. The author mentioned that several 3D segmentation networks, including 3D U-Net, SwinUNETR, 3D UX-Net, UNETR, nnU-net and nnFormer, have high computational demands, so they proposed their lightweight network. However, these baseline networks are proposed in 2021 and 2022, and in recent years many lightweight 3D segmentation networks have been proposed and this challenge has been tackled, such as [6][7][8][9][10]. However, authors didn't discuss and explore these lightweight networks.**\\n\\n---\\n\\n**Response:**\\n\\nWe thank the reviewer for their thoughtful feedback regarding the clarity of the motivation and exploration of lightweight segmentation networks. Below, we provide clarifications and address the concerns raised:\\n\\n**Motivation:** The primary motivation of our work is to address the **computational demands of existing 2D segmentation networks** by proposing an **ultra-lightweight architecture** that achieves competitive performance with significantly reduced parameters and FLOPs. Recognizing the versatility of our architecture, we then **extend it to 3D** by incorporating volumetric modules, ensuring that the same architecture works efficiently for both 2D and 3D segmentation tasks. This unified, resource-efficient approach allows UltraLightUNet to cater to a broader range of applications, including 3D volumetric tasks, with extreme efficiency.\\n\\n**Addressing Missing Lightweight Networks:** The reviewer mentioned several lightweight segmentation networks ([6], [7], [8], [9], [10]) that were not fully explored in our initial submission. Our clarification is as follows:\\n\\n- **[6] (SlimUNETR, Pang et al., 2023)**: To strengthen our evaluation, in **Table R2 above** (also in Tables 3, 12 of the revised draft), we have now added results for **SlimUNETR** (a recent lightweight 3D segmentation method). Table R2 shows that **UltraLightUNet3D-S** achieves **10.19% higher DICE** score on Task05 Prostate, **0.17% higher on FETA**, **1.47% higher on Synapse 8-organ**, and **2.25% higher on Synapse 13-organ** while using **9.9x fewer #parameters and 5.9x fewer #FLOPs** than SlimUNETR. Larger variants like **UltraLightUNet3D** improve further, with **UltraLightUNet3D-M** showing a total improvement of **12.50% on Task05 Prostate**, **1.42% on FETA**, **2.16% on Synapse 8-organ**, and **4.90% on Synapse 13-organ** compared to SlimUNETR. These results demonstrate **UltraLightUNet3D\\u2019s significant improvements** in performance while maintaining exceptional computational efficiency and cost.\\n\\n- **[7] (CMUNeXt)**: Already included in our initial submission (see Table 1, 2, 11 in our initial submission); again, UltraLightUNet demonstrates its superior performance and efficiency compared to CMUNeXt.\\n- **[8], [9], [10]**: These methods are for 2D segmentation and do not extend to volumetric 3D segmentation tasks. 
While they provide valuable contributions for 2D segmentation tasks, their lack of 3D applicability makes them less relevant for a direct comparison with UltraLightUNet 3D version.\\n\\nWith the addition of **SlimUNETR** in our rebuttal and revised manuscript, along with the necessary comparisons for 2D methods, we believe our paper provides comprehensive coverage of both 2D and 3D lightweight segmentation networks.\\n\\n---\", \"title\": \"Response to the comments of Reviewer zJLj: Motivation (Part1)\"}", "{\"title\": \"Official Comment by Authors: Part1\", \"comment\": \"### **C1. The novelty is low since the way to solve the problem has been widely explored and is the same, such as employing depth-wise convolution for lightweight design and splitting channels for isolated convolutions. Thus, putting much efforts on applying the exactly same way to solve the same problem is not very interesting, and this application from N to N+1 does not make impact on the medical image segmentation tasks. The architectural design cannot be considered as a novel design since the overall design (U-shaped encoder-decoder) is always used in the medical image segmentation. Incorporating several modules into this architecture does not revolutionize the architectural design.**\\n\\n**Response:** \\n\\nWe thank the reviewer for their follow-up comments and the opportunity to clarify our contributions further. Below, we provide a detailed response to these new concerns. \\n \\n\\n### **Addressing Novelty in Architectural Design** \\n\\nYes, the U-shaped encoder-decoder design is a well-established framework in medical image segmentation, originating with the seminal U-Net paper in 2015 (Ronneberger et al.). However, we believe that using the U-Net popularity as a penalizing argument for further creative improvements of this basic architecture is too reductionist in nature and actually disconnected from the reality in this area. In fact, as evidenced by a wide body of literature, this foundational U-design has seen continuous improvements over the years to address specific challenges in accuracy and computational efficiency. Notable examples include Attention UNet (Oktay et al., MIDL 2018), UNeXt (Valanarasu et al., MICCAI 2022), TransUNet (Chen et al., Medical Image Analysis 2024), EMCAD (Rahman et al., CVPR 2024), **3D UX-Net (Lee et al., ICLR 2023)**, and Swinunetr-v2 (He et al., MICCAI 2023), among others. These works improve over the basic U-Net architecture and highlight the significance of improving U-shaped architectures rather than aiming only for entirely revolutionary designs, which are rare. In fact, one of the most enduring contributions of the UNet architecture is precisely this wide-open area of research it enables which provides ample opportunities for further improvements to the SOTA. \\n\\nOur contribution aligns perfectly with this ongoing trend, but with a distinct focus on addressing **extremely low computational costs** while maintaining **high segmentation accuracy**. Unlike prior works that often increase architectural complexity to improve accuracy (e.g., TransUNet, 3D UX-Net, SwinUNETR-v2), UltraLightUNet provides a new type of solution focused on extreme efficiency, thus making it uniquely suited for real-time, resource-constrained scenarios such as point-of-care diagnostics. 
From this perspective, the UltraLightUNet architecture redefines the SOTA in image segmentation so this is why we believe our contribution is worthwhile and we should not be penalized for providing better results than SOTA.\\n\\n### **Advancing Lightweight Design in Biomedical Imaging** \\n\\nYes, depth-wise convolution and channel splitting are commonly used. However, it is essential to recognize that leveraging established concepts in novel ways to meet specific challenges and redefine the SOTA is a widely accepted and desirable approach in the field. For instance: \\n\\n- MobileNet (Howard et al., 2017) and EfficientNet (Tan et al., 2019) have successfully advanced computer vision using depth-wise convolutions (an idea already known at that time) for lightweight designs. \\n\\n- In biomedical imaging, UNeXt (Valanarasu et al., 2022), EGE-UNet (Ruan et al., 2023), and Rolling-UNet (Liu et al., 2024) have similarly employed lightweight modules for efficient segmentation, so simply improving SOTA with better ideas. \\n\\nUltraLightUNet contributes to this trend in research by introducing the new Equation 2 (reproduced in our first response of this rebuttal) that makes the mathematics of **Multi-Kernel Inverted Residual (MKIR)** and **Multi-Kernel Inverted Residual Attention (MKIRA)** blocks far more efficient by going beyond standard depth-wise convolutions by enabling adaptable feature extraction across diverse spatial contexts. Additionally, our integration of these modules into both 2D and 3D designs extends the applicability of lightweight models to volumetric medical imaging, a challenging and underexplored area. So, it is the new mathematical basis and the extended scope of our research that make our contribution worthwhile.\"}", "{\"title\": \"Response to the comments of Reviewer jeKK: Summary of Changes\", \"comment\": \"### **Summary of Changes**\\n\\nTo address the reviewer\\u2019s concerns and strengthen the manuscript, we will:\\n\\n1. **Explicitly Cite EMCAD** and include a detailed comparison table (as provided above) highlighting differences in motivation, architecture, and experimental scope.\\n\\n2. **Expand Methodology Section** to provide additional details on the **MKIR** and **MKIRA** blocks, emphasizing their novelty and roles in achieving great performance with this ultra-light architecture.\\n\\n3. **Enhance Experimental Scope** to include additional comparisons with recent SOTA methods, such as **Deformable Large Kernel Attention**, and validate the model on challenging datasets like **BRATS**.\\n\\n---\\n### **Conclusion**\\n\\nWhile we strongly disagree with the claims of similarity to EMCAD and the ethical concerns raised on our paper, the revised version of our paper will demonstrate the **originality, significance, and impact of UltraLightUNet** as a novel contribution to lightweight medical image segmentation.\\n\\nThank you for your time and constructive feedback. We look forward to submitting the revised manuscript.\\n\\n\\n### **References**\\n\\nRahman, M.M., Munir, M. and Marculescu, R., 2024. Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 11769-11779).\\n\\nSandler, M., Howard, A., Zhu, M., Zhmoginov, A. and Chen, L.C., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition* (pp. 4510-4520).\\n\\nHu, J., Shen, L. 
and Sun, G., 2018. Squeeze-and-excitation networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition* (pp. 7132-7141).\\n\\nHe, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). \\n\\nTan, M. and Le, Q., 2019, May. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105-6114). \\n\\nAzad, R., Niggemeier, L., H\\u00fcttemann, M., Kazerouni, A., Aghdam, E.K., Velichko, Y., Bagci, U. and Merhof, D., 2024. Beyond self-attention: Deformable large kernel attention for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1287-1297).\"}", "{\"comment\": \"**Response:**\\n\\nWe thank the reviewer for their follow-up comments and the opportunity to clarify our contributions further. Below, we address this reviewer\\u2019s concerns directly and highlight the unique aspects of our work. \\n\\n## 1. Novelty of UltraLightUNet Modules\\n\\n### **Multi-Kernel Inverted Residual (MKIR) Block**\\nThe MKIR block introduces a novel feature extraction mechanism based on our **multi-kernel trick** (i.e., *MKDC*) in Equation 2 (reproduced below):\\n\\n$MKDC(x) = CS\\\\left(\\\\sum_{k \\\\in K} DWCB_k(x)\\\\right)$\\n\\nWhere $DWCB_k(x) = ReLU6(BN(DWC_k(x)))$. Here, $DWC_k(x)$ is a depth-wise convolution with kernel $k$. Of note, our MKDC supports *both* $k_1=k_2$ (same-size kernels, e.g., $[3 \\\\times 3,3 \\\\times 3,3 \\\\times 3]$, $[5 \\\\times 5,5 \\\\times 5,5 \\\\times 5]$) *and* $k_1 \\\\neq k_2$ (different-size kernels, e.g., $[1 \\\\times 1,3 \\\\times 3,5 \\\\times 5]$), for $k_1, k_2 \\\\in K$. This flexibility in supporting both identical and different scale kernels distinguishes the MKDC from the EMCAD\\u2019s MSDC block (see Equation 5 in the EMCAD paper):\\n\\n$MSDC(x) = \\\\sum_{k \\\\in K} DWCB_k(x)$\\n\\nwhich is restricted only to **multi-scale designs** ($k_1 \\\\neq k_2$, e.g., $[1 \\\\times 1,3 \\\\times 3,5 \\\\times 5]$). This shows the new contribution MKIR brings compared to MSDC, mathematically speaking; this difference is the very basis for the excellent results of the UltraLightUNet architecture. \\n\\nIndeed, by enabling adaptable kernel configurations, MKIR offers efficient and versatile feature extraction tailored to application-specific needs, thus reducing computational cost drastically while achieving high segmentation accuracy. An illustrative example, various statistics, and empirical evidence supporting our claims are described next.\\n\\n**Explaining Multi-Kernel vs. Multi-Scale with an Illustrative Example:**\", \"let_us_start_with_an_analogy\": [\"Imagine we are drawing on a piece of paper with different types of paintbrushes: **small brushes**, **big brushes**, and a set of **both small and big brushes**. 
Now, let's say we want to color objects of different sizes, like:\", \"**Small objects**, such as tiny dots.\", \"**Large objects**, like big circles.\", \"**Mixed objects**, where we have both tiny dots and big circles.\", \"**The Multi-Kernel Scenario (akin to this ICLR paper):** In the multi-kernel scenario, we can choose our brushes based on what we need:\", \"**Only small brushes** for tiny dots.\", \"**Only big brushes** for big circles.\", \"**A mix of small and big brushes** for both tiny dots and big circles.\", \"This flexibility lets us adapt our tools (kernels) for the specific task at hand. In other words, we might say, \\\"we\\u2019ll use just small brushes for this drawing because it\\u2019s all tiny dots,\\\" or \\\"Let\\u2019s use a mix because we need to cover both.\\\"\", \"**The Multi-Scale Scenario (akin to the EMCAD paper):** In the multi-scale scenario, **we must always use a mix of small and big brushes together**, no matter what. Whether we have tiny dots, big circles, or both, we must use the mix of paintbrushes. This may work well for some cases, but it\\u2019s clearly less flexible because we can\\u2019t decide to use just one type of brush when we have only tiny or big objects.\", \"**How this Analogy Applies to Image Segmentation:** In medical imaging:\", \"**Small objects**: Tiny tumors or small lesions.\", \"**Large objects**: Big organs like the liver or spleen.\", \"**Mixed objects**: Both small lesions and large organs in the same image.\"], \"title\": \"Official Comment by Authors: Part1\"}", "{\"title\": \"Official Comment by Authors: Part2\", \"comment\": \"### **Addressing \\\"Application from N to N+1\\\"**\\n\\nWe respectfully disagree with the characterization of our work as an incremental application. While our design leverages depth-wise convolutions, it introduces novel mechanisms (based on a new mathematical basis in Equation 2) such as multi-kernel refinement for adaptable feature extraction and attention, as well as seamless 3D extensions of all modules. These innovations improve the state-of-the-art efficiency, which is critical for practical deployment in clinical settings which are scarce in resources. \\n\\nMoreover, our results demonstrate that UltraLightUNet outperforms (by orders of magnitude!) several heavier and lighter models in terms of computational cost and segmentation accuracy across diverse datasets, including both 2D and 3D tasks. This underscores the impact of our approach in advancing lightweight segmentation methods, particularly for real-time medical diagnostics.\\n\\n### **Broader Context and Future Impact** \\n\\nOur work is part of a growing trend in computer vision in general, and biomedical imaging in particular, toward developing **ultralightweight models** that can deliver high precision in real-time tasks. Examples of this trend include: \\n\\n- **MobileNets** (Howard et al., 2017) and **EfficientNet** (Tan et al., 2019) in computer vision, \\n\\n- **TinyBERT** (Jiao et al., 2019) in language processing, \\n\\n- **UNeXt** (Valanarasu et al., 2022), **EGE-UNet** (Ruan et al., 2023), and **Rolling-UNet** (Liu et al., 2024) in biomedical imaging. \\n\\nAll these lightweight architectures made headlines in the field by making possible vision/language/imaging tasks with significantly less resources. In other words, \\u201cless is more\\u201d is the new name of the game as this focus on efficiency is a paradigm shift in the making. 
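To make the MKDC operation in Equation 2 quoted above concrete, the following minimal PyTorch sketch implements $MKDC(x) = CS\left(\sum_{k \in K} DWCB_k(x)\right)$ with $DWCB_k(x) = ReLU6(BN(DWC_k(x)))$. The kernel lists, the number of channel-shuffle groups, and the class name are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MKDC(nn.Module):
    """Sketch of the Multi-Kernel Depth-wise Convolution: CS(sum_k ReLU6(BN(DWConv_k(x)))).

    `kernel_sizes` may hold identical kernels (e.g. (3, 3, 3)) or mixed kernels
    (e.g. (1, 3, 5)); both configurations discussed above are supported.
    """
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5), shuffle_groups: int = 4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # DWC_k: depth-wise convolution with one filter per channel (groups=channels)
                nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels, bias=False),
                nn.BatchNorm2d(channels),   # BN
                nn.ReLU6(inplace=True),     # ReLU6
            )
            for k in kernel_sizes
        )
        self.groups = shuffle_groups

    def channel_shuffle(self, x: torch.Tensor) -> torch.Tensor:
        # CS(.): interleave channels across groups so the branch outputs mix
        b, c, h, w = x.shape
        return x.view(b, self.groups, c // self.groups, h, w).transpose(1, 2).reshape(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        summed = sum(branch(x) for branch in self.branches)  # element-wise sum over k in K
        return self.channel_shuffle(summed)

if __name__ == "__main__":
    x = torch.randn(2, 16, 64, 64)
    print(MKDC(16, kernel_sizes=(3, 3, 3))(x).shape)  # same-size kernels
    print(MKDC(16, kernel_sizes=(1, 3, 5))(x).shape)  # mixed-size kernels
```

According to the rebuttal, the MKIR block combines this operation with an inverted-residual structure in the MobileNetV2 sense (point-wise expansion, depth-wise transform, point-wise projection with a skip connection); since the expansion ratio is not stated in this thread, that wrapper is left out of the sketch.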
We believe that the significance of this type of work will only increase in future years as the demand for efficient, real-time models will continue to grow. From this perspective, UltraLightUNet\\u2019s contribution is particularly timely, by addressing the need for accurate, lightweight segmentation solutions in resource-constrained environments. \\n\\nWe hope these clarifications address the reviewer\\u2019s concerns and highlight the importance and relevance of our contributions to the field. Thank you again for your thoughtful feedback and consideration.\"}", "{\"comment\": \"Although the authors did not inspired by the ConvNeXt, its many variants and its way to using depth-wise convolutions to design lightweight modules have been widely explored in both general computer vision tasks and medical image analysis tasks. Thus, it is necessary to discuss them. Moreover, incorporating one or two more layers to your module is not a novel design and does not make impact even though your modules have a new name.\"}", "{\"summary\": \"The paper introduces UltraLightUNet, a novel ultra-lightweight, multi-kernel U-shaped network designed to improve medical image segmentation. Leveraging a new Multi-kernel Inverted Residual (MKIR) block for efficient multi-scale feature extraction and a Multi-kernel Inverted Residual Attention (MKIRA) block for refined feature enhancement, UltraLightUNet achieves high segmentation accuracy with minimal parameters and computational load. With only few parameters and FLOPs, the 2D version of UltraLightUNet outperforms existing lightweight and transformer-based segmentation models across multiple medical imaging benchmarks, while the 3D variant, UltraLightUNet3D, achieves superior results on complex 3D medical segmentation tasks with even greater efficiency. These performance gains make UltraLightUNet a viable option for real-time applications in resource-constrained environments, such as point-of-care diagnostics.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper's primary strength lies in its originality, presenting a lightweight model architecture, UltraLightUNet, that effectively combines multi-kernel convolutions and attention mechanisms, a novel approach for achieving high segmentation accuracy with low computational overhead. This innovation addresses a critical need for practical, high-performance segmentation models in resource-constrained environments, adding value to the field by bridging computational efficiency and segmentation quality. In terms of quality, the paper\\u2019s methodology appears well-supported by rigorous experimental validation across 11 datasets and comparison with SOTA models, demonstrating robust performance gains and highlighting the practical implications of the architecture's low parameter count and FLOPs. Clarity is another strength; the paper methodically explains the architecture components, from the MKIR and MKIRA blocks to grouped attention mechanisms, making it accessible to readers with a background in medical image segmentation. Finally, the significance of UltraLightUNet is considerable due to its broad applicability across a range of medical imaging tasks and its potential for real-time use in settings like point-of-care diagnostics. The model\\u2019s lightweight design, paired with high accuracy, addresses critical bottlenecks in deploying AI-driven diagnostics in clinical environments, establishing the paper as a meaningful contribution to the field. 
I am glad to see the author will open-source the code to promote research in this line.\", \"weaknesses\": \"First, while the model is evaluated across various medical imaging datasets, these datasets are relatively straightforward, covering simple segmentation tasks and organs rather than more complex applications like CT/MRI tumor and lesion segmentation.\\n\\nSecond, the masks are mostly binary segmentation tasks. The multi-class segmentation is not well explored. Adding more complicated and multi-class segmentation would better demonstrate the model's capability for broader, real-world medical imaging tasks. \\n\\nThird, while the proposed blocks\\u2014MKIRA, MKIR, MKDC, GAG, and CMFA\\u2014are illustrated in the Method section, the paper lacks sufficient theoretical or conceptual motivation for why these specific block designs should enhance segmentation performance. \\n\\nLastly, the paper does not discuss the limitations of UltraLightUNet. A dedicated discussion on limitations would provide readers with a more balanced understanding of the model\\u2019s practical use and potential future directions for research.\", \"questions\": \"Typically, there is no free lunch. Are there any limitations or tradeoffs of the method? Providing those will be helpful for readers.\\n\\nHow it would work on more complicated problems (e.g., 3D tumors/lesions) and multi-class segmentation problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"New response to all Reviewers comments\", \"comment\": \"We are very grateful to all reviewers for engaging with us in a meaningful conversation based on the initial reviews. Whoever, we believe that there is a lot of potential to clarify further points and, most importantly, make sure both parties (authors and reviewers) get the chance to make things right and correct any remaining misunderstandings.\\n\\nIn our responses below, we further clarify our contribution, provide a new and more intuitive angles to highlight the novelty of our approach, and hopefully provide reasons for these reviewers to revise their initial scores. Given our justification, the huge amount of new results we provide to support our claims, there is no fair way for these lowest scores to remain unchanged, particularly since our results redefine the SOTA for real-time 2D/3D image segmentation with limited resources. \\n\\nOur individual responses below address the very core of all the issues raised in previous iteration. We remain committed to provide any further clarifications these Reviewers may find necessary to better asses the contribution of our paper. \\n\\nThank you.\"}", "{\"title\": \"Response to the comments of Reviewer jeKK: Parameter Efficiency\", \"comment\": \"### **Q3. How is it possible for your method to just have 27000 parameters, while last year paper with the same architecture has at-least 3M parameters (its base version).**\\n\\nThe reviewer questions how **UltraLightUNet** achieves such a low parameter count (27,000) compared to EMCAD and other architectures. \\n\\n**Response:** \\n**UltraLightUNet\\u2019s parameter efficiency** is achieved due to its new architecture, which is entirely designed with utmost efficiency in mind. More precisely:\\n\\n1. **Encoder Efficiency**: \\n We have designed our encoder from scratch using the new **MKIR block**. 
The MKIR block uses **multi-kernel depth-wise convolutions**, which are inherently lightweight, yet very effective at capturing important features. By avoiding pre-trained encoders, **UltraLightUNet** reduces its parameter count significantly, hence our results in Tables 1, 2, 3, 11, 12, and Figs. 1, 3 in the main paper.\\n\\n2. **Decoder Simplicity**: \\n Unlike EMCAD, which uses the heavier **ECUB block** in addition to MSCAB and LGAG, our **UltraLightUNet** architecture leverages only the **MKIRA** and **GAG blocks** for lightweight attention-based refinement.\\n\\n3. **Unified End-to-End Lightweight Design Philosophy**: \\n Unlike EMCAD, which optimizes only the decoder, **UltraLightUNet** achieves **end-to-end efficiency** by optimizing both the encoder and decoder for parameter efficiency, thus resulting in **27,000 parameters (2D)** compared to EMCAD\\u2019s **3.92M parameters**.\"}", "{\"summary\": \"The author proposed a 2D and 3D ultra-lightweight, multi-kernel U-shaped network for medical image segmentation, termed a UltraLightUNet. It consists of an Multi-kernel Inverted Residual (MKIR) block and an Multi-kernel Inverted Residual Attention (MKIRA) block. MKIR was proposed to efficiently process images through multiple kernels while capturing complex spatial relationships, and MKIRA block refines and emphasizes image salient features via new sophisticated convolutional multi-focal attention mechanisms. This UltraLightUNet outperformed other methods with lower complexity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The proposed network has a low number of parameters and low computational complexity than other widely used baselines and achieved promising segmentation accuracy.\\n2. Methods and results are thoughtfully described.\", \"weaknesses\": \"1. The overall novelty is low. The author mainly proposed two modules, CMFA and MKDC modules. MKDC just applied several depth-wise convolutional layers to extract features from different channels. This idea has already proposed in the Scale-Aware Modulation Meet Transformer [1]. MKDC module employed average and max pooling, and convolution layers, which are the channel-wise and spatial-wise attention. Many various attention-based modules have been proposed between 2018 and 2020 (like the CBAM [2]). The overall network is similar with [3] [4].\\n2. The experimental results are limited. First, authors reported FLOPs and Params to demonstrate that the network has a lower computational complexity than other baselines. It is achieved by mainly replacing convolutional layers with depth-wise convolutional layers. However, it will take longer time to train networks which employ depth-wise convolutional layers compared with those with standard convolution layers. Thus, training and test time are needed to be reported. Second, the comparison in Synapse, MSD prostate, and FETA is insufficient. Synapse is a popular benchmark, but only a few baseline methods were reported. Additionally, the performance reported for these baseline methods in this paper is much lower than the performance in the original paper. For example, Swin Unet reported 79.13 in their paper [5], but only 77.58 was reported for it in this manuscript. If authors run experiments for baselines on their own, please make sure the baseline networks have been fully optimized. Only seven 3D methods proposed before 2022 were compared in MSD prostate and FETA. 
However, 3D segmentation networks between 2023 and 2024 were not compared, and these networks usually achieve more superior performance with lower computational complexity.\\n3. The motivation is unclear. The author mentioned that several 3D segmentation networks, including 3D U-Net, SwinUNETR, 3D UX-Net, UNETR, nnU-net and nnFormer, have high computational demands, so they proposed their lightweight network. However, these baseline networks are proposed in 2021 and 2022, and in recent years many lightweight 3D segmentation networks have been proposed and this challenge has been tackled, such as [6][7][8][9][10]. However, authors didn't discuss and explore these lightweight networks. Additionally, modules in this manuscript were proposed based on the idea of ConvNeXt, but its lightweight version was not discussed. Some other depth-wise convolution-based lightweight networks were also not discussed [11].\\n4. The theoretical development is not solid. Authors reviewed Vision Transformers in the related work, but it is not related to the work in this manuscript. The network in the manuscript employs a convolutional neural network architecture and attention mechanisms. Additionally, insufficient theoretical development in the Method section, and it is unclear how this design improved the segmentation performance. \\n5. No model interpretability. The interpretability of the CMFA and MKDC modules were not discussed, such as saliency maps. It is important to understand the mechanisms of attention-based modules.\\n6. The overall impact is low. The overall improvement in the segmentation performance is low. For example, its best DSC score in the Polyp dataset was 93.48, but other baselines achieved 93.29 and 93.18. Its best DSC score in the Synapse dataset was 78.68, but other baselines achieved 78.40.\", \"minors\": \"(1) lacking qualitative results for 3D segmentation results in Synapse, MSD Prostate, and FETA.\\n(2) Only overall DSC scores were reported for multi-class segmentation tasks, but organ-specific DSC scores were not reported.\\n(3) lack p-values and standard deviations\\n\\n[1] Lin, W., Wu, Z., Chen, J., Huang, J., & Jin, L. (2023). Scale-aware modulation meet transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6015-6026).\\n[2] Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV) (pp. 3-19).\\n[3] Rahman, M. M., & Marculescu, R. (2023). Medical image segmentation via cascaded attention decoding. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 6222-6231).\\n[4] Rahman, M. M., Munir, M., & Marculescu, R. (2024). Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11769-11779).\\n[5] Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., & Wang, M. (2022, October). Swin-unet: Unet-like pure transformer for medical image segmentation. In European conference on computer vision (pp. 205-218). Cham: Springer Nature Switzerland.\\n[6] Pang, Y., Liang, J., Huang, T., Chen, H., Li, Y., Li, D., ... & Wang, Q. (2023). Slim UNETR: Scale hybrid transformers to efficient 3D medical image segmentation under limited computational resources. IEEE Transactions on Medical Imaging\\n[7] Tang, F., Ding, J., Quan, Q., Wang, L., Ning, C., & Zhou, S. K. (2024, May). 
Cmunext: An efficient medical image segmentation network based on large kernel and skip fusion. In 2024 IEEE International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE.\\n[8] Yang, S., Zhang, X., Chen, Y., Jiang, Y., Feng, Q., Pu, L., & Sun, F. (2023). UcUNet: A lightweight and precise medical image segmentation network based on efficient large kernel U-shaped convolutional module design. Knowledge-Based Systems, 278, 110868.\\n[9] He, Y., Gao, Z., Li, Y., & Wang, Z. (2024). A lightweight multi-modality medical image semantic segmentation network base on the novel UNeXt and Wave-MLP. Computerized Medical Imaging and Graphics, 111, 102311.\\n[10] Lin, X., Yu, L., Cheng, K. T., & Yan, Z. (2023). BATFormer: Towards boundary-aware lightweight transformer for efficient medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 27(7), 3501-3512.\\n[11] Yin, Y., Han, Z., Jian, M., Wang, G. G., Chen, L., & Wang, R. (2023). AMSUnet: A neural network using atrous multi-scale convolution for medical image segmentation. Computers in Biology and Medicine, 162, 107120.\", \"questions\": \"1. Provide more detailed and necessary experimental details, including recent works in lightweight network design and other more advanced baselines, training and test time in experiments.\\n2. Demonstrate more solid theoretical development\\n3. Discuss model interpretability\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to the comments of Reviewer zJLj: Model interpretability\", \"comment\": \"### **Q5. No model interpretability. The interpretability of the CMFA and MKDC modules were not discussed, such as saliency maps. It is important to understand the mechanisms of attention-based modules.**\\n\\nWhile our primary focus was on achieving high performance with lightweight efficiency, we agree that understanding these mechanisms is critical. The **CMFA module** enhances feature refinement by applying max and average pooling across spatial and channel dimensions, selectively focusing on critical features, while the **MKDC module** employs multi-kernel depth-wise convolutions to balance local and global context extraction. These mechanisms are validated through improved segmentation accuracy in our Ablation Study (Tables 4 and 5 in the main paper).\\n\\nTo address this reviewer\\u2019s concern, we have included **activation heatmaps** to visualize how these modules focus on relevant features in **Fig. 4** in the Appendix of our revised draft. In Fig. S1, we plot the average activation heatmaps for all channels in high-resolution layers, focusing on Encoder Stage 1 (ES1) and Decoder Stage 1 (DS1). In ES1, the MKIR block attends to diverse regions, including the polyp region, thus capturing broad spatial features as expected in the initial stages of the encoder. In contrast, the CMFA layer in DS1 sharpens attention, thus focusing more locally on the polyp region. Subsequently, the MKDC within the MKIR block of DS1 further refines these attended features, thus concentrating exclusively on the polyp region (indicated by deep red areas). 
This progression highlights the effectiveness of our architecture in capturing and refining features, thus resulting in a segmentation map that strongly overlaps with the ground truth.\"}", "{\"title\": \"Response to the comments of Reviewer SWMn: Difference in UltraLightUNet- T, S, and L\", \"comment\": \"### **Q.4 What are the differences among UltraLightUNet- T, S, and L in terms of model? Layer difference? Please explain in details in the manuscript.**\\n\\nWe thank the reviewer for their insightful comment regarding the differences among the various versions of UltraLightUNet. In our manuscript, the different versions\\u2014UltraLightUNet-T (Tiny), UltraLightUNet-S (Small), UltraLightUNet (Base), UltraLightUNet-M (Medium), and UltraLightUNet-L (Large)\\u2014are distinguished by the number of channels used in the five stages of the U-shaped architecture. Specifically: \\n\\n**UltraLightUNet-T (Tiny):** Channels = (4, 8, 16, 24, 32) \\n\\n**UltraLightUNet-S (Small):** Channels = (8, 16, 32, 48, 80) \\n\\n**UltraLightUNet (Base):** Channels = (16, 32, 64, 96, 160) \\n\\n**UltraLightUNet-M (Medium):** Channels = (32, 64, 128, 192, 320) \\n\\n**UltraLightUNet-L (Large):** Channels = (64, 128, 256, 384, 512) \\n\\nThese variations in the number of channels directly scale the model\\u2019s capacity, enabling the architecture to adapt to different resource constraints and performance requirements. The underlying layer-wise structure and module types remain consistent across all versions, ensuring architectural uniformity while allowing flexibility in computational complexity and performance. \\n\\nTo demonstrate the scalability of our design, we reported an ablation study in **Tables 9 and 10 (Section A.7)** of the initial submission, which highlights the trade-offs between computational cost and performance as the number of channels varies. We will further clarify these details in the revised manuscript to address this comment thoroughly.\"}", "{\"title\": \"Response to the comments of Reviewer zJLj: Experimental results are limited (Part1)\", \"comment\": \"### **Q2.1. The experimental results are limited. First, authors reported FLOPs and Params to demonstrate that the network has a lower computational complexity than other baselines. However, it will take longer time to train networks which employ depth-wise convolutional layers compared with those with standard convolution layers. Thus, training and test time are needed to be reported.**\\n\\n---\\n\\n**Response:**\\n\\nWe appreciate the reviewer\\u2019s observation regarding the potential impact of depth-wise convolutions on training and test times.\\n\\n**However**, our design prioritizes **extreme lightweight efficiency** for resource-constrained environments by leveraging depth-wise convolutions in the **MKIR** and **MKIRA** blocks. Per reviewer\\u2019s request, we present the training and test times for UltraLightUNet (in **Table R3**) which is comparable to other lightweight baseline methods when evaluated on a NVIDIA A6000 GPU of 48GB memory. Additionally, the extremely small #FLOPs and #Params (e.g., **0.316M Params and 0.314 GFLOPs** for our 2D model) ensure practical usability in real-time applications.\\n\\n**Table R3**: Computational complexity (#Params, #FLOPs, Training Time (Sec), Inference Time (Mili Sec)) comparisons of different architectures including our UltraLightUNet. 
We train each model for 200 epochs using a batch size of 16 with a total 1000 sample images of resolution 256\\u00d7256 to get the **total training time (sec.)** on a NVIDIA RTX A6000 GPU. While we run the inference on 500 samples on the same GPU with a batch size of 1 and report the **average inference time** (ms) per image. We report the average DICE score of six binary segmentation datasets here for reasonable comparison. \\n\\n| Architecture | #Params (M) $\\\\downarrow$ | #FLOPs (G) $\\\\downarrow$ | Training Time (sec.) $\\\\downarrow$ | Inference Time (ms) $\\\\downarrow$ | Avg DICE (%) $\\\\uparrow$ |\\n|-------|------|------|----------|---------|---------|\\n| UNet | 34.53 | 65.53 | 1732.11 | 0.0084 | 87.28 |\\n| AttUNet | 34.88 | 66.64 | 1988.31 | 0.0092 | 87.86 |\\n| UNet++ | 9.16 | 34.65 | 794.38 | 0.0073 | 88.16 |\\n| PraNet | 32.55 | 6.93 | 685.37 | 0.0156 | 87.79 |\\n| DeepLabv3+ | 39.76 | 14.92 | 695.82 | 0.0078 | 89.15 |\\n| UACANet | 69.16 | 31.51 | 850.53 | 0.0231 | 87.81 |\\n| TransUNet | 105.32 | 38.52 | 1523.68 | 0.0153 | 89.59 |\\n| SwinUNet | 27.17 | 6.20 | 828.99 | 0.0124 | 88.84 |\\n| DeformableLKA | 102.76 | 26.03 | 5450.26 | 0.0663 | **89.92** |\\n| MedT | 1.57 | 1.95 | 7138.91 | 0.1191 | 82.42 |\\n| Rolling-UNet-S | 1.78 | 2.10 | 635.69 | 0.0175 | 87.36 |\\n| CMUNeXt | 0.418 | 1.09 | 450.86 | **0.0057** | 88.25 |\\n| UNeXt | 1.47 | 0.57 | **216.26** | $\\\\underline{0.0058}$ | 86.06 |\\n| EGE-UNet | 0.054 | 0.072 | 360.69 | 0.0099 | 83.82 |\\n| UltraLight_VM_UNet | $\\\\underline{0.050}$ | **0.060** | 318.04 | 0.0102 | 85.53 |\\n| UltraLightUNet-T (**Ours**) | **0.027** | $\\\\underline{0.062}$ | $\\\\underline{312.08}$ | 0.0071 | 87.87 |\\n| UltraLightUNet-S (**Ours**) | 0.093 | 0.125 | 348.78 | 0.0072 | 89.10 |\\n| UltraLightUNet (**Ours**) | 0.316 | 0.314 | 474.02 | 0.0072 | $\\\\underline{89.75}$ |\\n\\n**Table R3** highlights the trade-offs between training/inference time and efficiency. UltraLightUNet variants achieve competitive or superior DICE scores with significantly fewer #Params and #FLOPs compared to all other architectures. For example, UltraLightUNet (474.02 sec training, 0.0072 ms inference) achieves 89.75% DICE with only 0.316M params and 0.314G FLOPs, thus outperforming heavier models like DeepLabv3+ (89.15% DICE, 39.76M params, 14.92G FLOPs) and TransUNet (89.59% DICE, 105.32M params, 38.52G FLOPs). \\n\\nWhile depth-wise convolutions slightly increase the training time due to reduced parallelism, they enable extreme computational efficiency (#Params and #FLOPs), thus making UltraLightUNet ideal for resource-constrained environments. \\n\\nFinally, the inference times remain competitive with lightweight baselines like CMUNeXt (0.0057 ms, 88.25% DICE), while offering higher DICE score. 
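For readers who want to sanity-check complexity numbers of the kind reported in Table R3, the snippet below shows one standard way to count trainable parameters and time single-image inference in PyTorch. The input resolution, warm-up count, and number of runs are illustrative choices rather than the exact protocol behind the table, and FLOPs are usually obtained separately with a profiler such as fvcore or ptflops.

```python
import time
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Trainable parameter count (reported as #Params, in millions, in Table R3)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def average_inference_time_ms(model: torch.nn.Module,
                              input_size=(1, 3, 256, 256),
                              runs: int = 500,
                              warmup: int = 20) -> float:
    """Average forward-pass time per image in milliseconds on the current device."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):          # warm-up iterations are excluded from the timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # finish queued GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1e3

# Example (hypothetical model object):
# model = UltraLightUNet(...)
# print(count_parameters(model) / 1e6, "M params,", average_inference_time_ms(model), "ms")
```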
We will add this Table in the Appendix of our revised draft.\"}", "{\"title\": \"Official Comment by Authors: Part3\", \"comment\": \"Table R7 provides a comparative analysis of various **multi-kernel strategies** to show the effectiveness of **same-size kernels** ($k_1 = k_2$, e.g., $[3 \\\\times 3, 3 \\\\times 3, 3 \\\\times 3]$, $[5 \\\\times 5, 5 \\\\times 5, 5 \\\\times 5]$) and **multi-scale kernels** ($k_1 \\\\neq k_2$, e.g., $[1 \\\\times 1, 3 \\\\times 3, 5 \\\\times 5]$) on segmenting objects of varying sizes across three datasets: **MSD Task09_Spleen** (large object), **MSD Task06_Lung** (small object), and **BUSI** (mixed objects).\\n\\n\\n**Table R7:** Comparing the effect of different multi-kernel tricks (same $k_1=k_2$) and multi-scale ($k_1 \\\\neq k_2$) convolutions on small, large, and mixed objects segmentation on MSD Task09_Spleen (large object), MSD Task06_Lung (small object), and BUSI (mixed objects) datasets. DICE scores (%) are reported with our UltraLightUNet3D for MSD Task09_Spleen (large object), MSD Task06_Lung (small object) datasets, while with our UltraLightUNet for BUSI dataset. We report the \\\\#Params and \\\\#FLOPs of our UltraLightUNet3D architecture with an input resolution of $96 \\\\times 96 \\\\times 96$.\\n\\n| **Multi-kernel tricks** | **\\\\#Params (M)** $\\\\downarrow$ | **\\\\#FLOPs (G)** $\\\\downarrow$ | **Spleen (large object)** $\\\\uparrow$ | **Lung cancer (small object)** $\\\\uparrow$ | **BUSI (mixed objects)** $\\\\uparrow$ |\\n|-----------------------------|------------------|-----------------|---------------------------|---------------------------------|--------------------------|\\n| $1 \\\\times 1, 1 \\\\times 1, 1 \\\\times 1$ | 0.279 | 1.01 | 93.65 | 60.25 | 72.13 |\\n| $3 \\\\times 3, 3 \\\\times 3$ | 0.338 | 1.58 | 95.86 | 70.26 | 76.83 |\\n| $3 \\\\times 3, 3 \\\\times 3, 3 \\\\times 3$ | 0.369 | 1.88 | 96.03 | **71.09** | 76.86 |\\n| $1 \\\\times 1, 3 \\\\times 3, 5 \\\\times 5$ | 0.453 | 2.68 | 95.99 | $\\\\underline{70.32}$ | **78.04** |\\n| $5 \\\\times 5, 5 \\\\times 5$ | 0.564 | 3.76 | $\\\\underline{96.20}$ | 69.98 | $\\\\underline{77.88}$ |\\n| $5 \\\\times 5, 5 \\\\times 5, 5 \\\\times 5$ | 0.709 | 5.16 | **96.29** | 70.24 | 77.80 |\\n---\\n\\n\\n**Performance on Large Objects (Spleen):** \\n- Kernels with larger sizes (e.g., $5 \\\\times 5,5 \\\\times 5,5 \\\\times 5$) achieve the highest DICE score (**96.29**), as expected for large Spleen segmentation; this supports our claims above. \\n\\n- Multi-scale kernels (**$1 \\\\times 1,3 \\\\times 3,5 \\\\times 5$**) perform slightly lower (**95.99**), but still better than smaller kernels, showing that adaptability is critical for large regions. \\n\\n**Performance on Small Objects (Lung Cancer):** \\n\\n- Small kernels (**$3 \\\\times 3,3 \\\\times 3,3 \\\\times 3$**) achieve the best DICE score (**71.09**) with lower #Params and #FLOPs, thus confirming their suitability for fine detail segmentation. \\n\\n- Multi-scale kernels (**$1 \\\\times 1,3 \\\\times 3,5 \\\\times 5$**) also perform well (**70.32**) but are slightly less effective compared to same-size small kernels for small object segmentation. \\n\\n**Performance on Mixed Objects (BUSI):** \\n\\n- Multi-scale kernels (1x1,3x3,5x5) achieve the highest DICE score (**78.04**), highlighting their ability to balance segmentation for both small and large objects. \\n\\n- Large kernels alone underperform (**77.80**) in mixed-object scenarios, thus showing limitations in capturing small object details. 
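To make the kernel configurations compared in Table R7 concrete, the sketch below shows one way a multi-kernel depth-wise convolution can be parameterized so that same-size lists (e.g., [3, 3, 3]) and multi-scale lists (e.g., [1, 3, 5]) are simply different settings of the same block. This is an illustrative reconstruction rather than the released MKIR/MKDC code; in particular, summing the branch outputs is an assumption, and the class name is hypothetical.

```python
import torch
import torch.nn as nn

class MultiKernelDWConv(nn.Module):
    """One depth-wise branch per entry in `kernel_sizes`; entries may repeat
    (k1 = k2) or differ (k1 != k2)."""
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
             for k in kernel_sizes]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumption: branch outputs are summed; concatenation or learned
        # weighting would be equally plausible aggregation choices.
        return sum(branch(x) for branch in self.branches)

x = torch.randn(1, 16, 64, 64)
same_size = MultiKernelDWConv(16, kernel_sizes=(3, 3, 3))    # a same-size configuration from Table R7
multi_scale = MultiKernelDWConv(16, kernel_sizes=(1, 3, 5))  # the multi-scale configuration from Table R7
print(same_size(x).shape, multi_scale(x).shape)              # both keep (1, 16, 64, 64)
```

Switching between these configurations changes only the kernel list, which is what allows the same block to be tuned to predominantly small, predominantly large, or mixed-size targets.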
\\n\\nTo sum up, Table R7 demonstrates the *adaptability* of **UltraLightUNet\\u2019s multi-kernel strategy** in addressing diverse object segmentation challenges. These results highlight that same-size kernels excel in specific scenarios (small or large objects), while **multi-scale kernels** are essential for balancing segmentation in mixed-object datasets. This adaptability distinguishes UltraLightUNet's **MKIR block** from EMCAD's fixed **multi-scale MSCB**, thus underscoring its broader applicability and effectiveness in medical imaging tasks.\"}", "{\"title\": \"Official Comment by Authors: Part5\", \"comment\": \"### **C4. Thanks for authors' response. I will not change my score due to the low novelty, low impact to the medical image segmentation, and too lower baseline results.**\\n\\n**Response:** \\n\\nWe respectfully refer the reviewer to our earlier responses; below, we provide additional clarification regarding the novelty, impact, and baseline results of UltraLightUNet: \\n\\n**1. Novelty:** UltraLightUNet introduces a **new end-to-end design for both 2D and 3D architectures**, built entirely from scratch to achieve extreme efficiency. Our key contributions include: \\n\\n- **Multi-Kernel Inverted Residual (MKIR)** and **Multi-Kernel Inverted Residual Attention (MKIRA)** blocks, which employ a **novel multi-kernel approach** that supports both $k_1 = k_2$ and $k_1 \\neq k_2$, for $k_1, k_2 \\in Kernels$. These are conceptually distinct from existing multi-scale approaches like EMCAD\\u2019s MSCB and MSCAM. \\n\\n- **New 3D extensions** for volumetric medical imaging tasks, which represent a significant advancement, as existing methods like EMCAD and ConvNeXt lack such adaptations. \\n\\nAdditionally, we note that **3D UX-Net** (Lee et al., 2023), which introduced only a **3D version of ConvNeXt\\u2019s block** in the encoder and reused a computationally heavy decoder, was actually accepted at ICLR 2023! In contrast, UltraLightUNet delivers **conceptually new 2D modules** and their **corresponding lightweight 3D extensions** ($185 \\times$ lower #FLOPs and $77 \\times$ fewer #Params compared to 3D UX-Net), thus demonstrating a more comprehensive and impactful contribution. So, if 3D UX-Net was deemed relevant for ICLR 2023 acceptance, we believe that UltraLightUNet, which provides substantially better results and broader applicability, is at least as strong a candidate. \\n\\n**2. Impact:** UltraLightUNet addresses a critical unmet need in real-time medical imaging for resource-constrained environments, achieving **state-of-the-art accuracy with drastically lower computational costs**. This aligns with the growing trend of ultra-lightweight designs in computer vision and biomedical imaging, making our work particularly relevant to real-world applications. Simply put, our results show that UltraLightUNet is the new SOTA in this problem space. \\n\\n**3. Baseline Results:** We updated our comparisons using values from **original sources** (e.g., SwinUNet). UltraLightUNet achieves **competitive accuracy**, while using 7.2 $\\times$ fewer parameters, thus underscoring its efficiency and practical applicability. Again, our challenge is not the accuracy of previous approaches, but rather the complexity of resources involved to achieve these accuracies. From this perspective, our UltraLightUNet architecture is top of the class. 
\\n\\nGiven the novelty, rigorous experimental validation, and demonstrated practical relevance, we believe UltraLightUNet makes a meaningful contribution and is a deserving contribution to ICLR 2025. Thank you for your feedback and consideration.\"}", "{\"comment\": \"Thanks for authors' response. I will not change my score due to the low novelty, low impact to the medical image segmentation, and too lower baseline results.\"}", "{\"title\": \"Response to the comments of Reviewer jeKK: Similarity to EMCAD (CVPR 2024) paper (Part2)\", \"comment\": [\"### **Q2. The proposed method (Figure 2) is clearly the same as last year CVPR paper, with approximately no change in any of the modules both in encoder and decoder of the network. Therefore, I see no novelty and contribution in the paper submitted.**\", \"**Below, we elaborate on all the differences in the Table R1 above:**\", \"1. **Motivation**:\", \"**UltraLightUNet** is motivated by the need for a **full (i.e., both encoder and decoder) architecture** that is optimized for **resource-constrained environments**. Consequently, our approach prioritizes the architecture\\u2019s computational **extreme efficiency** without sacrificing the segmentation accuracy for both **2D and 3D segmentation tasks**.\", \"In contrast, **EMCAD** focuses solely on optimizing the **decoder** while targeting multi-scale feature refinement for extreme versatility. As such, EMCAD relies on existing **pre-trained encoders** (e.g., PVT\\_V2\\_B2, PVT\\_V2\\_B0), which inherently increase the computational complexity depending on the complexity of the encoder. Finally, EMCAD targets primarily **2D segmentation tasks**.\", \"2. **Architectural Differences**:\", \"To optimize both the encoder and decoder for achieving extreme efficiency, **UltraLightUNet** innovates through **Multi-Kernel Inverted Residual (MKIR)** and **Multi-Kernel Inverted Residual Attention (MKIRA)** blocks, which leverage the multi-kernel design, thus allowing **kernels $k_1$ and $k_2$ to be the same $(k_1 = k_2)$ or different $(k_1 \\neq k_2)$** based on application-specific needs. This design captures diverse spatial features with minimal computational cost.\", \"In contrast, **EMCAD** focuses only on optimizing the decoder and uses the **Multi-Scale Convolutional Attention Module (MSCAM)**, where **kernels $k_1$ and $k_2$ must be different $(k_1 \\neq k_2)$** to represent multiple scales. This conceptual distinction allows **UltraLightUNet** to adapt kernel sizes based on application-specific needs (e.g., large kernels for large regions, small kernels for small regions, or mixed for both), whereas EMCAD is limited to mixed kernels only.\", \"**Encoder Design**:\", \"**UltraLightUNet** employs a **multi-kernel inverted residual** structure, focusing on an **ultra-lightweight convolutional approach** without relying on heavy attention mechanisms or transformers.\", \"In contrast, **EMCAD** uses **existing and pre-trained Transformer encoders**, thus making its efficiency dependent on the encoder's complexity.\", \"**Decoder Design**:\", \"**UltraLightUNet** uses only the new **Multi-Kernel Inverted Residual Attention (MKIRA)** block, a local attention-based multi-kernel module we defined to selectively refine features. 
We note that **UltraLightUNet** reduces the computational complexity by using simple bilinear upsampling (i.e., avoiding any convolutional upsampling block).\", \"In contrast, **EMCAD** uses the **Multi-scale Convolutional Attention Module (MSCAM)** and **Efficient Convolutional Upsampling Block (ECUB)**. The use of **ECUB** increases the computational complexity significantly.\", \"3. **Target Use Cases**:\", \"**UltraLightUNet**: A unique architecture supports both **2D and 3D segmentation tasks** while prioritizing extreme efficiency, **thus making it ideal for real-time and low-resource environments (e.g., point-of-care diagnostics)** where computational resources are limited and resource efficiency is critical.\", \"**EMCAD**: Primarily focuses on **2D tasks** and does not explicitly address 3D segmentation. This is suitable for applications where complex, hierarchical feature extraction is needed, by leveraging the power of vision transformers.\", \"We note that many highly cited methods in computer vision achieve novelty by introducing **modular innovations**, such as the Inverted Residual Block (IRB) in MobileNetv2 (Sandler et al., 2018), the SE block in Squeeze-and-Excitation Networks (Hu et al., 2018), and the Residual Block in ResNet (He et al., 2016). Other works combine these modules into new architectures, such as the use of SE blocks in **EfficientNet** (Tan et al., 2019) or Residual Block in almost every architecture.\", \"Similarly, **UltraLightUNet** uses standard attention mechanisms, such as channel attention and spatial attention, however, within a **new end-to-end U-shaped architecture** designed from scratch for extreme efficiency. This full encoder-decoder architecture targeting both **2D and 3D segmentation tasks** is one of the core contribution of our work.\"]}", "{\"title\": \"Response to the comments of Reviewer zJLj: Motivation (Part2)\", \"comment\": \"### **Q3.2. Additionally, modules in this manuscript were proposed based on the idea of ConvNeXt, but its lightweight version was not discussed. Some other depth-wise convolution-based lightweight networks were also not discussed [11].**\\n\\nWe thank the reviewer for their feedback regarding the relationship between our proposed modules and ConvNeXt, as well as the discussion of other depth-wise convolution-based lightweight networks. Below, we address these concerns:\\n\\n**1. Relationship with ConvNeXt:** UltraLightUNet was **not directly inspired by ConvNeXt**. While both employ depth-wise convolutions, **UltraLightUNet introduces multi-kernel depth-wise convolutions (MKDC)**, supporting both same-sized (\\\\(k_1 = k_2\\\\)) and different-sized (\\\\(k_1 \\\\neq k_2\\\\)) kernels for adaptable context extraction. Unlike ConvNeXt which relies on large kernels (e.g., 7x7) for global contexts, MKDC provides flexibility to handle **both local and global contexts effectively**.\\n\\nFurthermore, **UltraLightUNet integrates MKDC into new lightweight modules**, such as **MKIR** and **MKIRA**, tailored for efficient 2D and 3D medical segmentation tasks. While ConvNeXt targets general-purpose computer vision tasks, our work addresses the domain-specific needs of medical imaging in resource-constrained environments.\\n\\n---\\n\\n**2. Discussion of Other Depth-Wise Convolution-Based Lightweight Networks:** The networks mentioned in [11] primarily focus on 2D segmentation tasks. 
In contrast, **UltraLightUNet extends the depth-wise convolution paradigm to 3D tasks** with novel volumetric modules like **3D MKIR and 3D MKIRA**, enabling lightweight and scalable segmentation in volumetric data. This focus on unified 2D-3D design distinguishes our work from generic depth-wise convolution-based architectures.\\n\\nIn the revised manuscript, we will briefly discuss such networks and highlight the unique contributions of UltraLightUNet in adapting depth-wise convolutions for 3D medical imaging.\"}", "{\"title\": \"Response to the comments of Reviewer jeKK: Comparisons to State-of-the-Art Models\", \"comment\": \"### **Q4: The proposed method is not compared to other SOTA networks (such as (Azad et al., 2024)) which perform much better over some of the datasets used in the paper (such as ISIC)**\\n\\nThe reviewer mentions that **UltraLightUNet** lacks comparisons to certain SOTA models, such as **Deformable Large Kernel Attention**.\\n\\n**Response:**\\n- **Existing Comparisons**: \\n We do already compare **UltraLightUNet** with multiple SOTA models, including **DeepLabv3+**, **TransUNet**, **SwinUNet**, and lightweight architectures like **UNeXt**, **CMUNeXt**, **EGE-UNet**, **Ultra_Light_VM_UNet**. Without exception, these comparisons demonstrate that **UltraLightUNet** achieves superior or competitive segmentation accuracy with significantly fewer parameters and lower computational costs.\\n\\n- **Newly Added Comparisons**: \\n Per reviewer suggestion, in the revised manuscript, we will include additional comparisons with recent SOTA methods, such as **Deformable Large Kernel Attention** (Azad et al., 2024). Below, we provide a detailed comparison of skin lesion segmentation on the ISIC 2018 and BUSI datasets.\\n\\n**Table R2**: Comparison of **UltraLightUNet** with Deformable Large Kernel Attention (DeformableLKA). We present the DICE scores (%) on our data-splits with an input resolution of 256 $\\\\times$ 256, while optimizing the hyper-parameters of **DeformableLKA**.\\n\\n| **Architectures** | **Pretrained Encoder** | **#Params** | **#FLOPs** | **ISIC 2018** | **BUSI** |\\n|---------------------------------------------------------------|---------------------------------------------------------------|-------------|-------------|-------------------------------|----------|\\n| DeformableLKA | Yes | 102.76M | 26.03G | 90.34 | 79.01 |\\n| DeformableLKA | No | 102.76M | 26.03G | 88.17 | 74.62 |\\n| UltraLightUNet-T (Ours) | No | 0.027M | 0.026G | 88.19 | 75.64 |\\n| UltraLightUNet (Ours) | No | 0.316M | 0.314G | 88.74 | 78.04 |\\n| UltraLightUNet-M (Ours) | No | 1.15M | 0.951G | 89.09 | 78.27 |\\n\\nAgain, Table R2 shows that UltraLightUNet provides very competitive results using a fraction of #Params and #FLOPS compared to the DeformableLKA approach. Specifically, DeformableLKA, with a pretrained encoder, achieves the highest DICE score on ISIC 2018 (90.34%) and BUSI (79.01%) but requires 3,805x more #Params and 1,001x more #FLOPs than UltraLightUNet-T (0.027M parameters and 0.026G FLOPs). Without pretraining, DeformableLKA\\u2019s DICE scores drop by 2.17% on ISIC 2018 and 4.39% on BUSI, thus falling below the DICE scores of our UltraLightUNet-T (88.19% and 75.64%). 
\\n\\nIn contrast, UltraLightUNet-M which does *not* rely on pretraining, delivers very competitive DICE scores: 89.09% (ISIC 2018) and 78.27% (BUSI), thus narrowing the gap to just 1.25% on ISIC 2018 and 0.74% on BUSI compared to pretrained DeformableLKA.\"}", "{\"title\": \"Official Comment by Authors: Part3\", \"comment\": \"### **C2. \\\"The reported results for Swin Unet and TransUNet in our manuscript were taken directly from the CASCADE paper [3], ensuring consistency across all the reported baselines.\\\" It is not fair to copy results of baselines from other papers, and it is better to check the original paper to utilize the hyper-parameters in their original papers. It is not a good way to train your model to the optimal one while not use optimal hyper-parameters to train baseline models. I am very confused why you copied results from the CASCADE paper. This CASCADE paper is not a benchmark paper, so copying results from this paper will not ensure the consistency. These baselines were not proposed in the CASCADE paper, and copying results from this paper will lead to lower segmentation performance and unfair comparison. These baselines have not reached their highest performance, and they still have large potential to reach the higher performance. Therefore, it cannot demonstrate the superiority of the UltraLightUNet.**\\n\\n**Response:** \\n\\nWe thank the reviewer for highlighting this concern and agree that reporting baseline results from original papers should be enforced. In response to the reviewer\\u2019s feedback, we have revised our manuscript to directly report the DICE score for SwinUNet from its original paper (Cao et al., 2021) for the Synapse dataset, as reflected in Table 2 (in revised submission). We note however, that this change does not impact the significance of our results at all; this is because even after reporting the results from the original paper, our UltraLightUNet-L achieves competitive performance (only 0.45% lower) while using **$7.2\\\\times$ fewer parameters**, thus highlighting the efficiency of our approach. We believe this update enhances fairness and transparency in our comparisons, but does not change anything in terms of significance of the results. This is because, our focus in this paper is not to beat the accuracy of previous approaches, but rather to maintain or get as closely as possible to the accuracy achieved by prior approaches with significantly less resources (i.e., #Params, #FLOPS). This is how our paper redefines the SOTA in 2D and 3D image segmentation.\"}", "{\"comment\": \"\\\"The reported results for Swin Unet and TransUNet in our manuscript were taken directly from the CASCADE paper [3], ensuring consistency across all the reported baselines.\\\"\\n\\nIt is not fair to copy results of baselines from other papers, and it is better to check the original paper to utilize the hyper-parameters in their original papers. It is not a good way to train your model to the optimal one while not use optimal hyper-parameters to train baseline models. I am very confused why you copied results from the CASCADE paper. This CASCADE paper is not a benchmark paper, so copying results from this paper will not ensure the consistency. These baselines were not proposed in the CASCADE paper, and copying results from this paper will lead to lower segmentation performance and unfair comparison. These baselines have not reached their highest performance, and they still have large potential to reach the higher performance. 
Therefore, it cannot demonstrate the superiority of the UltraLightUNet.\"}", "{\"title\": \"Response to authors about addressing my concerns\", \"comment\": \"I appreciate the authors for addressing my concerns and including additional experiments. Below, I outline my further observations.\\n\\nI acknowledge that the EMCAD work focuses on optimizing only the decoder, and I agree with this point. However, I note that the modules you have utilized in your decoder\\u2014and even your primary module in the encoder (MKIR)\\u2014are essentially identical to those in the EMCAD module. Let\\u2019s analyze each module in detail.\\n\\nYou claim to use the \\\"new\\\" Multi-kernel Inverted Residual (MKIR) block, yet the design of the MKIR block appears to be identical to the MSCB block in the EMCAD paper. Furthermore, your MKIRA module (CMFA + MKIR) is functionally the same as the MSCAM module from the EMCAD paper (CAB + SAB + MSCB). Your GAG module is the same as LGAG in the EMCAD paper. Your CMFA module is just the same as (CAB+SAB) in the EMCAD paper.\\n\\nWhile I agree that you have incorporated 3D experiments and presented more extensive experimental results, from an architectural standpoint, I do not see any new or \\u201cnovel\\u201d modules as claimed. While I recognize that novelty is not solely about methodology (a point often emphasized by reviewers), I personally believe that innovation is not limited to architectural design alone.\\n\\nDespite this, every detail in your paper closely mirrors EMCAD, which unfortunately does not persuade me to revise my score upward.\"}", "{\"title\": \"Official Global response by Authors: Part1\", \"comment\": [\"We sincerely thank all reviewers for their constructive feedback, which has been invaluable in improving our manuscript. We are encouraged by the recognition of **UltraLightUNet**\\u2019s contributions to lightweight, high-performance segmentation and its potential for practical applications in resource-constrained scenarios.\", \"We note that our contribution is along the lines of a growing direction in computer vision in general, and biomedical imaging in particular, of using ultralightweight models to perform real-time tasks with high precision. Remarkable examples of this trend are MobileNet and EfficientNet in computer vision, TinyBERT in language processing, UNeXt and EGE-UNet in biomedical imaging. We beleive that the significance of this type of work will only increase in future years.\", \"Below, we provide a summary of the major revisions made in response to the reviewers\\u2019 comments and highlight our specific changes. We marked these changes in **blue** in the revised version of our draft.\", \"## Key Modifications and Contributions\", \"### 1. 
New Experiments on Complex Datasets\", \"To address the reviewers\\u2019 concerns regarding the evaluation of **UltraLightUNet** on more complex segmentation tasks (**Table R5**), we conducted additional experiments on two challenging datasets:\", \"**MSD Task01_BrainTumour** (multi-level tumor segmentation):\", \"Results show that **UltraLightUNet3D-M** achieves the **best DICE scores** on Tumor Core (TC) and Whole Tumor (WT) segmentation, while having remarkably lower #Params and #FLOPs compared to heavyweight models (**SwinUNETR**, **3D UX-Net**).\", \"The base model, **UltraLightUNet3D**, also performs competitively, outperforming lightweight models like **SlimUNETR** with the lowest computational cost.\", \"**MSD Task06_Lung** (binary CT lesion segmentation):\", \"Results demonstrate **UltraLightUNet**\\u2019s ability to handle complex binary cancer segmentation tasks efficiently, with **UltraLightUNet3D-M** achieving the **best DICE score** compared to other baselines, including lightweight (e.g., **SlimUNETR**) and heavyweight (**3D UX-Net**, **SwinUNETR**).\", \"### 2. Theoretical Motivation, Methodological Novelty, and Interpretability\", \"We elaborated on the theoretical underpinnings of **UltraLightUNet**\\u2019s modules:\", \"Our **multi-kernel trick** supports both **$k_1 = k_2$** (same-size kernels) and **$k_1 \\u2260 k_2$** (different-size kernels) for **$k_1$, $k_2$** $\\\\in Kernels$ versus conventional multi-scale (only **$k_1 \\u2260 k_2$**) designs, thus allowing adaptable context extraction.\", \"This conceptual distinction allows **UltraLightUNet** to adapt kernel sizes based on application-specific needs (e.g., large kernels for large objects, small kernels for small objects, or mixed for both objects segmentation).\", \"**Depth-wise convolutions** across multiple kernels (**$k_1 = k_2$** or **$k_1 \\u2260 k_2$**) ensure lightweight computation without compromising accuracy.\", \"We clarified about our **methodological novelty**:\", \"End-to-end ultralightweight architecture design\", \"Novel 2D modules,\", \"New 3D modules, and\", \"Both 2D and 3D versatility.\", \"**Activation heatmaps (Figure R1)** were added to demonstrate how MKIR and CMFA focus on critical regions in an image, thus providing interpretability evidence for the modules.\", \"### 3. Clarifications on Related Approaches\"], \"we_clarified_the_distinctions_between_ultralightunet_and_related_methods\": [\"**EMCAD**: Table R1 explicitly highlights the differences, including **UltraLightUNet**\\u2019s conceptual distinction (**$k_1 = k_2$** or **$k_1 \\u2260 k_2$**), 3D versatility, and unified end-to-end lightweight design.\", \"**SAMT, CASCADE, ConvNeXt, and SFCNs**: We emphasized the advantages of our multi-kernel approach over multi-scale or single-scale strategies used in these methods.\", \"### 4. Additional Comparisons and New Baseline Results\", \"**2D Segmentation**: We implemented **DeformableLKA** (*Azad et al., 2024*), a recent segmentation method, and compared its performance with **UltraLightUNet** in **Table R2**. Results confirm **UltraLightUNet**\\u2019s superior efficiency and segmentation accuracy.\", \"**3D Segmentation**: **SlimUNETR** (*Pang et al., 2023*), a lightweight 3D method, was implemented and compared in **Table R4**. **UltraLightUNet** outperforms **SlimUNETR** in both segmentation accuracy and computational efficiency.\", \"### 5. 
Efficiency Metrics\", \"Reported training and inference times, parameter counts, FLOPs, and average Dice scores across six binary segmentation datasets (**Table R3**). **UltraLightUNet** demonstrated a compelling trade-off between accuracy and computational cost.\", \"**Table R3** will be reported in **Table 14 and Appendix A.13** in our revised manuscript.\"]}", "{\"title\": \"Official Comment by Authors: Part4\", \"comment\": \"### **Multi-Kernel Inverted Residual Attention (MKIRA) Block**\\n\\nIt is not about the architectural novelty of the Convolutional Multi-Focal Attention (CMFA) module per se (so we explicitly cited the Channel and Spatial Attention mechanisms in our initial submission), but rather **the novelty comes from the integration of CMFA with MKIR** to achieve **multi-kernel refinement of attention**. This combination enhances the feature refinement across spatial contexts, thus leveraging the adaptability of the multi-kernel design which is absent in EMCAD\\u2019s MSCAM. \\n\\nAdditionally, the **3D extension of CMFA** is novel in the context of medical imaging. This adaptation enables efficient attention mechanisms for volumetric data, which is a critical advancement for 3D medical image segmentation tasks. \\n\\n### **Grouped Attention Gate (GAG)**\\n\\nWe acknowledge that the GAG follows the LGAG module from EMCAD, and we have followed this Review\\u2019s suggestion and cite EMCAD in our revised submission. However, in this ICLR paper, GAG is used in conjunction with our novel MKIR and MKIRA modules, which distinguishes the overall design of our decoder. Furthermore, GAG is seamlessly extended to 3D, which is a new contribution specific to volumetric medical imaging. \\n\\n---\\n\\n## 2. 3D Module Extensions \\n\\nOne of our major contributions is the **3D extension of all modules** (MKIR, MKIRA, CMFA, and GAG). This extension introduces novel volumetric feature extraction and refinement capabilities that are absent in EMCAD, which focuses solely on 2D tasks. To the best of our knowledge, we are the **first to use multi-kernel trick** and **local attention mechanisms in a 3D architecture** for medical image segmentation. Consequently, our UltraLightUNet3D becomes the high-performing SOTA ultralightweight architecture for volumetric image segmentation. \\n\\nOur 3D modules enable UltraLightUNet to effectively handle complex volumetric medical imaging tasks while maintaining a high computational efficiency. We demonstrate this through experiments on challenging datasets, including **MSD Task01_BrainTumour** and **Task06_Lung Cancer** (Table 13 in revised submission), thus highlighting the efficacy of our 3D design. \\n\\n---\\n\\n## 3. Significance Compared to 3D UX-Net (ICLR 2023) \\n\\nLet\\u2019s compare our contribution with a recently accepted paper at ICLR. While **3D UX-Net** (ICLR 2023) extends the ConvNeXt block to 3D, it retains a computationally expensive decoder identical to SwinUNETR, resulting in high resource demands. In contrast: \\n\\n- UltraLightUNet introduces a conceptually new module for the MKIR module (please see our first response and example) and extends all modules to work with 3D segmentation, providing a lightweight and efficient design for both 2D and 3D tasks. \\n\\n- Our work addresses the computational challenges in resource-constrained scenarios, achieving state-of-the-art efficiency without sacrificing segmentation precision. 
Of note, we are talking about orders of magnitude improvements in efficiency which makes the UltraLightUNet the new SOTA in this problem space. \\n\\n- Given that 3D UX-Net was accepted at ICLR 2023 for its contributions, we believe that the UltraLightUNet with its broader applicability, represents a more substantial contribution to the ICLR community and will inspire future advancements in lightweight medical image segmentation. \\n\\nWe hope these clarifications address the reviewer\\u2019s concerns and emphasize the novelty, practicality, and impact of our contributions. Thank you again for your thoughtful feedback and consideration.\"}", "{\"title\": \"Official Global response by Authors: Part2\", \"comment\": \"### 6. Comprehensive Ablation Studies\\n\\n- We emphasized the contributions of individual modules (e.g., MKIR, MKIRA, CMFA, GAG) using detailed ablation studies (**Tables 4 and 5** in the initial submission). Results show that:\\n - **MKIR and MKIRA** are the most critical for accuracy improvements, especially on datasets like BUSI (from 72.41% to 76.61%).\\n - Combining all modules yields the best overall performance (78.04% on BUSI).\\n- These studies validate the design of **UltraLightUNet** and will be further clarified in the revised manuscript.\\n\\n### 7. Clarity on Model Variants\\n\\n- We clarified the differences among **UltraLightUNet-T, S, M, and L**:\\n - Variations in channel dimensions across encoder-decoder stages (e.g., T: (4, 8, 16, 24, 32); M: (32, 64, 128, 192, 320)).\\n - Scalability analysis (**Tables 9 and 10**) validates the performance improvements with increasing channel counts.\\n\\n---\\n\\n### 8. Limitations and Future Directions\\n\\n- We added a dedicated subsection in **Appendix A.14** to discuss the limitations of **UltraLightUNet**:\\n - Slightly lower performance on highly complex datasets compared to heavyweight SOTA models.\\n - Trade-offs between computational efficiency and segmentation accuracy.\\n- We also outlined future directions, including hybrid architectures, self-supervised pretraining, and extensions to tasks like image reconstruction, enhancement, and denoising.\\n\\n---\\n\\n## Key Revisions\\n\\nBased on the rebuttal, we incorporate the following changes:\\n\\n1. **Expanded Introduction and Related Work**:\\n - Discuss recent lightweight (**SlimUNETR**) and relevant (**DeformableLKA**) methods.\\n - Clearly differentiate **UltraLightUNet** from EMCAD, ConvNeXt, SAMT, CASCADE, and SFCNs.\\n - Highlight the novelty of **UltraLightUNet\\u2019s** unified multi-kernel approach.\\n\\n2. **Revised Methodology**:\\n - Emphasize the theoretical basis for multi-kernel (**$k_1 = k_2$** and **$k_1 \\u2260 k_2$**) vs. multi-scale (**$k_1 \\u2260 k_2$**) approaches and its adaptability across diverse tasks.\\n - Modify **Figure 2** to reflect the conceptual distinctions.\\n - Clarify the differences among **UltraLightUNet-T, S, M, and L**.\\n\\n3. 
**Updated and Added Tables and Figures**:\\n - Add results for **DeformableLKA** (**Table R2**) and **SlimUNETR** (**Table R4**) to **Tables 1, 3, 12, 13, and 14** in our revised manuscript.\\n - Include results on new datasets (**Table R5**) to **Table 13** in our revised manuscript.\\n - Add activation heatmaps (**Figure R1**) to enhance interpretability discussions (**Figure 3** in our revised manuscript).\\n - Add training/inference metrics (**Table R3**) to **Table 14** in **Appendix A.13**.\\n - Updated the DICE score of SwinUNet in Table 2 on Synapse dataset to reflect the original authors DICE score as per the reviewer suggestion. \\n\\n4. **Added Limitations and Future Directions**:\\n - Include a discussion on limitations and future directions in **Appendix A.14**.\\n\\n---\\n\\nWe believe these revisions comprehensively address the reviewers' comments, strengthen the manuscript, and clarify **UltraLightUNet\\u2019s** significant contributions to lightweight medical image segmentation. Thank you for your valuable feedback, which has greatly enhanced the rigor and clarity of our work. We look forward to submitting the revised version.\\n\\n## References \\n\\nRahman et al., 2024. Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. CVPR (pp. 11769-11779). \\n\\nSandler wt al., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. CVPR (pp. 4510-4520). \\n\\nHu et al., 2018. Squeeze-and-excitation networks. CVPR (pp. 7132-7141). \\n\\nHe et al., 2016. Deep residual learning for image recognition. CVPR (pp. 770-778). \\n\\nTan et al., 2019, May. Efficientnet: Rethinking model scaling for convolutional neural networks. ICML (pp. 6105-6114). \\n\\nAzad et al., 2024. Beyond self-attention: Deformable large kernel attention for medical image segmentation. WACV (pp. 1287-1297). \\n\\nLin et al., 2023. Scale-aware modulation meet transformer. ICCV (pp. 6015-6026). \\n\\nWoo et al., 2018. Cbam: Convolutional block attention module. ECCV (pp. 3-19). \\n\\nRahman et al., 2023. Medical image segmentation via cascaded attention decoding. WACV (pp. 6222-6231). \\n\\nCao et al., 2022. Swin-unet: Unet-like pure transformer for medical image segmentation. ECCV (pp. 205-218). \\n\\nPang et al., 2023. Slim UNETR: Scale hybrid transformers to efficient 3D medical image segmentation under limited computational resources. IEEE Transactions on Medical Imaging \\n\\nTang et al., 2024. Cmunext: An efficient medical image segmentation network based on large kernel and skip fusion. ISBI (pp. 1-5). \\n\\nYang et al., 2023. UcUNet: A lightweight and precise medical image segmentation network based on efficient large kernel U-shaped convolutional module design. Knowledge-Based Systems, 278, 110868.\"}", "{\"comment\": \"Thanks for authors' responses.\\n\\n(1) Like another reviewer jeKK said, the modules in this manuscript is same with modules in the paper EMCAD, so there is no module novelty. Like your title said, the model in this manuscript is still using U-Net architecture, so there is no architectural novelty as you mentioned. \\n(2) The improvement of this manuscript is low, and contributions of this manuscript is low. 
The major contributions of this manuscript are mainly built over the paper EMCAD, resulting in a low impact to the field of medical image segmentation.\\n(3) The work in this manuscript is trying to incorporate several previously proposed and widely used modules into a UNet architecture, so it doesn't provide theoretical insights to others.\\n\\nOverall, I will not change my score, \\\"1 strong reject\\\".\"}", "{\"title\": \"Response to the comments of Reviewer SWMn: Theoretical basis\", \"comment\": \"We thank the reviewer for recognizing the effectiveness of UltraLightUNet's multi-kernel structures in capturing multi-scale contexts and achieving superior segmentation accuracy with minimal complexity. We also appreciate the constructive feedback and will address each of the reviewer\\u2019s comments in the following responses.\\n\\n### **Q1. Although well-organized, the manuscript could benefit from a deeper focus on explaining how each module reduces computational costs while maintaining high performance. The method section lacks clear evidence or mathematical proof to support the model\\u2019s design, which may present a scientific limitation. Can you clarify the effects of the proposed modules in the model with clear evidence or mathematical insight? The current equations in the manuscript seem somewhat redundant, serving primarily as mathematical restatements. It would be helpful to provide more rigorous insights or proofs demonstrating how each module contributes to performance improvement and computational efficiency.** \\n\\nWe appreciate the reviewer\\u2019s comment and provide additional clarification regarding UltraLightUNet\\u2019s design, which is grounded in clear theoretical concepts and validated through empirical studies and visualizations. Specifically: \\n\\n**Theoretical Basis:** Most existing architectures in computer vision rely on foundational concepts rather than explicit mathematical proofs to justify their design choices. For example, Vision Transformers leverage self-attention mechanisms, and ConvNeXt uses large-kernel convolutions to enhance feature extraction. Similarly, our approach employs the **multi-kernel trick** to balance local and global feature extraction via multi-kernel depth-wise convolutions (MKDC) and incorporates **depth-wise convolutions** for lightweight computation. These foundational design choices, while not purely theoretical (or have no mathematical proof), ensure that UltraLightUNet achieves both high segmentation performance and extreme efficiency. \\n\\n**Empirical Validation:** Ablation studies in Tables 4 and 5 of our initial submission quantify the individual and combined contributions of the modules. For instance, MKIR and MKIRA together significantly boost DICE scores, such as from 72.41% to 76.61% on the BUSI dataset, while the integration of all modules (MKIR, MKIRA, and GAG) achieves the best performance at 78.04%. This demonstrates how the design enhances segmentation accuracy while maintaining computational efficiency. We have additional abaltion experiments reported in Tables 6, 7, and 8 (in the Appendix of initial submission) which show the impact of our individual module over existing counterparts. \\n\\n**Visual Evidence:** Activation heatmap visualizations (Fig. 4 and Section A.7 in our revised draft) further validate the practical impact of our modules. 
These visualizations show that MKIR, combined with CMFA, effectively attends to and refines critical regions in an image, such as lesion boundaries, improving segmentation quality. \\n\\nTo address the reviewer\\u2019s concern, we will revise the manuscript to contextualize our approach within the broader landscape of existing architectures that rely on foundational concepts rather than mathematical proofs.\"}", "{\"comment\": \"Thanks for authors' response.\\n\\nThe novelty is low since the way to solve the problem has been widely explored and is the same, such as employing depth-wise convolution for lightweight design and splitting channels for isolated convolutions. Thus, putting much efforts on applying the exactly same way to solve the same problem is not very interesting, and this application from N to N+1 does not make impact on the medical image segmentation tasks. \\n\\nThe architectural design cannot be considered as a novel design since the overall design (U-shaped encoder-decoder) is always used in the medical image segmentation. Incorporating several modules into this architecture does not revolutionize the architectural design.\"}", "{\"title\": \"Response to the comments of Reviewer jk4q: Multi-class segmentations\", \"comment\": \"### **Q2. Second, the masks are mostly binary segmentation tasks. The multi-class segmentation is not well explored. Adding more complicated and multi-class segmentation would better demonstrate the model's capability for broader, real-world medical imaging tasks.**\\n\\nWe appreciate the reviewer's suggestion to explore multi-class segmentation tasks to demonstrate the broader applicability of **UltraLightUNet**. We would like to highlight that our initial submission already includes results on several multi-class segmentation tasks across both 2D and 3D settings:\\n\\n- **2D Multi-Class Segmentation**: Results are provided for Synapse 8-organ segmentation (Table 2) and ACDC 3-organ segmentation (Table 11) in the initial submission.\\n- **3D Multi-Class Segmentation**: Results are reported for FETA 7-organ segmentation (Table 3), MSD Task07_Prostate segmentation (Table 3), Synapse 8-organ segmentation (Table 12), and Synapse 13-organ segmentation (Table 12) in the initial submission.\\n\\nAdditionally, to address the reviewer's concern, we conducted further **multi-class segmentation experiments** during the rebuttal phase, specifically on the **MSD Task01_BrainTumour** segmentation dataset, which involves multi-level tumor segmentation. We present these new results in **Table R5 above**, which demonstrates that our **UltraLightUNet3D-M** outperforms existing popular heavyweight architectures (*SwinUNETR, nnFormer, and 3D UX-Net*) with remarkably lower #Params and #FLOPs. These new results (**Table R5 above**) will be included in the revised manuscript (**Table 13 in Appendix A.11**) to demonstrate **UltraLightUNet**\\u2019s capability to handle complex, multi-class segmentation tasks effectively.\\n\\nWe believe these results, along with the new additions, comprehensively address the reviewer's concern and reinforce **UltraLightUNet**\\u2019s applicability to real-world medical imaging scenarios requiring multi-class segmentation.\"}", "{\"title\": \"Response to the comments of Reviewer SWMn: Component ablations and novelty\", \"comment\": \"### **Q2. Additionally, there are no ablation studies evaluating the individual contributions of each module in terms of both segmentation performance and parameter efficiency. 
Which proposed module is the most critical? Please provide ablation studies to highlight the individual impact of each module on both segmentation performance and computational cost. This would help identify the key components driving the model's success.**\\n\\nIn our initial submission, we do have ablation studies reported in Table 4 and Section 5.2 (in our initial submission) to evaluate the impact of each proposed component on segmentation performance and parameter efficiency. \\n\\nThe results in Table 4 demonstrate that the multi-kernel trick, implemented through MKIR (in the encoder) and MKIRA (in the decoder), is the most critical component for improving the segmentation accuracy, increasing the DICE score from 72.41% to 76.61% on the BUSI dataset. This indicates the significant contribution of the multi-kernel approach to feature extraction and refinement. However, when we integrate all proposed modules\\u2014MKIR, MKIRA, and GAG\\u2014our model achieves the highest overall DICE score of 78.04% on the same dataset. \\n\\nThis pattern is consistent across other datasets (i.e., ClinicDB, ColonDB, ISIC18, DSB18, and EM), showing that while MKIR and MKIRA individually drive most of the performance improvements, the combination of all modules optimally balances accuracy and computational cost. We will include a more detailed explanation of these findings in the revised manuscript to further emphasize the role of each module in driving the model's success. \\n\\n\\n### **Q3. General multi-scale approaches are not novel. For instance, the \\\"Spatial Feature Conservation Networks (SFCNs) for Dilated Convolutions to Improve Breast Cancer Segmentation from DCE-MRI\\\" (International Workshop on Applications of Medical AI, 2022) employs a multi-scale strategy. What distinguishes your model\\u2019s approach to multi-scale feature extraction? Please elaborate on the unique contributions and insights of your method in this context.** \\n\\nWe thank the reviewer for raising this point and giving us the opportunity to clarify. Our UltraLightUNet distinguishes itself by introducing a new and lightweight multi-kernel feature extraction approach that integrates seamlessly into both 2D and 3D architectures. Below, we explain how our method differs and contributes uniquely to the field. \\n\\n**Multi-Kernel Depth-Wise Convolutions (MKDC):** Unlike standard multi-scale approaches, such as SFCNs that rely on dilated convolutions with different dilation rates (e.g., d\\u2081 \\u2260 d\\u2082 for dilation rates d\\u2081, d\\u2082), UltraLightUNet leverages MKDC, which supports both same-size kernels (k\\u2081 = k\\u2082) for uniform context extraction and different-size kernels (k\\u2081 \\u2260 k\\u2082) to balance the local and global contexts adaptively. This flexibility ensures efficient multi-scale feature extraction tailored to diverse spatial complexities while maintaining lightweight efficiency. \\n\\n**Volumetric 3D Extensions:** A key limitation of methods like SFCNs is their restriction to 2D tasks. In contrast, UltraLightUNet extends its multi-kernel strategy to 3D segmentation tasks through 3D versions of its modules, such as the MKIR and MKIRA blocks. These modules employ multi-kernel convolutions and attention mechanisms (e.g., CMFA) to refine features in 3D space, making UltraLightUNet highly effective for complex volumetric medical imaging tasks, such as tumor and organ segmentation. 
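Because CMFA is described elsewhere in this rebuttal as a combination of channel and spatial attention (the CBAM-style mechanisms of Woo et al., 2018), a generic sketch of that kind of gating may help readers follow the MKIRA discussion that comes next. The module name, the reduction ratio, and the exact pooling choices below are assumptions for illustration, not a transcription of UltraLightUNet's CMFA; the same construction carries over to volumetric inputs by replacing the 2D convolutions and poolings with their 3D counterparts.

```python
import torch
import torch.nn as nn

class ChannelSpatialGate(nn.Module):
    """Generic channel-then-spatial attention gate (CBAM-style); illustrative only."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention from average- and max-pooled channel descriptors.
        avg = x.mean(dim=(2, 3), keepdim=True)
        mx = x.amax(dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

x = torch.randn(1, 16, 32, 32)
print(ChannelSpatialGate(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```

In an MKIRA-style block, a gate of this kind would sit alongside the multi-kernel convolutional refinement, which is the combination discussed in the next paragraph.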
\\n\\n**Decoder Design with Sequential Refinement:** UltraLightUNet introduces the MKIRA block, which performs multi-kernel refinement of attention by combining CMFA for local attention with MKIR for multi-kernel convolutional refinement. Unlike SFCNs, which lack 3D extensions, MKIRA handles multi-kernel attention and feature refinement in both 2D and 3D tasks with significantly reduced computational overhead. \\n\\n**Lightweight Efficiency:** While SFCNs and similar methods rely on computationally intensive standard convolutions or dilations, UltraLightUNet achieves superior computational efficiency by using depth-wise convolutions. For example, our 3D base model, UltraLightUNet3D, achieves new SOTA efficiency with just 0.453M parameters and 3.42 GFLOPs, compared to existing 3D methods that typically require hundreds of Giga FLOPs. This efficiency is critical for resource-constrained scenarios, such as point-of-care diagnostics. \\n\\nIn summary, UltraLightUNet distinguishes itself from standard multi-scale methods like SFCNs by combining multi-kernel depth-wise convolutions, lightweight attention mechanisms, and volumetric extensions into a unified framework. These innovations ensure competitive segmentation performance across both 2D and 3D tasks while significantly reducing computational costs.\"}", "{\"title\": \"Response to the comments of Reviewer jeKK: Similarity to EMCAD (CVPR 2024) paper (Part1)\", \"comment\": \"We are genuinely surprised by the ethical concerns raised regarding our UltraLightUNet paper. To clarify unequivocally and categorically, our ICLR submission is neither an act of plagiarism nor a duplicate submission of the EMCAD (Rahman et al., 2024). While both papers focus on improving efficiency in medical image segmentation, they have fundamentally different goals, methodologies, and contributions.\\n\\nBelow, we address each and every concern and clarify the originality and contributions of our **UltraLightUNet** submission.\\n\\n---\\n### **Q1. Despite the paper presents diverse experiments and ablation studies, I see a very close similarity to a CVPR 2024 paper, named EMCAD (Rahman et al., 2024), which I detail in the next parts**\\n\\n**Response:** \\nWe respectfully disagree with this assessment. While UltraLightUNet and EMCAD share a U-shaped architecture, which is widely used for medical image segmentation, they differ significantly in their **architectural philosophy, key innovations, and target use cases (as summarized in Table R1 below)**:\\n\\n**Table R1: Differences between UltraLightUNet and EMCAD**\\n\\n| **Feature** | **UltraLightUNet** | **EMCAD** |\\n|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|\\n| **Motivation** | Designed as a **full (end-to-end) ultra-lightweight architecture** optimized to the extreme for resource-constrained environments. | Focuses on optimizing the **decoder only**, relying on existing pre-trained encoders with emphasis on versatility. |\\n| **Encoder Design** | **Built from scratch with** the new Multi-kernel Inverted Residual (**MKIR**) block which leverages **extremely efficient** multi-kernel depth-wise convolutions. 
| Relies on **existing pre-trained encoders** (e.g., PVT_V2_B2, PVT_V2_B0) with **complicated** operations like self-attention.|\\n| **Decoder Design** | Uses the new Multi-kernel Inverted Residual Attention (**MKIRA**) block involving simple **bilinear upsampling** to reduce computational cost. **Note:** Multi-kernel here stands for $(k_1 = k_2) \\\\text{ or } (k_1 \\\\neq k_2), \\\\text{where } k_1, k_2 \\\\in \\\\text{Kernels}$. | Employs Efficient Up-Convolution Block (**ECUB**) and Multi-scale Convolutional Attention Module (**MSCAM**). **Note:** Multi-scale here stands for $( k_1 \\\\neq k_2 ), \\\\text{where } k_1, k_2 \\\\in \\\\text{Kernels} $. |\\n| **2D and/or 3D Versatility** | A unique architecture supporting **both 2D and 3D segmentation tasks**. | Focuses on **2D segmentation tasks only**. |\\n| **Target Use Cases** | Focuses on both encoder and decoder efficiency, targeting **resource-constrained environments** (i.e., **0.027M parameters only** for tiny architecture). | Focuses only on decoder efficiency; overall efficiency depends on the encoder (i.e., **3.92M parameters** for tiny architecture). |\\n| **Experimental Datasets** | Evaluated on **both 2D** (polyp, skin lesion, breast tumor, cell, Synapse abdomen 8-organ, ACDC cardiac organ) **and 3D** (FeTA fetal brain, MSD Task05 Prostate, Synapse abdomen 13-organ and 8-organ) segmentation tasks. | Evaluated **only on 2D** (polyp, skin lesion, breast tumor, cell, Synapse abdomen 8-organ, ACDC cardiac organ) tasks. |\", \"we_can_also_illustrate_these_differences_using_an_analogy_from_transportation_and_urban_planning\": \"**UltraLightUNet** is akin to a **compact electric scooter** built for quick, efficient navigation in crowded city streets. It\\u2019s lightweight, maneuverable, and optimized for short, real-time trips (analogy to real-time diagnostics) in constrained environments where heavy vehicles (complex models) can\\u2019t operate effectively. The focus is on minimalism and efficiency, designed to get the job done swiftly without excess fuel consumption (parameters and FLOPs in our case of resource-constrained scenarios). \\n\\nIn contrast, **EMCAD** is like a **comprehensive highway system** designed to handle various types of traffic (cars, buses, trucks) efficiently, with multi-lane roads (multi-scale attention modules) and advanced traffic management (hierarchical feature refinement) to balance local (urban streets) and global (interstate highways) needs. It aims to provide a versatile, all-purpose solution that works well in different terrains and conditions.\"}", "{\"summary\": \"This manuscript proposes a novel U-shaped network incorporating various modules for medical image segmentation, with a focus on reducing computational costs. Key contributions include the Multi-Kernel Inverted Residual Block, Multi-Kernel Inverted Residual Attention, and Grouped Attention Gate. As a result, the proposed model achieves remarkable computational efficiency and delivers superior segmentation quality compared to state-of-the-art models. The manuscript is well-written and well-organized; however, more detailed technical insights into each module would enhance clarity and depth.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This manuscript presents various experiments validating the model's performance. The multi-kernel structures employed effectively capture multi-scale contexts, enhancing segmentation accuracy without adding complexity. 
The reported results demonstrate that the proposed model\\u2019s segmentation performance surpasses that of significantly heavier architectures.\", \"weaknesses\": \"Although well-organized, the manuscript could benefit from a deeper focus on explaining how each module reduces computational costs while maintaining high performance. The method section lacks clear evidence or mathematical proof to support the model\\u2019s design, which may present a scientific limitation. Additionally, there are no ablation studies evaluating the individual contributions of each module in terms of both segmentation performance and parameter efficiency.\", \"questions\": \"1. Can you clarify the effects of the proposed modules in the model with clear evidence or mathematical insight? The current equations in the manuscript seem somewhat redundant, serving primarily as mathematical restatements. It would be helpful to provide more rigorous insights or proofs demonstrating how each module contributes to performance improvement and computational efficiency.\\n\\n2. Which proposed module is the most critical? Please provide ablation studies to highlight the individual impact of each module on both segmentation performance and computational cost. This would help identify the key components driving the model's success.\\n\\n3. General multi-scale approaches are not novel. For instance, the \\\"Spatial Feature Conservation Networks (SFCNs) for Dilated Convolutions to Improve Breast Cancer Segmentation from DCE-MRI\\\" (International Workshop on Applications of Medical AI, 2022) employs a multi-scale strategy. What distinguishes your model\\u2019s approach to multi-scale feature extraction? Please elaborate on the unique contributions and insights of your method in this context.\\n\\n4. What are the differences among UltraLightUNet- T, S, and L in terms of model? Layer difference? Please explain in details in the manuscript.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the comments of Reviewer jk4q: Complex applications\", \"comment\": \"We sincerely thank the reviewer for the constructive feedback, and highlighting both the strengths and areas for improvement in our work. We deeply appreciate the recognition of UltraLightUNet\\u2019s originality and its contributions to computational efficiency and high segmentation quality, achieved through its novel lightweight architecture and innovative modules. Below, we address each concern raised by the reviewer and outline how we plan to incorporate these suggestions in our revised manuscript.\\n\\n### **Q1. First, while the model is evaluated across various medical imaging datasets, these datasets are relatively straightforward, covering simple segmentation tasks and organs rather than more complex applications like CT/MRI tumor and lesion segmentation. How it would work on more complicated problems (e.g., 3D tumors/lesions) and multi-class segmentation problems?** \\n\\nWe appreciate the reviewer\\u2019s suggestion to evaluate UltraLightUNet on more complex tasks like CT/MRI tumor and lesion segmentation. To address this, we conducted additional experiments on the MSD Task01_BrainTumour (multi-level MRI segmentation) and Task06_Lung (binary CT segmentation) datasets, and the results are presented in **Table R5 below**. 
We note that our UltraLightUNet3D-M achieves the best DICE scores in Tumor core (83.41%), Whole tumor (91.51%), and Lung cancer (71.53%) segmentation while maintaining remarkably lower computational costs compared to heavyweight methods like SwinUNETR (62.19M parameters, 328.6G FLOPs) and 3D UX-Net (53.01M parameters, 632.0G FLOPs). Furthermore, UltraLightUNet3D-M achieves the second-best DICE scores in Non-Enhancing Tumor and average Brain Tumor segmentation (78.95%), demonstrating its balanced performance across tumor subregions. \\n\\nCompared to the existing lightweight method SlimUNETR, UltraLightUNet3D-M achieves better segmentation results on all tasks while maintaining similar computational efficiency (1.42M parameters, 7.33 GFLOPs vs. SlimUNETR's 1.78M parameters, 5.25 GFLOPs). \\n\\nAdditionally, our base model, UltraLightUNet3D, demonstrates competitive performance compared to heavyweight models like 3D UX-Net and SwinUNETR, while significantly outperforming SlimUNETR, achieving the lowest computational cost (0.453M parameters, 3.68 GFLOPs). These results validate UltraLightUNet\\u2019s ability to generalize to complex segmentation tasks with an excellent balance of accuracy and efficiency. \\n\\n**Table R5:** Experimental results (DICE %) of 3D Brain tumor and Lung cancer segmentation on MSD Task01_BrainTumour (4-channel inputs) and MSD Task06_Lung datasets. #FLOPs are reported for 4-channel inputs with 96x96x96 volumes. **Note:** Tumor Core (TC), Whole Tumor (WT), Non-enhancing Tumor (NET).\\n\\n| Architecture | #Params (M) | #FLOPs (G) | TC (Task01) | WT (Task01) | NET (Task01) | Avg. (Task01) | Task06_Lung Cancer |\\n|-------------------------|-------------|------------|-------------------------|-------------------------|--------------------------|--------------------------|---------------------|\\n| UNETR | 92.78 | 82.60 | 79.77 | 89.83 | 57.47 | 75.69 | 65.38 |\\n| TransBTS | 31.60 | 110.40 | 80.09 | 88.38 | 55.89 | 74.79 | 63.57 |\\n| nnFormer | 159.03 | 204.20 | $\\\\underline{83.19}$ | 90.14 | 60.15 | 77.82 | 69.79 |\\n| 3D UX-Net | 53.01 | 632.00 | 82.90 | 91.13 | 61.72 | 78.58 | $\\\\underline{71.46}$ |\\n| SwinUNETR | 62.19 | 328.60 | $\\\\underline{83.19}$ | $\\\\underline{91.36}$ | **62.62** | **79.06** | 65.12 |\\n| SlimUNETR | 1.78 | $\\\\underline{5.25}$ | 79.86 | 87.95 | 50.18 | 72.66 | 67.66 |\\n| UltraLightUNet3D (Ours)| **0.453** | **3.68** | 82.98 | 90.56 | 60.23 | 77.92 | 70.32 |\\n| UltraLightUNet3D-M (Ours)| $\\\\underline{1.42}$ | 7.33 | **83.41** | **91.51** | $\\\\underline{61.92}$ | $\\\\underline{78.95}$ | **71.53** |\"}", "{\"title\": \"Official Comment by Authors: Final\", \"comment\": \"Thank you for getting back to us. Sadly, it appears to us that you did not even look over our latest responses where we explicitly address, in great detail, each and every of your previous concerns. In short, to your points above:\\n\\n(1) The novelty of our paper comes mainly from two contributions, i.e., (i) new encoder design (which is not even considered in the EMCAD paper) and (ii) a new mathematical basis for MKIR module operation which leads to the overall extreme efficiency. U-Net is a fundamental architecture that did enable and will continue to enable innovation, just like our approach; there is nothing wrong in improving SOTA based on the U-Net architecture. 
\\n\\n(2) The impact of our approach in bioimaging is motivated by the orders of magnitude reduction in #Parameters and #FLOPS compared to SOTA (while maintaining the overall accuracy) we provide for both 2D and 3D (volumetric) segmentation. These huge improvements can enable real-time image segmentation in resource-limited scenarios (point-of-service scenarios) which have a high practical relevance; this is where the impact of our approach comes from. \\n\\n(3) The encoder and hence the end-to-end architecture are new and we explain the mathematical basis of our approach. The empirical nature of our paper is well aligned with many other impactful papers published in previous ICLRs, too any to list here; our paper beats SOTA for ultra-lightweight image segmentation and your refusal to acknowledge it does not diminish our contribution. \\n\\nFinally, it is disheartening to see this lack of genuine dialogue; our point-by-point responses addressed all your concerns, yet did not even trigger a careful reading of our arguments. We fail to see how rejecting this paper at all costs does serve this community...\"}", "{\"title\": \"Official Comment by Authors: Part2\", \"comment\": \"**Table R6:** Comparing the effect of different types of multi-kernel strategies such as $k_1=k_2$ (only small $[3 \\\\times 3, 3 \\\\times 3, 3 \\\\times 3]$, only large $[5 \\\\times 5, 5 \\\\times 5, 5 \\\\times 5]$) and $k_1 \\\\neq k_2$ (multi-scale $[1 \\\\times 1, 3 \\\\times 3, 5 \\\\times 5]$) in objects of different sizes (small, large, mixed). We use $3 \\\\times 3$ and $5 \\\\times 5$ average kernels for small and large kernels, respectively. Our network optimize the weight of these kernels. Bold entries show the best results. SSIM stands for structural similarity index measure and MSE stands for mean squared error. **Note:** As OpenReview does not allow us adding figures in the official comment boxes, we could not also include the visual plots of convolved outputs herewith.\\n\\n\\n| Objects in Images | Kernels Used | Object-to-Background Ratio $\\\\uparrow$ | SSIM $\\\\uparrow$ | MSE $\\\\downarrow$ |\\n|--------------------|-----------------------|-------------------------------|------|------|\\n| Small | Small [3\\u00d73, 3\\u00d73, 3\\u00d73] | **41.83** | 0.79 | 0.007 |\\n| Small | Large [5\\u00d75, 5\\u00d75, 5\\u00d75] | 11.69 | 0.526 | 0.024 |\\n| Small | Multi-scale [1\\u00d71, 3\\u00d73, 5\\u00d75] | 37.66 | 0.786 | 0.011 |\\n| Large | Small [3\\u00d73, 3\\u00d73, 3\\u00d73] | 9.35 | 0.513 | 0.039 |\\n| Large | Large [5\\u00d75, 5\\u00d75, 5\\u00d75] | **24.90** | **0.732** | **0.012** |\\n| Large | Multi-scale [1\\u00d71, 3\\u00d73, 5\\u00d75] | 23.70 | 0.69 | 0.018 |\\n| Multi-scale | Small [3\\u00d73, 3\\u00d73, 3\\u00d73] | 13.64 | 0.761 | 0.029 |\\n| Multi-scale | Large [5\\u00d75, 5\\u00d75, 5\\u00d75] | 5.16 | 0.518 | 0.063 |\\n| Multi-scale | Multi-scale [1\\u00d71, 3\\u00d73, 5\\u00d75] | **14.58** | **0.81** | **0.019** |\\n\\n\\n---\\n\\nAll statistics in Table R6 strongly support the benefits of **UltraLightUNet's MKIR** block over the **EMCAD's MSCB** block, thus highlighting MKIR's broader adaptability and effectiveness. 
While MSCB operates solely on multi-scale kernels (e.g., $[1 \\\\times 1, 3 \\\\times 3, 5 \\\\times 5]$), making it suitable only for multi-scale object segmentation, **MKIR supports both same-size kernels** (e.g., $[3 \\\\times 3, 3 \\\\times 3, 3 \\\\times 3]$ or $[5 \\\\times 5, 5 \\\\times 5, 5 \\\\times 5]$) and **multi-scale kernels** (e.g., $[1 \\\\times 1, 3 \\\\times 3, 5 \\\\times 5]$), thus enabling application-specific optimizations which make the UltraLightUNet architecture extremely efficient. \\n\\n**Key Evidence Supporting MKIR\\u2019s Novelty in Table R6:**\\n\\n**1. Object-to-Background Ratio:** \\n\\n- For **small objects**, MKIR with small kernels achieves the highest ratio (41.83), emphasizing the precise focus on small details, unlike MSCB. \\n\\n- For **large objects**, MKIR with large kernels outperforms multi-scale (24.90 vs. 23.70), validating its suitability for larger regions. \\n\\n- For **mixed objects**, MKIR with multi-scale kernels balances small and large regions effectively (14.58), demonstrating its versatility across complex applications. \\n\\n**2. Mean Squared Error (MSE):** \\n\\nMKIR consistently achieves **lower MSE** for relevant kernel-object pairs, thus indicating better pixel-wise accuracy. \\n\\n- For example, small kernels achieve 0.007 MSE for small objects, outperforming multi-scale (0.011) and large kernels (0.024). \\n\\n- Similarly, large kernels yield the best MSE (0.012) for large objects, reinforcing MKIR\\u2019s adaptability. \\n\\n**3. Structural Similarity Index Measure (SSIM):** \\n\\nMKIR excels in structural preservation across relevant kernel-object pairs, demonstrating adaptability and robustness: \\n\\n- **Small objects:** Small kernels achieve the highest SSIM (0.79), thus outperforming multi-scale (0.786) and large kernels (0.526). \\n\\n- **Large objects:** Large kernels yield the best SSIM (0.732), surpassing multi-scale (0.69). \\n\\n- **Mixed objects:** Multi-scale kernels achieve the highest SSIM (0.81), balancing small and large features better than small (0.761) or large kernels (0.518). \\n\\n**4. Application-Specific Adaptability:** \\n\\n- MSCB in EMCAD paper is inherently restricted to multi-scale designs, thus making it **a special case** of MKIR when kernels can be different. In contrast, MKIR can adapt same-size or mixed kernels for specific applications, achieving optimal performance across small, large, and heterogeneous segmentation tasks. \\n\\n- The flexibility of MKIR allows segmentation models to tailor kernel configurations based on the nature of the objects, significantly broadening its applicability beyond MSCB. \\n\\nThese findings establish MKIR as a fundamentally **new and superior** block compared to MSCB in EMCAD paper. MKIR\\u2019s broader adaptability and application-specific adaptability underscore its significant contribution to advancing lightweight, high-performance segmentation models.\"}", "{\"title\": \"Response to the comments of Reviewer zJLj: Overall novelty (Part2)\", \"comment\": \"### 2. New Modules\\n\\n### *2.1 Multi-Kernel Inverted Residual (MKIR) Block* \\n\\nThe **MKIR block** introduces a novel feature extraction approach by leveraging **Multi-Kernel Depth-Wise Convolutions (MKDC)** to efficiently balance local and global context extraction. With its **multi-kernel design**, MKIR supports both **k\\u2081 = k\\u2082 (same-size kernels)** and **k\\u2081 \\u2260 k\\u2082 (different-size kernels)**, thus enabling adaptability across diverse spatial contexts. 
The MKIR\\u2019s inverted residual structure minimizes the computational overhead while maintaining the representational power.\\n\\nIn its **3D version**, MKIR extends multi-kernel convolutions to volumetric data, thus allowing efficient feature extraction for 3D tasks while preserving its lightweight nature. This is a significant contribution to the SOTA with high relevance in medical bioimaging (we show our results on multiple datasets \\u2013 see Tables 3, 10, 12 in the paper).\\n\\n#### In contrast:\\n- **SAMT\\u2019s Scale-Aware Modulation (SAM)** relies on computationally intensive transformer-based attention, limited to 2D tasks, with no 3D extension.\\n\\n- **CASCADE\\u2019s ConvBlock** relies on standard 3x3 convolutions (i.e., not multi-kernel), which lack adaptability and efficiency. CASCADE also lacks a 3D version of ConvBlock, preventing its applicability to volumetric tasks.\\n\\n- **EMCAD\\u2019s Multi-Scale Convolution Block (MSCB)** employs multi-scale convolutions (**k\\u2081 \\u2260 k\\u2082**) in the decoder, but it is not used in the encoder and has no 3D extension.\\n\\n- **ConvNeXt** employs large-kernel depth-wise convolutions (7x7) that effectively capture large contexts, but lacks flexibility for smaller features. In contrast, MKIR balances both small and large contexts efficiently.\\n\\n\\n### *2.2 Multi-Kernel Inverted Residual Attention (MKIRA) Block*\\n\\nThe **MKIRA block** performs **multi-kernel refinement of attention** by combining the **Convolutional Multi-Focal Attention (CMFA)** module with **MKIR** in a sequential manner. CMFA first applies max and average pooling to compute local attention across spatial and channel dimensions, thus enhancing critical features. MKIR then refines these features using multi-kernel convolutions, ensuring efficient and effective feature refinement.\\n\\nThe **3D version of MKIRA** adapts CMFA and MKIR for volumetric data, employing 3D pooling and multi-kernel convolutions to handle complex 3D medical imaging tasks.\\n\\n#### In contrast:\\n- **SAMT\\u2019s Scale-Aware Modulation (SAM)** employs global attention for scale diversity, but lacks efficiency and scalability to 3D tasks.\\n\\n- **CASCADE\\u2019s Convolutional Attention Module (CAM)** uses 3x3 convolutions for spatial-channel attention in the decoder, increasing computational cost and lacking a 3D version.\\n\\n- **EMCAD\\u2019s Multi-Scale Convolutional Attention Module (MSCAM)** uses multi-scale convolutions for attention, but is restricted to 2D tasks and has no 3D extension.\\n\\n\\n### 3. 3D Versatility\\n\\n**UltraLightUNet** goes beyond existing 2D-only methods like **SAMT, CASCADE, EMCAD, ConvNeXt, CMUNeXt, SwinUNet, TransUNet**, etc., by providing **novel 3D versions** of both the architecture and its modules (MKIR and MKIRA), tailored for volumetric medical imaging. While methods like SAMT, CASCADE, and EMCAD lack 3D extensions, UltraLightUNet introduces a lightweight 3D base model that achieves **new SOTA efficiency** with just **0.453M parameters and 3.42 GFLOPs**, compared to existing 3D methods that typically require **hundreds of Giga FLOPs**. \\n\\nThis makes UltraLightUNet uniquely suited for **real-time 3D medical imaging** tasks in point-of-care scenarios, such as organ and tumor segmentation, where computational constraints are critical.\"}", "{\"title\": \"Final Response\", \"comment\": \"Thanks for your response. It seems that you misunderstood what I wanted to deliver. 
I read your responses carefully, but your responses didn't address my major concern.\\n\\n(1) The main contribution of this manuscript was proposing several modules, but these modules are similar to modules from the paper EMCAD, as another reviewer mentioned. Although you provided details and tried to clarify that they were not 100% the same, I believe they are at least 90% similar. (2) You said your architecture was novel since you incorporated your modules into the encoder. Unfortunately, incorporating modules into both the encoder and the decoder is not novel and is a marginal contribution, since almost everyone is trying to incorporate their modules into both the encoder and the decoder. In other words, incorporating their modules into U-Net is not novel, and this is a common design. Many works have done it. (3) Using depthwise convolutions to reduce the Params and FLOPs was already explored by ConvNeXt and UX-Net several years ago. These two are just examples, and there are many other networks trying to use depth-wise convolutions to improve efficiency after these two works. Thus, this idea is not new, and this idea in this manuscript doesn't provide new theoretical insight to others since it has been explored for several years.\\n\\nMost importantly, what I wanted to deliver was at a higher level, not about the details of your modules. You mentioned your designs in architectures and modules, and they may be novel for some other conferences. However, in ICLR, we would like to see works that propose fundamentally different solutions and theoretically sound methods. Specifically, in the field of medical image segmentation, U-Net and Vision Transformers have been widely explored, so we would like to see some other newly-proposed architecture instead of U-Net and Vision Transformers, or some architectures that have not been applied to medical image segmentation. You utilized depth-wise convolutions to reduce Params and FLOPs, but this has been widely explored since ConvNeXt and UX-Net. The biggest selling point of your manuscript is the reduction of Params and FLOPs, which benefits from the utilization of depth-wise convolutions, but this solution is not fundamentally different from others. They incorporated depth-wise convolutions into the decoder, and you incorporated them into both the encoder and decoder. These two designs are fundamentally the same. We would like to see whether you can propose new convolutional operations instead of depth-wise convolutions to reduce Params and FLOPs. In ICLR, I believe improving performance is not the first priority, since improving performance is not very hard, and performance will be influenced by the training strategies and computational resources. Instead, we would like to see solutions that tackle problems in a fundamentally different way, inspiring other readers to explore this field deeply. Your work may achieve a promising result from the engineering side. However, it doesn't tackle this problem fundamentally or provide more theoretical insights to others.\"}", "{\"title\": \"Response to the comments of Reviewer jk4q: Theoretical motivation, limitations, and future directions\", \"comment\": \"### **Q3. 
Third, while the proposed blocks\\u2014MKIRA, MKIR, MKDC, GAG, and CMFA\\u2014are illustrated in the Method section, the paper lacks sufficient theoretical or conceptual motivation for why these specific block designs should enhance segmentation performance.**\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the need for more theoretical or conceptual motivation for our proposed blocks. **UltraLightUNet\\u2019s design is grounded in clear theoretical concepts**, which are explained in the Method section and validated in the Ablation Study section:\\n\\n- **Theoretical Basis**: Most existing architectures in computer vision rely on theoretical concepts to justify their design choices (e.g., Vision Transformers focus on self-attention). Similarly, our approach employs the **multi-kernel trick** to improve segmentation performance and **depth-wise convolutions** for lightweight computation. While our contribution is not theoretical in nature, these foundational concepts ensure that **UltraLightUNet** achieves both high performance and extreme efficiency.\\n\\n- **Method Section**: We provided detailed descriptions of how the MKIR and MKIRA blocks integrate **multi-kernel depth-wise convolutions (MKDC)** and **Convolutional Multi-Focal Attention (CMFA)** to enhance segmentation performance. MKDC adapts to diverse spatial contexts, while CMFA focuses on refining critical features.\\n\\n- **Empirical Validation**: Our Ablation Studies (**see Tables 4, 5, 6, 7, and 8 in the Sections 5.1, 5.2, A.4, A.5, and A.6 of our initial submission**) quantify the contributions of each block, thus demonstrating their impact on segmentation accuracy and computational efficiency. These results validate the theoretical motivation underpinning the module designs.\\n\\n\\n### **Q4. Lastly, the paper does not discuss the limitations of UltraLightUNet. A dedicated discussion on limitations would provide readers with a more balanced understanding of the model\\u2019s practical use and potential future directions for research. Typically, there is no free lunch. Are there any limitations or tradeoffs of the method? Providing those will be helpful for readers.**\\n\\nWe appreciate the reviewer\\u2019s suggestion to discuss the limitations of **UltraLightUNet** for a balanced understanding of its practical use. **UltraLightUNet**\\u2019s focus on extreme lightweight efficiency occasionally results in slightly lower performance compared to SOTA methods on complex datasets (e.g., Synapse in Tables 2 and 12). This tradeoff aligns with our goal of addressing resource constraints in real-time and point-of-care scenarios. In the revised manuscript, we include a dedicated discussion on limitations (in **Appendix A.12**), highlighting the efficiency-performance tradeoff, domain-specific optimizations, and the potential to explore hybrid architectures for challenging tasks. We also discuss future directions, including extending **UltraLightUNet** to tasks like 2D/3D image reconstruction, translation, enhancement, and denoising. For the reviewer\\u2019s convenience, we reproduce below the paragraph we add as an independent subsection in our revised draft (in **Appendix A.12**).\\n\\n**\\u201cLimitations and Future Directions:** While **UltraLightUNet** excels in computational efficiency, its focus on extreme lightweight design occasionally results in slightly lower performance compared to SOTA methods on complex datasets (e.g., Synapse). 
This tradeoff reflects its primary goal of addressing resource constraints in real-time and point-of-care applications.\\n\\nFuture work will explore hybrid architectures that combine lightweight and high-capacity components to handle challenging tasks without sacrificing efficiency. Additionally, strategies like self-supervised pretraining and domain-specific optimizations can enhance its performance further. We also plan to extend **UltraLightUNet** to other dense prediction tasks, such as 2D/3D image reconstruction, translation, enhancement, and denoising. This opens pathways to broaden the **UltraLightUNet**\\u2019s applicability across various computer vision tasks.\\u201d\"}", "{\"comment\": \"**New References in our Rebuttal**\\n\\nLiu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T. and Xie, S., 2022. A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11976-11986). \\n\\nLee, H.H., Bao, S., Huo, Y. and Landman, B.A., 3D UX-Net: A Large Kernel Volumetric ConvNet Modernizing Hierarchical Transformer for Medical Image Segmentation. In The Eleventh International Conference on Learning Representations. \\n\\nRahman, M.M., Munir, M. and Marculescu, R., 2024. Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11769-11779). \\n\\nMehta, S. and Rastegari, M., MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. In International Conference on Learning Representations. \\n\\nHatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H.R. and Xu, D., 2021, September. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI brainlesion workshop (pp. 272-284). Cham: Springer International Publishing. \\n\\nPang, Y., Liang, J., Huang, T., Chen, H., Li, Y., Li, D., Huang, L. and Wang, Q., 2023. Slim UNETR: Scale hybrid transformers to efficient 3D medical image segmentation under limited computational resources. IEEE Transactions on Medical Imaging. \\n\\nChen, S., Xie, E., Ge, C., Liang, D.Y. and Luo, P., 2022. Cyclemlp: A MLP-like architecture for dense prediction. In International Conference on Learning Representation (ICLR), Oral. IEEE.. \\n\\nDosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S. and Uszkoreit, J., 2020, October. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations. \\n\\nSimonyan, K., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. (https://arxiv.org/pdf/1409.1556) \\n\\nValanarasu, J.M.J. and Patel, V.M., 2022, September. Unext: Mlp-based rapid medical image segmentation network. In International conference on medical image computing and computer-assisted intervention (pp. 23-33). Cham: Springer Nature Switzerland. \\n\\nRuan, J., Xie, M., Gao, J., Liu, T. and Fu, Y., 2023, October. Ege-unet: an efficient group enhanced unet for skin lesion segmentation. In International conference on medical image computing and computer-assisted intervention (pp. 481-490). Cham: Springer Nature Switzerland. \\n\\nTan, M. and Le, Q., 2019, May. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105-6114). PMLR. 
\\n\\nLiu, Y., Zhu, H., Liu, M., Yu, H., Chen, Z. and Gao, J., 2024, March. Rolling-Unet: Revitalizing MLP\\u2019s Ability to Efficiently Extract Long-Distance Dependencies for Medical Image Segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 4, pp. 3819-3827). \\n\\nJiao, X., Yin, Y., Shang, L., Jiang, X., Chen, X., Li, L., Wang, F. and Liu, Q., 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. \\n\\nHatamizadeh, A., Tang, Y., Nath, V., Yang, D., Myronenko, A., Landman, B., Roth, H.R. and Xu, D., 2022. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 574-584).\"}", "{\"title\": \"Official Comment by Authors: Part4\", \"comment\": \"### **C3. Although the authors did not inspired by the ConvNeXt, its many variants and its way to using depth-wise convolutions to design lightweight modules have been widely explored in both general computer vision tasks and medical image analysis tasks. Thus, it is necessary to discuss them. Moreover, incorporating one or two more layers to your module is not a novel design and does not make impact even though your modules have a new name.**\\n\\n**Response:** \\n\\nWe thank the reviewer for their follow-up comments and the opportunity to clarify our contributions further. Below, we address the concerns raised. \\n \\n\\n### **Addressing Similarities with ConvNeXt**\\n\\nWhile ConvNeXt and its variants have explored depth-wise convolutions for lightweight module design, we emphasize that our work is not inspired by ConvNeXt. Our **Multi-Kernel Inverted Residual (MKIR)** and **Multi-Kernel Inverted Residual Attention (MKIRA)** blocks address the specific challenges of medical image segmentation in resource-constrained environments, a focus completely new compared to the ConvNeXt\\u2019s general-purpose design.\", \"key_differences_include\": \"1. Multi-Kernel vs. Large-Kernel Design: ConvNeXt relies on large-kernel depth-wise convolutions (e.g., $7 \\\\times 7$) to capture global context. In contrast, our **multi-kernel approach** supports both $k_1\\u200b=k_2\\u200b$ (same-size kernels) and $k_1 \\\\neq k_2$\\u200b (different-size kernels), enabling adaptable feature extraction for segmenting objects of varying sizes. This adaptability is critical for diverse segmentation tasks where both local and global features are essential (see the concrete example in our response to reviewer **jeKK** of this rebuttal). \\n\\n2. Application-Specific Challenges: Our design explicitly targets medical image segmentation with extreme efficiency, focusing on lightweight computation for *real-time applications* like point-of-care diagnostics. ConvNeXt\\u2019s design neither addresses these specific constraints, nor provides 3D extensions for volumetric tasks. \\n \\n\\n### **Beyond Incremental Layer Design** \\n\\nWe understand the concern about adding layers not being inherently novel. However, our contributions go beyond incremental changes by introducing: \\n\\n**1. Conceptually New Modules:** \\n\\n- **MKIR Block** integrates multi-kernel depth-wise convolutions with an inverted residual structure, ensuring lightweight and efficient feature extraction. The Equation 2 is new and provides a better mathematical basis for efficient computations compared to SOTA; this is what makes the MKIR block new. 
\\n\\n- **MKIRA Block** integrates CMFA with MKIR blocks for **multi-kernel refinement of attention**, thus offering a lightweight yet effective attention mechanism distinct from existing designs. The integration of these blocks addresses both accuracy and efficiency objectives in novel ways. \\n\\n**2. 3D Extensions:** All our modules, including MKIR and MKIRA, are extended to 3D for volumetric medical image segmentation, introducing entirely new designs tailored for 3D tasks. This is a significant advancement not present in ConvNeXt or other similar works like UNeXt, EGE-UNet, and Rolling-UNet. \\n\\n\\n### **Comparing Contributions with 3D UX-Net (ICLR 2023)**\\n\\nThe reviewer\\u2019s concerns about architectural novelty highlight the importance of placing our contributions in the proper context. For example: \\n\\n- **3D UX-Net** (ICLR 2023) introduces a 3D extension of ConvNeXt\\u2019s large-kernel module, but retains a **computationally heavy decoder** directly adapted from SwinUNETR (Hatamizadeh et al., 2021) and UNETR (Hatamizadeh et al., 2022). It focuses solely on 3D segmentation tasks without lightweight optimizations for resource-constrained scenarios. \\n\\n- In contrast, our **UltraLightUNet** provides both **2D and 3D lightweight designs**, with entirely new 3D versions of our modules (MKIR, MKIRA, GAG, etc.). Our contributions enable both 2D and 3D segmentation with significantly lower computational costs while achieving high accuracy, making our work eminently suited for real-time applications. \\n\\nWe hope these clarifications, along with the 3D-specific contributions and comparisons, address the reviewer\\u2019s concerns and highlight the novelty and impact of our work. Thank you again for your thoughtful feedback and consideration.\"}", "{\"title\": \"Response to the comments of Reviewer zJLj: Experimental results are limited (Part2)\", \"comment\": \"### **Q2.2. Second, the comparison in Synapse, MSD prostate, and FETA is insufficient. Synapse is a popular benchmark, but only a few baseline methods were reported. Only seven 3D methods proposed before 2022 were compared in MSD prostate and FETA. However, 3D segmentation networks between 2023 and 2024 were not compared, and these networks usually achieve more superior performance with lower computational complexity.**\\n\\n---\\n\\n**Response:**\\n\\nWe understand the reviewer\\u2019s concerns regarding the thoroughness of our comparisons, particularly with recent 3D segmentation methods (2023\\u20132024).\\n\\n- **For Synapse**, we already compared UltraLightUNet with **12 baseline methods for 2D segmentation** (Table 2) and **7 baseline methods for 3D segmentation** (Table 12). These comparisons include well-established baselines (e.g., TransUNet, SwinUNet, nn-Former, SwinUNETR) and more recent methods like **3D UX-Net** (Lee et al., 2022), **CMUNeXt** (Tang et al., 2023), and **Rolling-UNet** (Liu et al., 2024).\\n\\n- Additionally, we have included results for a recent 3D method, **SlimUNETR (Pang et al., 2023)**, in **Table R4 below**, which shows that **UltraLightUNet3D-S** achieves **10.19% higher DICE** score on Task05 Prostate, **0.17% higher on FETA**, **1.47% higher on Synapse 8-organ**, and **2.25% higher on Synapse 13-organ** while using **9.9x fewer #parameters and 5.9x fewer #FLOPs** than SlimUNETR. This comparison highlights the better performance and efficiency tradeoff achieved by UltraLightUNet.\\n\\n**Table R4**: Comparison of UltraLightUNet with SlimUNETR (Pang et al., 2023) [6]. 
We present the DICE scores (%) on our data-splits with an input resolution of 96x96x96, while optimizing the hyper-parameters of SlimUNETR.\\n\\n| Architectures | #Params | #FLOPs | Task05_Prostate | FETA | Synapse 8-organ | Synapse 13-organ |\\n|---------------------------|-------------|-----------|-----------------|-------|-----------------|------------------|\\n| SlimUNETR | 1.78M | 11.99G | 59.01 | 86.98 | 80.42 | 72.56 |\\n| **UltraLightUNet3D-S (Ours)** | **0.163M** | **2.03G** | **69.20** | 87.15 | 81.89 | 74.81 |\\n| UltraLightUNet3D (Ours) | 0.453M | 3.42G | 70.52 | 87.92 | 81.87 | 76.33 |\\n| UltraLightUNet3D-M (Ours) | 1.42M | 7.1G | 71.51 | **88.40** | **82.58** | **77.46** |\\n\\n### **Q2.3. Additionally, the performance reported for these baseline methods in this paper is much lower than the performance in the original paper. For example, Swin Unet reported 79.13 in their paper [5], but only 77.58 was reported for it in this manuscript.**\\n\\n---\\n\\n**Response:**\\n\\nWe appreciate the reviewer\\u2019s attention to discrepancies in reported baseline performance (e.g., **Swin Unet: 77.58 vs. 79.13**).\\n\\n**Clarification**: The reported results for **Swin Unet** and **TransUNet** in our manuscript were taken directly from the **CASCADE paper [3]**, ensuring consistency across all the reported baselines. These results may differ from those reported in the original papers due to differences in the experimental setups, such as data splits or preprocessing strategies. To maintain consistency in evaluation, we opted not to re-train these baselines ourselves.\\n\\nTo address this reviewer\\u2019s concerns (**Q2**), we will make the following updates in the revised manuscript:\\n\\n- **Training and Test Time Reporting**:\\n - Include a detailed comparison of training and inference times across UltraLightUNet and baseline methods on the same hardware platform.\\n - Discuss the trade-offs between computational complexity, FLOPs, and training/inference time for depth-wise convolutions versus standard convolutions.\\n\\n- **Expanded Comparisons**:\\n - Emphasize the addition of **SlimUNETR** results, highlighting that UltraLightUNet outperforms SlimUNETR both in performance and efficiency.\\n\\n- **Clarifying Baseline Performance**:\\n - Explicitly state that results for certain baselines (e.g., SwinUNet, TransUNet) are taken from the **CASCADE paper [3]** and explain the potential experimental setup differences.\"}", "{\"title\": \"Response to the comments of Reviewer zJLj: Overall impact\", \"comment\": \"### **Q6. The overall impact is low. The overall improvement in the segmentation performance is low. For example, its best DSC score in the Polyp dataset was 93.48, but other baselines achieved 93.29 and 93.18. Its best DSC score in the Synapse dataset was 78.68, but other baselines achieved 78.40.**\\n\\nWe respectfully disagree with this assessment. While the performance gains in DICE scores may appear small, the **key impact** of our method lies in achieving these DICE results with **extreme computational efficiency**:\\n\\n- **Polyp Dataset**: UltraLightUNet achieves **93.48% DICE** with only **0.316M parameters and 0.314G FLOPs**, compared to UACANet (93.29%, 69.16M params, 31.51G FLOPs) and TransUNet (93.18%, 105.32M params, 38.52G FLOPs). 
UltraLightUNet is **219x smaller** and **122x more efficient** than UACANet.\\n\\n- **Synapse Dataset**: UltraLightUNet achieves **78.68% DICE** with **0.316M params and 0.257G FLOPs** (224x224 input), compared to DeepLabv3+ (78.40%, 39.76M params, 11.456 FLOPs) and TransUNet (78.40%, 105.32M params). UltraLightUNet uses **125x fewer parameters** and **44x fewer FLOPs** than DeepLabv3+.\\n\\nThese are **orders of magnitude improvements** across the board! These results demonstrate UltraLightUNet\\u2019s significant impact in resource-constrained settings, achieving competitive performance with only a **fraction** of the computational cost.\\n\\nThank you for your constructive feedback.\\n\\n\\n### **References** \\n\\n[1] Lin, W., Wu, Z., Chen, J., Huang, J., & Jin, L. (2023). Scale-aware modulation meet transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6015-6026). \\n\\n[2] Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV) (pp. 3-19). \\n\\n[3] Rahman, M. M., & Marculescu, R. (2023). Medical image segmentation via cascaded attention decoding. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 6222-6231). \\n\\n[4] Rahman, M. M., Munir, M., & Marculescu, R. (2024). Emcad: Efficient multi-scale convolutional attention decoding for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11769-11779). \\n\\n[5] Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., & Wang, M. (2022, October). Swin-unet: Unet-like pure transformer for medical image segmentation. In European conference on computer vision (pp. 205-218). Cham: Springer Nature Switzerland. \\n\\n[6] Pang, Y., Liang, J., Huang, T., Chen, H., Li, Y., Li, D., ... & Wang, Q. (2023). Slim UNETR: Scale hybrid transformers to efficient 3D medical image segmentation under limited computational resources. IEEE Transactions on Medical Imaging \\n\\n[7] Tang, F., Ding, J., Quan, Q., Wang, L., Ning, C., & Zhou, S. K. (2024, May). Cmunext: An efficient medical image segmentation network based on large kernel and skip fusion. In 2024 IEEE International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE. \\n\\n[8] Yang, S., Zhang, X., Chen, Y., Jiang, Y., Feng, Q., Pu, L., & Sun, F. (2023). UcUNet: A lightweight and precise medical image segmentation network based on efficient large kernel U-shaped convolutional module design. Knowledge-Based Systems, 278, 110868. \\n\\n[9] He, Y., Gao, Z., Li, Y., & Wang, Z. (2024). A lightweight multi-modality medical image semantic segmentation network base on the novel UNeXt and Wave-MLP. Computerized Medical Imaging and Graphics, 111, 102311. \\n\\n[10] Lin, X., Yu, L., Cheng, K. T., & Yan, Z. (2023). BATFormer: Towards boundary-aware lightweight transformer for efficient medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 27(7), 3501-3512. \\n\\n[11] Yin, Y., Han, Z., Jian, M., Wang, G. G., Chen, L., & Wang, R. (2023). AMSUnet: A neural network using atrous multi-scale convolution for medical image segmentation. Computers in Biology and Medicine, 162, 107120.\"}" ] }
BeT8QvxCk2
No more hard-prompts: SoftSRV prompting for synthetic data generation
[ "Giulia DeSalvo", "Jean-François Kagy", "Lazaros Karydas", "Afshin Rostamizadeh", "Sanjiv Kumar" ]
We present a novel soft-prompt based framework, SoftSRV, that leverages a frozen pre-trained large language model (LLM) to generate targeted synthetic text sequences. Given a sample from the target distribution, our proposed framework uses data-driven loss minimization to train a parameterized "variable" soft-prompt. This soft-prompt is then used to steer the frozen LLM to generate synthetic sequences that are similar to the target distribution. We argue that SoftSRV provides a practical improvement over common hard-prompting approaches that rely on human-curated prompt-templates, which can be idiosyncratic, labor intensive to craft, and may need to be specialized per domain. We empirically evaluate SoftSRV and other baselines, using a frozen large decoder-only model to generate synthetic fine-tuning data for a small Gemma model. To test generality, we evaluate across three different domains (coding, math, reasoning) without any particular specialization to each domain. In this challenging setting, SoftSRV significantly improves upon hard-prompt baselines, generating data with superior fine-tuning performance and that better matches the target distribution according to the MAUVE similarity metric.
[ "Synthetic Data Generation", "Language Models", "LLMs", "Fine-tuning" ]
Reject
https://openreview.net/pdf?id=BeT8QvxCk2
https://openreview.net/forum?id=BeT8QvxCk2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vfJwbVfX9p", "tWLzDZtImH", "nfggA0bURm", "hSyf09nM3Y", "hI4jbxSzjQ", "eAwK12dRUI", "dbTJzfD292", "dSA95WuLdG", "ZFPmBjtDC7", "XVsHfjWqag", "RsDLhxiHUw", "GwTalbdHxY", "GTkmr1Ulqw", "6iz9OL9wZl", "0X6ApQRslG", "0AUVqApUBP" ], "note_type": [ "official_review", "official_review", "meta_review", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730683086997, 1730300312759, 1734725440551, 1732570624112, 1730640714532, 1729167136875, 1737524133129, 1732571106453, 1732726269277, 1730662549254, 1732569915681, 1732570050356, 1732623205756, 1732631535044, 1732574280112, 1732778290947 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_zpvo" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_pGWo" ], [ "ICLR.cc/2025/Conference/Submission11598/Area_Chair_Zh2D" ], [ "ICLR.cc/2025/Conference/Submission11598/Authors" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_MiC5" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_5RCw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11598/Authors" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_zpvo" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_NGUv" ], [ "ICLR.cc/2025/Conference/Submission11598/Authors" ], [ "ICLR.cc/2025/Conference/Submission11598/Authors" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_MiC5" ], [ "ICLR.cc/2025/Conference/Submission11598/Authors" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_NGUv" ], [ "ICLR.cc/2025/Conference/Submission11598/Reviewer_pGWo" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces SoftSRV, a novel framework that uses soft prompts to generate synthetic training data using frozen large language models (LLMs). Rather than relying on manually crafted hard prompts, SoftSRV learns parameterized \\\"contextual\\\" soft prompts through data-driven optimization. The authors evaluate three variants of their approach (SSNSP, SSMPk, SSMC) across different domains (coding, math, reasoning) and show superior performance compared to hard-prompting baselines when using the generated data to fine-tune smaller models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Requires minimal human intervention\", \"Introduces contextual conditioning for better distribution matching\", \"Outperforms hard-prompting baselines across multiple domains\", \"Shows better distribution matching (MAUVE scores)\", \"Supports different soft prompt architectures\", \"Demonstrates practical alternative to manual prompt engineering\"], \"weaknesses\": \"1. Domain-Agnostic Parameters: Assumes fixed hyperparameters (prompt length=128, 20K training steps, learning rate=5e-6) work across domains\\n\\n2. Sufficiency of Context Vector: Assumes the context vector derived from an example sequence captures enough information to generate meaningful variations\\n\\n3. Small Training Sample Sensitivity: For datasets with small training sets (like MBPP with only 384 examples), more complex SoftSRV variants perform worse than simpler ones, suggesting the approach may be sensitive to training sample size.\\n\\n4. Task Complexity Impact: The approach appears less effective for more complex tasks like BoolQ that require generating longer passages and more diverse content. 
The authors note this is \\\"perhaps the most difficult task to generate synthetic data for.\\\"\\n\\n5. No Direct Performance Indicator: The authors note that the MAUVE similarity score they use to measure distribution matching is \\\"not a direct indicator of downstream fine-tuning performance,\\\" suggesting a lack of clear metrics to predict effectiveness.\\n\\n6. Problem Setup Limitations:\\n- Assumes fixed maximum sequence length m (Section 2, pg 2)\\n- Restricts to scenarios where input and output sequences have equal length\\n\\n7. Methodological Concerns:\\n- Relies heavily on a \\\"lossy\\\" sequence embedder without strong justification\\n- No clear guidance on how to select the degree of \\\"lossiness\\\"\\n\\n8. Validation Gaps:\\n- Initial results focused on only three domains (coding, math, reasoning)\\n- No clear guidelines for choosing between different variants (SSNSP, SSMPk, SSMC)\\n\\n9. Heavy reliance on MAUVE score which is acknowledged to not directly indicate downstream performance\\n\\n10. Comparison Scope:\\n- Primarily compares against hard-prompting baselines\\n- Limited comparison with other synthetic data generation approaches\\n- No comparison with other parameter-efficient tuning methods\", \"questions\": \"1. How sensitive is the approach to the quality and diversity of the initial sample data from the target distribution?\\n2. What is the minimal sample size needed for effective training across different domains?\\n3. How does the choice of sequence embedder affect performance? \\n4. How well does the approach handle very specific or niche domains not well-represented in the LLM's training data?\\n5. Why choose MLPs for the SSMC variant?\\n6. How was the number of basis prompts (k=2) chosen for SSMPk? What's the tradeoff between k and performance?\\n7. How robust is the approach to different random seeds and initialization?\\n9. How does it compare to other synthetic data generation approaches beyond hard prompting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, authors want to introduce a data synthetic generation method by using a soft-prompt based framework. It aims to train a parameterized soft-prompt by using data-driven loss minimization, and thus synthesize sequences to satisfy the target distribution $D$. Specifically, they found that the parameterized families of soft prompt can be conditioned on an input context and can fit the target distribution. Experimental results indicate that the proposed method can improve the fine-tuned model to achieve better performance based on the synthetic data.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Strengths**\\n\\n1. This paper introduces how to train soft prompts to build a data synthetic generation to fit target distribution.\\n2. Experimental results demonstrate the effectiveness of the proposed method can fit the target distribution.\", \"weaknesses\": \"**Weaknesses**\\n\\n1. To build such a data synthetic generation, the proposed method first needs to obtain a lot of samples from the target distributions to train the soft prompt. So, which is advantage to use the proposed method as directly use hard prompt does not require any target samples for training?\\n2. The generalization of the proposed method is also a concern. 
It seems if we want to use a different LLM (e.g., LLama-70B) for data generation, we also need to train a corresponding model. Therefore, the proposed method (i.e., the trained soft prompt) cannot be adapted to different LLMs, while hard prompt does not have such a concern.\\n3. Experiments on more LLMs are required. In this paper, authors only use Gemma-2B as a backbone network.\\n4. In authors' setting, training SoftSRV also require some training (e.g., 20K steps). So what happened if our downstream tasks do not have enough data for training?\\n5. In this paper, authors only choose three standard datasets (i.e., Code, Math and Reasoning) for generation. Do you try some other open-domain scenarios, like some datasets which are not in question-answer format (e.g., Chat)?\", \"questions\": \"Please see my comments on Weaknesses. I am willing to increase my score if authors can address my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a simple approach for synthetic data generation which learns soft prompts on a small amount of data that can subsequently be used for data generation. On the positive side, the approach is simple and is coupled with some empirical results that show that it may be promising. On the negative side, it is unclear how practical the approach is, whether it would be generalizable, and whether the evaluation of the method (e.g., using MAUVE) is reliable.\", \"additional_comments_on_reviewer_discussion\": \"Many reviewers pointed out that this approach is not too novel (given the extensive line of work on prompt tuning) and would be unlikely to generalize to other tasks/datasets. All reviewers except for one chose to maintain their score after engaging with the authors.\"}", "{\"comment\": \"* Question: \\\"The method shows limited novelty.... A model fine-tuned on ~20K samples is naturally better aligned with these domains than zero-shot\\\"\\n\\nWe are not comparing to the zero-shot setting. All methods, including the hard-prompt baselines, are fully fine-tuned on the same amount of synthetic data (see Figure 2). The comparison is between fine-tuning on synthetic data generated by hard prompting methods versus by SoftSRV methods. \\n\\nPerhaps the reviewer\\u2019s question is about question generation? At question generation time, both hard prompting methods and SoftSRV use data from the benchmark's training fold. In particular, the hard-prompt seeds the prompt template with examples from the training fold. Please see Section 3.3 for more details. \\n\\n\\n* Question: \\\"Given this, the approach appears straightforward. Rather than fine-tuning a model for data generation, why not simply PEFT fine-tune the model directly on these tasks? As shown in Table 1, the \\\"train\\\" column demonstrates superior performance compared to the \\\"HP\\\" columns.\\\" \\n\\nIn Table 1, we are fully fine-tuning Gemma 2B where each column indicates the data we are using (e.g., original training data, synthetic data generated by HP, etc.). We find that fine-tuning using the SS_MC generated data performs better than fully fine-tuning on the original training data for all datasets except BOOLQ. See discussion in lines 357-363. In general, we expect that PEFT Gemma on the original training data would perform worse compared to fully fine-tuning Gemma on the original training data. Note also that HP columns are the competitor baseline methods. 
\\n\\nPerhaps the reviewer is asking about PEFT the large model to use on the downstream task directly? This is a different experiment and we are imagining a (typical) scenario where we want to use a small downstream model, for example, for serving efficiency. \\n\\nPerhaps the reviewer is asking about using PEFT for synthetic data generation? Please see discussion with Reviewer 5RCw.\\n\\n\\n* Question: \\\"Current comparisons are unfair. More baselines that incorporate training-based data generation methods are needed.\\\"\\n\\nThe hard prompting baselines do use the training data and we are not aware of other published baselines that are conceptually different than the ones we already tested.\\n\\n\\n\\n* Question: \\\"Testing with out-of-distribution (OOD) data instead of MBPP, GSM8K, and BoolQ could further validate the method\\u2019s robustness. The paper evaluates only one LM; assessing additional LMs could strengthen its claims.\\\"\\n\\nWe agree that evaluating on out-of-distribution domains and testing more LLMs is an important future direction, which we plan on pursuing.\"}", "{\"summary\": \"The paper introduces SoftSRV, a prompt-tuning framework designed to synthesize data with frozen LLMs. Unlike standard prefix-tuning prepending a soft prompt to an existing hard prompt, SoftSRV uses only the soft prompt as input context for the LLM. The paper presents three structures of SoftSRV: $SS_{NSP}, SS_{MPk}, SS_{MC}$. The authors then fine-tune SoftSRV on coding, mathematics, and reasoning domains to synthesize data. Experiments show that fine-tuning Gemma-2B on SoftSRV synthesized data outperforms zero-shot synthesized data on those domains.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The target problem is practical and useful.\", \"The proposed method is simple and intuitive.\", \"Experiments show the promise of the proposed method and the improvements are also very intuitive.\"], \"weaknesses\": [\"The method shows limited novelty. It is evident that generating data using a PEFT fine-tuned model on specific domains improves domain alignment, outperforming zero-shot data generation. A model fine-tuned on ~20K samples is naturally better aligned with these domains than zero-shot, which relies on only 0 sample.\", \"Given this, the approach appears straightforward. Rather than fine-tuning a model for data generation, why not simply PEFT fine-tune the model directly on these tasks? As shown in Table 1, the \\\"train\\\" column demonstrates superior performance compared to the \\\"HP\\\" columns.\", \"Testing with out-of-distribution (OOD) data instead of MBPP, GSM8K, and BoolQ could further validate the method\\u2019s robustness.\", \"The paper evaluates only one LM; assessing additional LMs could strengthen its claims.\", \"Current comparisons are unfair. More baselines that incorporate training-based data generation methods are needed.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes SoftSRV, which trains and adapts soft prompt in synthetic data generation, while previous works mainly generate pseudo data by manually hard prompts. Empirical results on diverse tasks show that the models trained by SoftSRV-generated data performs better than those trained on data generated by baseline hard-prompting approaches. 
Besides, SoftSRV always generates data that better matches the target distribution.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"* This paper is well written and the analysis of distribution matching is clear.\\n* The experimental results compared with hard prompt baselines are promising.\", \"weaknesses\": \"* SoftSRV is only applied to synthetic data generation; actually, the application can be more diverse and broader. What about the performance on some specific downstream tasks if we can train soft prompts on these tasks? Will it perform better than other soft prompts like Prefix-tuning[1], RLPrompt[2], P-tuning[3,4]?\\n* The paper mentioned that SoftSRV is different from previous soft prompt methods: rather than prepending parameters, it instead uses the soft prompt alone as input context to the LLM. But it does not give comparisons between SoftSRV and these soft prompt methods. What if those soft prompt methods were used in synthetic data generation? \\n* To some extent, this paper only adapts soft prompts in pseudo data generation, rather than proposing a brand new, novel method. The contribution might not meet the level of substantial novelty expected at a conference like ICLR. So if SoftSRV performs better than other soft prompt baselines on other downstream tasks instead of only data generation, I'll consider increasing my score. \\n\\n[1] Li, Xiang Lisa, and Percy Liang. \\\"Prefix-tuning: Optimizing continuous prompts for generation.\\\" arXiv preprint arXiv:2101.00190 (2021).\\n\\n[2] Deng, Mingkai, et al. \\\"Rlprompt: Optimizing discrete text prompts with reinforcement learning.\\\" arXiv preprint arXiv:2205.12548 (2022).\\n\\n[3] Liu, Xiao, et al. \\\"GPT understands, too.\\\" AI Open (2023).\\n\\n[4] Liu, Xiao, et al. \\\"P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks.\\\" arXiv preprint arXiv:2110.07602 (2021).\", \"questions\": \"* What about the performance on some specific downstream tasks of SoftSRV compared with other soft prompt methods?\\n* This paper only compares SoftSRV with hard prompt baselines on synthetic data generation. More comparisons with previous soft prompt methods applied in this data generation area are necessary.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"* Question: \\\"To build such a data synthetic generation, the proposed method first needs to obtain a lot of samples from the target distributions to train the soft prompt. So, which is advantage to use the proposed method as directly use hard prompt does not require any target samples for training?\\\"\\n\\nThe hard prompt methods do require target samples to populate the templates; that is, the templates use example questions that are taken directly from the training set. See Section 3.2 and Appendix B. To train SoftSRV, we use the same amount of training data as hard prompting methods. From mere 100-1000s of examples, we create 10x as many synthetic examples. \\n\\n* Question: \\\"The generalization of the proposed method is also a concern. 
Therefore, the proposed method (i.e., the trained soft prompt) cannot be adapted to different LLMs, while hard prompt does not have such a concern.\\\"\\n\\nThis is true, but also would be true of any training based approach. Even for hard-prompts, one would potentially need to refine the hard-prompt for each model type to match the idiosyncrasies of each model. This can be due to, for example, different instruction tuning procedures for each model.\\n\\n* Question: \\\"In authors' setting, training SoftSRV also require some training (e.g., 20K steps). So what happened if our downstream tasks do not have enough data for training?\\\"\\n\\nWe find that 100-1000's of examples is sufficient to train SoftSRV. Note that the hard-prompt baselines also use this data to seed the hard prompts templates. \\n\\n\\n* Question: \\\"Experiments on more LLMs are required. ... Do you try some other open-domain scenarios, like some datasets which are not in question-answer format (e.g., Chat)?\\\" \\n\\n We agree that trying open-domain scenarios and testing more LLMs is an important future direction, which we plan on pursuing.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your responses. However, I would like to keep my score.\"}", "{\"summary\": \"This paper suggests an alternative to hand engineering/crafting prompts for designing synthetic data, which is based on soft prompting to learn soft tokens or embedding strategies that minimize NLL on the small amount of human data available, and then generating conditioned on those soft tokens.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper addresses a very real problem (the generation of synthetic data) in a novel way. Their approach is fairly rigorous and the experimental sections contain lots of information; for instance, they report on three variations of their idea (SS_{NSP}, SS_{MPK}, SS_{MC}) and compare performance on a diverse number of metrics (including both downstream task performance after finetuning on the dataset and the human baseline coverage.\\n\\nThe experiments demonstrate quite convincingly that the approach does better than hand-engineering prompts to generate this synthetic data **when enough human examples are available to tune the prompts**. Furthermore, it is clear that this is a more \\\"sustainable\\\" approach, as it is not feasible to generate hand-engineered prompts for every domain if you have many domains you are perhaps interested in.\", \"weaknesses\": \"I could be wrong, but it seemed to me like they only applied this in domains where it was possible to learn an entire neural network to maximize the likelihood of the \\\"real\\\" data without overfitting, which I imagine is only the case when you have a lot of human data. To me, this seems like the least likely instance where you'd need to do synthetic generation. Thus, while I'm convinced the application is real and the empirical results are valid, I would imagine (but would be happy to be proven wrong) that the intersection of problems where you **could** apply this approach and problems where you'd **need** to apply this approach is fairly limited.\\n\\n\\nFurthermore, I see no novel algorithms or mathematical foundations in this paper. It is a very straightforward application of an approach developed by Lester et al and Li and Liang (who really should be cited), just to this novel task. 
And, since, as I pointed out earlier, there may not be many use cases for this task, it may be that the only reason no one has done this yet is that it isn't a very useful thing to do in practice. However, again, I'd be happy to be proven wrong.\", \"questions\": \"What is the minimum amount of human data you need to make this approach not overfit?\\n\\nHow would you generalize this approach to when you only have `k` examples of human data? (This is asking a bit much, but I'm curious if you've thought about it)\\n?\\nDo you see your paper as having novel mathematical contributions, and, if so, what are they?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Can standard soft-prompt methods be used for synthetic data generation?\", \"comment\": \"We want to stress that standard soft-prompt methods (e.g. prefix-tuning) are not directly applicable to synthetic data generation. Concretely, consider a standard soft-prompt method trained on a question/answer task, the standard soft-prompt methods will try to generate the answer given the question as opposed to generating a similar question on the same topic. In order to generate a similar question on topic using standard soft-prompt methods, one would, for example, need to create a dataset with pairs of similar questions and then train a soft-prompt method on this new dataset. Building a dataset of paired examples is challenging due to a potential scarcity of similar examples and difficulty in defining a notion of similarity.\\n\\nOur goal is to develop a synthetic data generation framework and targeting standard soft-prompt setting is beyond the scope of this paper.\"}", "{\"comment\": \"* Question: \\u201conly [useful] case when you have a lot of human data\\u201d\\n\\nThe parametrized soft-prompts have a relatively small number of parameters, compared to the frozen LM, and can be trained with a small amount of data (even just hundreds of examples). Specifically, MBPP only has 384 training examples while GSM8K and BOOLQ have ~7K and ~9K training examples respectively. Once the parameterized soft-prompt is trained, we then greatly expand the fine-tuning set generating tens of thousands of synthetic examples. Thus, we view SoftSRV as very useful in the data scarce setting.\\n\\n\\n* Question: \\u201capplication of Lester et al and Li and Liang (who really should be cited)\\u201d\\n\\nIndeed we meant to cite Li and Liang, this was an oversight that is now remedied. However, we do not view it as a straightforward application, given that we introduce the idea of parameterized *contextual* soft-prompts that allows us to generate a more diverse and representative set of synthetic data. Additionally, unlike Lester et al and Li and Liang, we are not pre-appending our soft-prompts to the input, but are effectively replacing the input with the soft-prompts, see Figure 1. Lastly, the standard soft-prompt methods are not directly applicable to synthetic data generation. See discussion with Reviewer 5RCw.\\n\\n\\n* Question: \\u201cminimum amount of human data\\u201d\\n\\nWe find, in our experimental setting, hundreds to a few thousand examples were sufficient, even for the most complex soft prompt family SS_MC. 
As mentioned above, MBPP only has 384 training examples.\\n\\n\\n* Question: \\\"Do you see your paper as having novel mathematical contributions, and, if so, what are they?\\\"\\n\\nTheoretically, it would be interesting to show that generating a sample from the SoftSRV framework is close to drawing a sample from the target distribution. Previous work has shown that the soft-prompt guides the LLM towards the target task, but arguing that the generated data distribution is close to the target distribution is a more difficult task.\"}", "{\"comment\": \"* Question: \\\"Small Training Sample Sensitivity: For datasets with small training sets (like MBPP with only 384 examples), more complex SoftSRV variants perform worse than simpler ones, suggesting the approach may be sensitive to training sample size.\\\"\\n\\nThe results in Figure 2 demonstrate that the most expressive SoftSRV variant, SS_MC, consistently exhibits superior finetuning performance, irrespective of the training set size. While Section 3.7 highlights that certain SoftSRV variants achieve improved MAUVE scores compared to SS_MC, this analysis is secondary to our primary objective of enhancing finetuning performance. \\n\\nThe goal of Section 3.7 is to show that SoftSRV methods result in synthetic data that is more aligned with the underlying true distributions compared to hard-prompted methods (HP, HP_SR), since SoftSRV directly optimizes a data-driven objective guiding the pre-trained model towards the target distribution. \\n\\n* Question: \\\"No Direct Performance Indicator: The authors note that the MAUVE similarity score they use to measure distribution matching is \\\"not a direct indicator of downstream fine-tuning performance,\\\" suggesting a lack of clear metrics to predict effectiveness.\\\"\\n\\nThe writing in that paragraph is convoluted and has misled the reviewer. We will make sure to revise it. Here, we wanted to point out that MAUVE should only be used as a tool for a secondary analysis since it is not a direct indicator of downstream performance. In the previous sections, we do show that SoftSRV admits direct downstream performance improvements on the test set \\u2013 please see Figure 2 and Table 1. \\n\\n* Question: \\\"Problem Setup Limitations: Assumes fixed maximum sequence length m (Section 2, pg 2). Restricts to scenarios where input and output sequences have equal length\\\"\\n\\nAll LLMs have some maximum length in practice. Input and output sequences do not need to be the same length, and in Section 2 we assume them to have equal length only for notational simplicity, without loss of generality. Note that, in our experiments, input and output sequences are not necessarily the same. \\n\\n\\n* Question: \\\"Comparison Scope: Primarily compares against hard-prompting baselines. Limited comparison with other synthetic data generation approaches. No comparison with other parameter-efficient tuning methods\\\"\\n\\nAll prior synthetic data generation approaches that have shown promising results are based on hard prompting. We would be grateful for the reviewer to point us to other promising approaches in this setting. \\n\\nThis paper focuses on developing a new data-driven framework for generating synthetic data. 
While SoftSRV could potentially be applied to parameter-efficient tuning methods, exploring that application is outside the scope of our current research. Note that parameter-efficient tuning methods cannot be directly applied to targeted data generation. See discussion with Reviewer 5RCw.\\n\\n\\n* Question: \\\"Validation Gaps: Initial results focused on only three domains (coding, math, reasoning). No clear guidelines for choosing between different variants (SSNSP, SSMPk, SSMC).\\\" \\n\\nThe SS_MC method outperforms the other SoftSRV methods on all three domains and hence our suggestions is to use this method. We will clarify this in the paper. \\n \\n* Question: Reviewer asks a series of questions about SoftSRV hyperparameters & empirical setup (prompt length, training steps, learning rate, quality and diversity of the initial sample, minimal sample size, choice of sequence embedder, niche domains, why use MLPs, basis prompts for SSMPk, random seeds, and initializations). \\n\\n While tuning hyperparameters for SoftSRV (e.g. choice of embedder, SoftSRV parametrizations, etc.) or varying the underlying setting (e.g. training set size, random seeds/initiatilizations, etc) could yield interesting insights and further performance improvements, we of course have limited time and compute for the study and find our methods already outperform hard prompt baselines even when fixing these hyperparameters and experimental setup a priori.\\n\\n* Question: \\\"Sufficiency of Context Vector: Assumes the context vector derived from an example sequence captures enough information to generate meaningful variations\\\"\\n\\nWe showed empirically that the context vector contains sufficient information to beat hard-prompting baseline methods. It would indeed be interesting to conduct a theoretical analysis to further understand the sufficiency of context vectors.\\n\\n\\n* Question: \\\"Task Complexity Impact: The approach appears less effective for more complex tasks like BoolQ that require generating longer passages and more diverse content. The authors note this is \\\"perhaps the most difficult task to generate synthetic data for.\\\"\\n\\nYes, BoolQ is the hardest task among the three, but nevertheless we still find that SS_MC admits the best performance.\"}", "{\"title\": \"Thank you for your comment\", \"comment\": \"I have adjusted my score, as my concerns were answered somewhat effectively.\"}", "{\"comment\": \"Thanks for your answers. After reading the response, I will maintain my score as authors also admit the problem of generalization in the proposed method.\"}" ] }
BeOEmnmyFu
Playing Language Game with LLMs Leads to Jailbreaking
[ "Yu Peng", "Zewen Long", "Fangming Dong", "Congyi Li", "Shu Wu", "Kai Chen" ]
The advent of large language models (LLMs) has spurred the development of numerous jailbreak techniques aimed at circumventing their security defenses against malicious attacks. An effective jailbreak approach is to identify a domain where safety generalization fails, a phenomenon known as mismatched generalization. In this paper, we introduce two novel jailbreak methods based on mismatched generalization: natural language games and custom language games, both of which effectively bypass the safety mechanisms of LLMs; each comes in various kinds and variants, making them hard to defend against and leading to high attack success rates. Natural language games involve the use of synthetic linguistic constructs and the actions intertwined with these constructs, such as the Ubbi Dubbi language. Building on this phenomenon, we propose the custom language games method: by engaging with LLMs using a variety of custom rules, we successfully execute jailbreak attacks across multiple LLM platforms. Extensive experiments demonstrate the effectiveness of our methods, achieving success rates of 93% on GPT-4o, 89% on GPT-4o-mini and 83% on Claude-3.5-Sonnet. Furthermore, to investigate the generalizability of safety alignment, we fine-tuned Llama-3.1-70B with the custom language games to achieve safety alignment within our datasets and found that when interacting through other language games, the fine-tuned models still failed to identify harmful content. This finding indicates that the safety alignment knowledge embedded in LLMs fails to generalize across different linguistic formats, thus opening new avenues for future research in this area. Our code is available at https://anonymous.4open.science/r/encode_jailbreaking_anonymous-B4C4.
[ "large language model", "jailbreaking attack", "language game" ]
https://openreview.net/pdf?id=BeOEmnmyFu
https://openreview.net/forum?id=BeOEmnmyFu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oLTdtPNlto", "ZPdCAwYiDD", "Z9onpfZODW", "TQHWIEEUtN", "CzqgPxy620" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730445743678, 1730606951682, 1730060742754, 1731762829705, 1730515130594 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6076/Reviewer_MKGg" ], [ "ICLR.cc/2025/Conference/Submission6076/Reviewer_UM8Y" ], [ "ICLR.cc/2025/Conference/Submission6076/Reviewer_cdZ9" ], [ "ICLR.cc/2025/Conference/Submission6076/Authors" ], [ "ICLR.cc/2025/Conference/Submission6076/Reviewer_G43c" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies the usage of \\\"language games\\\" in jailbreaking large closed-source LLMs. Language games are essentially conversational games played with a LLM that follow some simple rules (e.g. character replacement, insertion, etc.) The paper demonstrates that through the usage of language games, a user may jailbreak existing models. The paper also finds that larger models are *more* susceptible to the attack than smaller ones.\\nFinally, the paper fine-tunes a Llama model to defend against the attack, and alarmingly demonstrates that fine-tuning against a single language game does not grant defense against other language games.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method is simple, and easy to follow. The method itself demonstrates a new class of easy to create jailbreaks which may be difficult to defend against. Moreover, the result that larger models are more susceptible to the attack is interesting, and presents a potential challenge for alignment of future models.\", \"weaknesses\": \"- I am extremely concerned about the novelty and evaluation of the attack. Prior work [1, 2, 4, 5] suggests different techniques to modify the prompt to jailbreak the model. While the paper cites several of these methods, it does not compare against them; it is unclear if the proposed attack is more concerning than those already presented in prior work. In particular, I would like to know if the claim made by the paper (that larger models are *more* susceptible to this class of attack) is true for other attacks as well (this could be a very interesting result!).\\n\\n- The evaluation also lacks comparison to a baseline (i.e. where no language game was used). It is unclear how susceptible the base models themselves are to the subset of prompts used during evaluation.\\n\\n- I am concerned about the usage of gpt-4o mini as an evaluator; Cross-validation of the judge compared to prior methods (e.g. MD-Judge from SALAD-Bench) would be helpful in establishing its correctness. Additionally, using closed-source models as evaluators may not be reproducible, as their behavior is black-box and may change over time [3]. \\n\\n- The results showing lack of generalization of the defense in fine-tuned Llama 70b are interesting! The paper claims that larger models are more susceptible, so it would be interesting to compare to a fine-tuned variant of a smaller model (e.g. Llama 3.1 8b). \\n\\n[1] Bianchi, Federico, Mirac Suzgun, Giuseppe Attanasio, Paul R\\u00f6ttger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. \\u201cSAFETY-TUNED LLAMAS: LESSONS FROM IMPROV- ING THE SAFETY OF LARGE LANGUAGE MODELS THAT FOLLOW INSTRUCTIONS,\\u201d 2024.\\n\\n[2] Deng, Yue, Wenxuan Zhang, Sinno Jialin Pan, and Lidong Bing. \\u201cMultilingual Jailbreak Challenges in Large Language Models.\\u201d arXiv, March 4, 2024. 
http://arxiv.org/abs/2310.06474.\\n\\n[3] Xie, Tinghao, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani Sehwag, Kaixuan Huang, Luxi He, et al. \\u201cSORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors.\\u201d arXiv, June 20, 2024. http://arxiv.org/abs/2406.14598.\\n\\n[4] Zeng, Yi, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi. \\u201cHow Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs.\\u201d arXiv, January 23, 2024. https://doi.org/10.48550/arXiv.2401.06373.\\n\\n[5] Zou, Andy, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. \\u201cUniversal and Transferable Adversarial Attacks on Aligned Language Models.\\u201d arXiv, December 20, 2023. https://doi.org/10.48550/arXiv.2307.15043.\", \"questions\": [\"I noticed the evaluation prompt used by gpt-4o mini first translates to Chinese (A.1) . Why is this needed? And does this impact the judge at all?\", \"Have you compared to a simple random baseline? (e.g. randomly modifying or inserting characters?)\", \"What happens to success rates when the language game prompt is not present?\", \"When fine-tuning the model, was the language game prompt included?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a jailbreak technique leveraging language games to obfuscate malicious prompts through, e.g., inserting \\\"ub\\\" or \\\"ob\\\" before syllable rimes. The authors propose both natural and custom language game variants, evaluating their approach on three language models using a single dataset, achieving jailbreak success rates up to 93%.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The proposed approach demonstrates that simple linguistic transformations can effectively circumvent LLM safety measures, providing valuable insights into model vulnerabilities.\", \"weaknesses\": \"1. The approach lacks systematic methodology, relying heavily on manual crafting of language games. A more principled framework for automatically generating and adapting language patterns would strengthen the contribution.\\n\\n2. The empirical evaluation requires significant expansion. The authors should include additional benchmarks beyond a single dataset (e.g., AdvBench[1]) to demonstrate generalizability, and evaluate on more models like Llama-2 to show broader applicability.\\n\\n3. While the fine-tuning experiments are valuable, baseline results on vanilla Llama3 would provide important context for understanding the effectiveness of this defense approach.\\n\\n4. The paper would benefit from comprehensive comparisons with current state-of-the-art attacks such as GCG[1], AutoDAN[2], PAIR[3], TAP[4], DeepInception[5], as well as how robust the proposed method is against defenses such as paraphrasing[6], SmoothLLM [7], Backtranslation[8]. \\n\\n[1] Andy Zou, Zifan Wang, J. Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial\\nattacks on aligned language models. CoRR, abs/2307.15043, 2023. doi: 10.48550/ARXIV.2307.\\n15043. URL https://doi.org/10.48550/arXiv.2307.15043.\\n\\n[2] Xiaogeng Liu, Nan Xu, Muhao Chen, and Chaowei Xiao. AutoDAN: Generating stealthy jailbreak\\nprompts on aligned large language models. In The Twelfth International Conference on Learning\\nRepresentations, 2024. 
URL https://openreview.net/forum?id=7Jwpw4qKkb\\n\\n[3] Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, and Eric Wong.\\nJailbreaking black box large language models in twenty queries. CoRR, abs/2310.08419, 2023.\", \"doi\": \"10.48550/ARXIV.2310.08419. URL https://doi.org/10.48550/arXiv.2310.\\n08419.\\n\\n[4] Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron\\nSinger, and Amin Karbasi. Tree of attacks: Jailbreaking black-box llms automatically. CoRR,\\nabs/2312.02119, 2023. doi: 10.48550/ARXIV.2312.02119. URL https://doi.org/10.\\n48550/arXiv.2312.02119\\n\\n[5] Xuan Li, Zhanke Zhou, Jianing Zhu, Jiangchao Yao, Tongliang Liu, and Bo Han. Deepinception:\\nHypnotize large language model to be jailbreaker. CoRR, abs/2311.03191, 2023. doi: 10.48550/\\nARXIV.2311.03191. URL https://doi.org/10.48550/arXiv.2311.03191.\\n\\n[6] Jain, Neel, et al. \\\"Baseline defenses for adversarial attacks against aligned language models.\\\" arXiv preprint arXiv:2309.00614 (2023). \\n\\n[7] Robey, Alexander, et al. \\\"Smoothllm: Defending large language models against jailbreaking attacks.\\\" arXiv preprint arXiv:2310.03684 (2023). \\n\\n[8] Wang, Yihan, et al. \\\"Defending llms against jailbreaking attacks via backtranslation.\\\" arXiv preprint arXiv:2402.16459 (2024).\", \"questions\": \"The paper contains ambiguous terminology that needs clarification. For example, in discussing defense results, the distinction between \\\"other forms of attacks\\\" (0-3% success) and \\\"other custom language games\\\" (failed defense) is unclear. Please specify which attack categories these refer to.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes novel LLM jailbreaking methods via *playing language games with LLMs*. The authors both consider natural language games (that already exist) and design novel custom language games. The results show that SOTA proprietary LLMs are vulnerable against such jailbreaking attempts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed attack strategies are novel, utilizing simple language games.\", \"More interestingly, the authors also study whether safety fine-tuning against one language game attack could help the model resist other custom language game attacks.\"], \"weaknesses\": [\"Need more implementation details on experimental setup. For example, what's the temperature of the three LLMs you evaluated against? Also, I would like to see repetitive experiment results (or error bars).\", \"In Line 360-367, you mentioned that \\\"GPT-4o-mini exhibited different success rates for Self 4 (**86\\\\%**) and Self 5 (**82\\\\%**).\\\" This seems like a small variations. Are you sure this is not caused by the randomness during language model decoding?\", \"Need to validate the claims on more models. For now, the experiments only demonstrate effectiveness on 3 proprietary models. Would be necessary to report results against more models (e.g., Llama-3-405B or other smaller open-weight models). More results are good to know, even if these models are not capable enough to play the language games.\", \"Need more details on how you conducted the safety alignment fine-tuning (Sec 4.5). Currently it's quite vague. 
For example, how is the jailbreak dataset constructed?\", \"What if you fine-tune the Llama-3.1-70B model over more custom language games (say Self 1- Self 4)? I wonder whether safety training over multiple language games can help the safety refusal behaviors generalize better on unseen language game attacks.\", \"While I appreciate the authors' efforts on exploring the potential defense by conducting safety fine-tuning over a single language game attack, it would be interesting to see whether fine-tuning over more language game attacks could allow better generalizations and thus help real-world model developers defend against the proposed attacks.\", \"May need to evaluate the proposed attacks against some existing defense strategies [1-3].\", \"How much would the model utility drop if uers chat with LLMs under these language games? I think the authors can evaluate the models on some utility benchmarks (e.g., MT-Bench) when playing the language games, in order to show that the models can still provide useful responses in general.\", \"I feel the contribution of this work is somehow diminished when compare it to the existing work [4] that jailbreak LLMs via encryption and encoding (though the authors indeed discussed the difference).\", \"[1] Baseline defenses for adversarial attacks against aligned language models, Arxiv 2023\", \"[2] Defending chatgpt against jailbreak attack via self-reminders, NMI 2023\", \"[3] Smoothllm: Defending large language models against jailbreaking attacks, Arxiv 2023\", \"[4] Gpt-4 is too smart to be safe: Stealthy chat with llms via cipher, ICLR 2024\"], \"questions\": [\"I still wonder why \\\"human redability\\\" is important for a jailbreak attack? May need better justification for this point.\", \"Line 435-436: \\\"Notably, the fine-tuned model was able to successfully defend against **other forms of attacks**.\\\" Is this a typo? Do you mean that the fine-tuned model can defend well against the attack which was considered in the fine-tuning dataset (but not the others)?\", \"Why do you use GPT-4o-mini as the judge, but not MD-Judge which comes along with SALAD-Bench? Also, why translate to Chinese during jailbreak judgments?\", \"I'm not following well from Line 322 to Line 327. What do you mean by \\\"GPT-4o and GPT-4o-mini often address questions while framing their answers in a positive manner?\\\" Why is this bad? Can you show some qualitative examples to justify this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper explores how language games can be used to jailbreak LLMs, where each game is comprised of a set of rules for how to transform a text prompt. The authors argue that these language games give rise to a class of jailbreaks that 1. Even when the LLM makes errors in applying the rules of the game, it is still easy for a human to infer how these errors are to be corrected due to the natural language nature of the game, and 2. Are easy to construct, such that if the LLM is fine-tuned to be robust against one language game, it\\u2019s relatively easy to create a new language game jailbreak that still bypasses safety. 
Both previously known and novel language games are evaluated on GPT-4 and Claude 3.5 models, and the authors report that the proposed jailbreak technique is successful against these models. The authors also investigate safety generalization across language games when the model is fine-tuned to be robust against a specific language game, and find that generalization in general is quite poor, even between very similar games. This is in agreement with the conjecture that successful jailbreaks against aligned LLMs may be explained by the phenomenon of mismatched generalization [1].\\n\\n[1] Wei, A., Haghtalab, N. and Steinhardt, J., 2024. Jailbroken: How does llm safety training fail? Advances in Neural Information Processing Systems, 36.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed jailbreaking technique is easy to understand and easy to create variations of (i.e., the custom language games.)\\n2. It is good that the proposed attacks are tested on SoTA closed-source models; this indeed shows that these attacks have practical, real-world safety consequences.\\n3. The results in table 5 are interesting; they demonstrate that safety generalization after fine-tuning to be robust against specific games is poor even to extremely similar games. To the best of my knowledge previous claims of poor safety generalization to other jailbreaks usually just consider jailbreaks that are very different [1], so this is an interesting finding.\\n\\n[1] Mazeika, Mantas, et al. \\\"Harmbench: A standardized evaluation framework for automated red teaming and robust refusal.\\\" arXiv preprint arXiv:2402.04249 (2024).\", \"weaknesses\": \"1. Line 284-285: \\u201cEach model is in its default settings to ensure consistency and to simulate real-world use cases.\\u201d One downside of this is that sampling is used during the decoding process (as opposed to using greedy decoding), meaning the results are not quite reproducible. I would suggest the authors either 1. Report results using greedy decoding and a fixed seed, or 2. If using sampling, report some summary statistic (e.g., average with standard deviation) over multiple runs (ideally, also with a fixed seed). This may also help clarify whether the table 3 observation that similar language games can have different behaviors is more so due to randomness or if there is a genuine difference in behavior.\\n2. There is no measurement of how reliable the GPT-4o-mini-based safety judge is. It may strengthen the results to add some human evaluation to measure reliability, e.g., by reporting false positive/negative rates on a sample of data points.\\n3. The method of translation seems a bit unreliable. For example, in figure 8, \\\"Certalfainly! Halerfe alfare stalfealfe\\u201d is translated to \\u201crtainly! halerfe ar stealf,\\u201d but if the rules of the game are followed it should\\u2019ve been translated to \\u201cCertainly! Here are stee.\\u201d Doesn\\u2019t it make more sense to just programatically perform the translation (e.g., for alfa balfa, just remove instances of \\u201calf\\u201d after the first consonant/before vowels) before being evaluated by the safety judge? Otherwise, this just adds another source of error that may drastically affect your reported evaluation results.\\n\\nI would be willing to raise my score if these points are addressed and the results are found to be convincing.\", \"questions\": \"1. 
Lines 160-161: \\u201cHowever, while these methods \\u2026 is generalized across the intermediate layers of LLMs.\\u201d This seems abrupt \\u2014 no discussion about the role of intermediate layers had been introduced before this. Is this point more about mismatched generalization of the input data?\\n2. Can you compare/contrast natural language games with ciphers (e.g., with those found in Yuan et al., 2023)? It seems they are similar in spirit as ciphers also apply various rules for transforming the input, so the differences seem less clear to me. It may be helpful to readers to make this clear in the paper.\\n3. Line 214: \\u201cThe models used in the experiment possess prior knowledge of these types of linguistic manipulations.\\u201d Can you provide some more information about this? I agree this is probably the case, but I\\u2019m just curious whether there are sources backing this up or if it remains just a conjecture (if so, it should probably be rephrased as such), given these models are closed-source and that their training data details have not been released.\\n4. Please clarify why the prompt in Appendix A.1 has the model perform translation to and from Chinese; it is never explained in the paper. Also, the prompt only asks to provide a 1 or a 0 as the judgement and contains no mention about \\u201cunclear\\u201d labeling, so how are unclear results determined? Lastly, the referenced figure on line 598 is incorrect (16 instead of 4).\\n5. Lines 323-327: can you provide specific case studies for these claims? For example, the case studies in the appendix don\\u2019t seem to have examples of unclear responses for GPT-4o/4o-mini.\\n6. For each case study in the appendix, can you also provide the labels that were given by the safety judge?\\n7. Line 435-436: \\u201cNotably, the fine-tuned model was able to successfully defend against other forms of attacks, with a success rate of 0% to 3%.\\u201d Do you mean to say defending against the attack it was fine-tuned on?\\n8. (Optional, just curious) It could be interesting to see if safety generalization to other language games can be achieved by fine-tuning against multiple language games at a time. Is it possible that at some point (i.e., with a sufficient amount of games in the fine-tuning set), the model is able to overcome shortcut learning/overfitting to specific games?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BdmVgLMvaf
Adaptive teachers for amortized samplers
[ "Minsu Kim", "Sanghyeok Choi", "Taeyoung Yun", "Emmanuel Bengio", "Leo Feng", "Jarrid Rector-Brooks", "Sungsoo Ahn", "Jinkyoo Park", "Nikolay Malkin", "Yoshua Bengio" ]
Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable. When sampling is modeled as a sequential decision-making process, reinforcement learning (RL) methods, such as generative flow networks, can be used to train the sampling policy. Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration. We propose to use an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student). The Teacher, an auxiliary behavior model, is trained to sample high-loss regions of the Student and can generalize across unexplored modes, thereby enhancing mode coverage by providing an efficient training curriculum. We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge, two diffusion-based sampling tasks, and four biochemical discovery tasks demonstrating its ability to improve sample efficiency and mode coverage. Source code is available at https://github.com/alstn12088/adaptive-teacher.
[ "amortized inference", "generative models", "reinforcement learning", "GFlowNets" ]
Accept (Poster)
https://openreview.net/pdf?id=BdmVgLMvaf
https://openreview.net/forum?id=BdmVgLMvaf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z1xUlbyTSo", "r0Ezm5ymSl", "my4Wqjt3tJ", "jcfnpK2vK5", "iTPSApNS0Z", "hdOWb03ZCA", "cpItvzoJuJ", "OGEoLk9hC9", "Jd97ffMBrF", "F0NRIk6bBd", "EuRn0Mb9P9", "EKN1xZgeZK", "EDv5eOVH1i", "DihJZQUKP0", "D6HCGNRBo6", "AoSIds72DN", "AoFUmePZMA", "9z5QQR4drT", "7WkcJH82L9", "4G0ln5Kqgh", "478fZaGSOA", "3iZie1x9be", "1ug2zHWQ0A" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732272207530, 1732127878523, 1730356180845, 1733118386223, 1737523448308, 1734925360104, 1732127960496, 1732127833324, 1732840430582, 1730871342859, 1732820395495, 1732127849653, 1732152560606, 1730787814389, 1732652967343, 1732127791287, 1732548514437, 1732548956167, 1730327261722, 1732127775497, 1732272171067, 1732127935313, 1732127974856 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_NTRq" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1343/Area_Chair_MSAP" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_tjcQ" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_tjcQ" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_NTRq" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_y6wN" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_CGLL" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Reviewer_CGLL" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ], [ "ICLR.cc/2025/Conference/Submission1343/Authors" ] ], "structured_content_str": [ "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer CGLL,\\n\\nWe just want to let you know that we've updated Appendix F with several larger-scale deceptive grid world results and the experiment on LLM red-teaming, in response to your questions about scaling performance. We hope these help to address your concerns and look forward to your feedback.\\n\\nThe authors\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review and the positive assessment of our paper. We answer your concerns below.\\n\\n### Why amortized inference?\\n\\nThank you for your suggestion. We agree that the ability to reuse a shared computational module for inference across multiple data points, as opposed to performing inference independently for each data point, is a major motivation for amortized inference compared to MCMC. 
Following your recommendation, we have cited the suggested reference and revised the first paragraph of the introduction to better highlight this key advantage of amortized inference.\\n\\n\\n### Related work on experimental design\\n\\nThank you for pointing out this relevant literature. We agree that it is related to our work, as our Teacher plays the same role as the entropy-regularized adversary that generates tasks. We have included a discussion of the suggested reference in the related work (Sec. 4).\\n\\n### Figure 1\\n\\nThe illustration is intended to show that the behavior policy (Teacher, Student, or Buffer) contributes to the data flow for both Teacher and Student training. This implies that the Buffer sometimes provides data to the Teacher and Student during training. \\n\\nIn the revised version, we have added an arrow from the Student to the Buffer in Figure 1, as trajectories are collected from both models.\\n\\n### Why are high-loss trajectories informative?\\n\\nThis is a hypothesis, motivated by the following arguments:\\n- In all learning systems, samples with high error, where the current iteration of the model struggles, tend to be more informative. This is a principle used in active learning, hard example selection, curriculum learning, prioritized experience replay in RL, etc.\\n- For amortized inference systems in particular, discovery of poorly modeled modes (especially those whose density is underestimated) is critical. This is because errors in *already observed* modes can self-correct, as they are revisited during training, but missing modes are unlikely to be visited by on-policy (or near-on-policy) sampling, making them harder to recover. \\n\\nThus, to promote discovery of modes, the Teacher should guide the Student toward samples where the sampling density diverges from the target, especially favouring the samples whose density is underestimated. This motivates the proposed reward for the Teacher (equation 5).\\n\\n\\n### Is this an adversarial game?\\n\\nGood question. Assuming the function classes of the Student and Teacher policies can express the unique point where both achieve zero loss, the joint learning problem between Teacher and Student is not adversarial in the sense of having a saddle point at the optimum. The losses for both models are strictly positive away from the optimum and are zero precisely at the optimum (cf. Proposition 1 in Appendix A), so any deviation from the optimum will not decrease the losses of the Teacher, the Student, or both.\\n\\nFurthermore, the Student's reward does not depend on the parameters of the Teacher, even though its *training policy* does, which implies that **the optimality of a Student policy is independent of the Teacher's parameters**.\\n\\nHowever, if the optimal policies are not representable by the policy networks of the Student and Teacher, a saddle point at the optimum is possible.\\n\\n### Details of behavior policies\\n\\nThe details are provided in Appendix B (lines 827-831). The balancing rule is straightforward: the Teacher focuses on multi-modal exploration, the Student emphasizes exploitation, and the replay buffer is used for sample efficiency. 
Depending on the characteristics of the target task, users can adjust the balance according to their specific needs, similar to how exploration-exploitation trade-offs are tuned in other exploration methods in RL.\\n\\n**We appreciate your valuable input; please let us know if we can provide any further clarifications.**\"}", "{\"summary\": \"This work focuses on the efficient exploration of RL training during amortized inference. The primary contribution lies in developing an adaptive training distribution to guide the amortized sampler in prioritizing difficult ones. The proposed method is examined in a collection of benchmarks, including both synthetic and real-world scenarios. The exploration efficiency and other benefits are reflected in both mode coverage and sample efficiency.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"I can easily follow this work, and this work tries to amortize prediction by simply conditionally inputting some variables.\\n\\nOverall, (1) I find this work easy to follow with clear motivations. Decision-making for amortized inference, particularly the development of GFlowNets, is impactful, and this work focuses on an important issue, namely efficient exploration under RL frameworks in the field. (2) The developed strategy is novel and practical in implementation. (3) The experiments are inspiring and well-supported claims.\", \"weaknesses\": \"While a lot of merits in this work, I find some parts are necessary to modify or revise.\\n\\n---\\n\\n(1) It seems to lack the necessity of amortized inference. In line28-30, it states the mechanism of amortized inference and related bottleneck. It is necessary to include the role of amortized inference compared with traditional methods such as MCMC, e.g., citing [1] and adding something like\\n\\n\\\"The amortized inference adopts a shared inference module for all data points instead of performing inference one by one. In this way, we can reuse the computational module for other data point's inference.\\\"\\n\\n(2) There exists literature work [2] that raises similar learning modules in terms of training adaptative distributions for few-shot experimental design; it is necessary to discuss them in detail in Section 4 related work.\\n\\n\\n(3) Other suggestions or questions: (i) Figure 1 is clear enough, but I am not sure whether there should exist links between the Buffer and the Teacher or the Student to reveal the data flow. (ii) in Line-74, it says \\\"we believe that trajectories with high loss are particularly informative for mode coverage\\\", is this a hypothesis? Are there any explanations from either experiments or other intuitions? (iii) In Line 60, it says \\\"the Student's target distribution does not depend on the Teacher\\\", hence I am wondering whether the optimization pipeline is an adversarial game. (iv) In line 237, I want to know how to balance the student, the teacher or a buffer.\\n\\n\\n**Reference:**\\n\\n[1] Margossian C C, Blei D M. Amortized Variational Inference: When and Why?[J]. arXiv preprint arXiv:2307.11018, 2023.\\n\\n[2] Wang, Cheems, et al. 
\\\"Robust Fast Adaptation from Adversarially Explicit Task Distribution Generation.\\\" arXiv preprint arXiv:2407.19523 (2024).\\n\\n---\\n\\nI will be happy to update my score if these concerns are well addressed during the rebuttal discussion.\\n\\n---\\n\\nPost Rebuttal\\n\\nThe author has well addressed my concerns, and I updated my score to accept.\", \"questions\": \"See Weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer tjcQ,\\n\\nJust a gentle reminder that our discussion period is ending soon. If you feel that your concerns have been addressed in our previous responses, please consider revising the score. If there are any remaining issues, please let us know as soon as possible.\\n\\nBest regards,\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"In this paper, the authors improve the efficiency of training off-policy RL by developing a method to perform amortized inference in such a manner as to prioritize high-loss regions of the loss. They do this by proposing a \\\"teacher\\\" that creates an adaptive training distribution to prioritize high-loss regions of a \\\"student\\\" model. The method is demonstrated with GFlowNets and they show that this improves mode coverage and sample efficiency.\\n\\nThe reviews were high variance but leaning towards accept, with two accepts and two marginal leaning reject. Of the two more negative reviewers, one had concerns about novelty (e.g. compared to hard-negative mining and uncertainty sampling) and the other mostly asked for theoretical justification. One of these reviewers raised their score from 3 to 5 after reading the author rebuttal. The reviewers in general found the paper sound, well-written and well motivated. \\n\\nIn subsequent discussion, one of the reviewers voiced that they wished to champion the paper for acceptance noting that the method is novel, the paper very well written and that theoretical justification seems unnecessary. When prompted, none of the reviewers voiced any concerns or disagreed.\\n\\nGiven that two reviewers are willing to champion accept and that the average is over the bar, the recommendation is to accept the paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors responded to all the reviewers' comments. One of the reviewers, the one asking for theoretical justification, raised their score from a 3 to a 5. The other 5 read the rebuttal but decided to leave their score unchanged.\\n\\nIn the reviewer / AC discussion period I prompted the reviewers about whether the concerns about novelty and theoretical justification were sufficient to strongly argue against acceptance (or if anyone would champion the paper). Reviewer NTRq strongly argued for acceptance, citing the authors' responses as compelling. None of the other reviewers chimed in. 
In my view, unless the other reviewers have issues with technical correctness, attribution, or other compelling reasons, then if two reviewers argue strongly for acceptance the paper should be interesting / exciting to at least a subset of the community.\"}", "{\"title\": \"Response (2/3)\", \"comment\": \"### Q2 (hyperparameters)\\n\\nThank you for your insightful questions regarding the selection of the behavior policy during training and the choice of hyperparameters in our experiments.\\n\\n**Selection of Behavior Policy Ratios:**\\n\\nFirstly, we acknowledge that the behavior policy ratios are crucial hyperparameters that influence the exploration-exploitation trade-off during training. These ratios determine the proportion of samples obtained from the teacher policy, the student policy, and the replay buffer.\\n\\nOur approach involves carefully selecting these ratios based on the specific characteristics of each **domain** or **task**. While each task represents a significant and distinct domain (e.g., diffusion sampler, biochemical design), we ensure consistency by using identical hyperparameters across all subtasks within each domain. For instance, in the biochemical design tasks\\u2014including DNA, RNA, atomic molecule, and fragment molecule generation\\u2014we use the same behavior policy ratios and hyperparameters. This consistency demonstrates the stability and robustness of our algorithm within each domain.\", \"the_choice_of_behavior_policy_ratios_is_guided_by_the_nature_of_the_task\": \"- **Teacher Ratio:** Increased when exploration and mode discovery are critical, as the teacher helps the model explore diverse and high-reward regions of the state space.\\n- **On-Policy (Student) Ratio:** Increased when the task has a steep reward landscape but is less multimodal, allowing the model to exploit known high-reward areas more effectively.\\n- **Replay Buffer Ratio:** Increased when sample efficiency is important, enabling the model to learn from past experiences and improve sample utilization.\\n\\nNote that, in the hypergrid task, we experimented with different behavior policy ratio of (teacher, student, buffer), specifically (1,1,1) and (1,1,0). Both configurations significantly outperformed the baselines in terms of number of modes discovered. The $L1$ distance tends to worsen when the buffer is utilized, whether or not Teacher is used. This is likely due to the regularization effect introduced by the highly diverse experiences provided by the buffer.\\n\\n**Minor note:** We mistakenly reported the $L_1$ value for $\\\\alpha=0.5$ and $(d=4, H=32)$ in Table 6. We fixed it in the revised version.\\n\\n**Choice of $\\\\alpha$:**\\n\\nSimilar to the behavior policy ratios, $\\\\alpha$ also balances between exploration and exploitation, in a different way. A high value of $\\\\alpha$ makes the Teacher focus on high-reward areas, and thus promotes exploitation. This can be especially effective in environments with a vast search space where most regions have low rewards, as it allows the teacher to ignore the low-reward areas.\\n\\nFor this reason, we set $\\\\alpha = 0$ for the hypergrid task, where exploration is more critical by design, and $\\\\alpha = 0.5$ for other tasks. It\\u2019s important to note that these choices were based on simple intuition rather than an extensive hyperparameter search.\\n\\nWe studied the effect of $\\\\alpha$ in Appendix D.4, providing empirical evidence for the necessity of this hyperparameter. 
Notably, $\\\\alpha = 0.5$ also performs reasonably well in the hypergrid task, suggesting that it could be a good starting point for new environments.\\n\\n**Choice of $C$:**\\n\\nRegarding the hyperparameter $C$, our experiments in the hypergrid task show that varying $C$ does not significantly impact the performance of our algorithm (Appendix D.3, Table 5). We consistently outperform other baselines across different values of $C$, demonstrating the robustness of our method to this parameter.\\n\\nWe appreciate your feedback, as it has allowed us to clarify these important aspects of our work.\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"Thank you for such a positive assessment of our paper's relevance, exposition, and empirical evaluation. We've attempted to answer your questions below.\\n\\n### Complexity of training and simpler exploration strategies\\n\\nThe question of whether the benefits brought by the proposed algorithm are worth the added complexity of training a teacher model, relative to simpler exploration methods, is very important, and we appreciate the opportunity to comment upon this point.\\n\\nCertainly, the additional tunable parameters are unavoidable when we introduce a new component (like the Teacher network) to an algorithm. However, these parameters are often beneficial in that they provide controllability of a desired behavior, in our case, exploration. This can be compared to the way that GANs improve the training of a generator by introducing a secondary network, the discriminator, into the optimization.\\n\\nAlthough the joint optimization of two networks increases training complexity, there are reasons to believe the benefits outweigh the shortcomings, even at large scales. It is clear that large-scale tasks where exploration is difficult require a well-designed training policy for an agent. The more complex the task, the harder it is for a replay buffer to capture all modes of the target distribution. The Teacher network can be viewed as an amortized replay buffer that provides generalization abilities that a buffer cannot -- it can generate arbitrarily many new training samples for the Student on the fly. This approach is fundamentally more scalable compared to non-learned sample selection methods.\", \"as_for_comparison_to_simpler_exploration_methods\": [\"We compared our method to simple approaches such as epsilon-greedy and replay buffer methods, but they did not achieve satisfactory performance, particularly in large-scale tasks.\", \"Methods like Thompson sampling, which rely on maintaining a Bayesian posterior over parameters, become extremely complex at scale, while RND requires training a second model, typically of a similar complexity to the sampler (just as our Teacher). Note that we also compare with both of these methods in the form they were proposed in prior work, but again found them to underperform.\", \"Finally, we want to emphasize that methodological simplicity is not equivalent to the complexity or scalability of an algorithm. While you mention that our method requires joint optimization and may seem complex, this does not necessarily mean it introduces significant complexity at scale. Our approach simply involves training one additional network at any scale, leading to a constant multiplicative increase in number of parameters.\"]}", "{\"title\": \"Diffusion models and GFN are orthogonal methods; combining them has proven to be useful.\", \"comment\": \"We believe there is a slight misunderstanding. 
We agree that diffusion models are very useful and that we must continue researching how to improve such models.\\n\\n**Diffusion models and GFNs are orthogonal**: A GFN is a training method for diffusion models without data, but with energy/reward.\\n\\nTraining diffusion models with denoising score matching (DSM) or maximum likelihood estimation (MLE)\\u2014typical methods for diffusion models\\u2014is scalable and practical when massive datasets are available. However, when we aim to enhance diffusion models with a reward model or perform intractable inference using energy functions, we need reinforcement learning (RL)-like training methods because there is no such massive dataset to imitate (making DSM or MLE on a dataset impossible). Among these methods, GFNs are a promising candidate.\\n\\nFor example, Venkatraman et al. [1] demonstrated that fine-tuning diffusion models with a reward model can be effectively applied to various tasks, including **inverse image problems**, **language model infilling using discrete diffusion**, **text-to-image model fine-tuning**, and **offline RL** through GFN training over diffusion models.\\n\\nMoreover, Seong et al. [2] show that, using the same GFN objective as Venkatraman et al. [1], we can model **molecular dynamics (MD)** by sampling rare transition paths. In such scientific discovery applications, there are desired models called **Boltzmann Generators** that aim to sample proportionally to the Boltzmann energy distribution $e^{-E(x)/T}$ (in MD and N-body particle simulations). For Boltzmann Generator training, we believe this combination is particularly useful: (1) we have to use diffusion models, and (2) train them using a GFN-like objective. There is active research following (1) and (2) to meet these demands [3, 4].\\n\\n---\\n\\n[1] Venkatraman et al. \\\"Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Controls\\\", NeurIPS 2024.\\n\\n[2] Seong et al. \\\"Collective Variable Free Transition Path Sampling with Generative Flow Network\\\", ICML Workshop on Structured Probabilistic Inference and Generative Modeling 2024.\\n\\n[3] Sendera et al. \\\"Improved Off-policy Training of Diffusion Samplers\\\", NeurIPS 2024.\\n\\n[4] Akhound-Sadegh et al., \\\"Iterated Denoising Energy Matching for Sampling from Boltzmann Densities\\\", ICML 2024.\"}", "{\"summary\": \"This paper proposes a method to improve mode coverage and training efficiency in amortized inference methods like GFlowNets. Specifically, the authors use off-policy RL training to encourage the discovery of diverse, high-reward candidates, and address the key challenge -- exploration in off-policy RL. The main idea is to use an adaptive \\\"Teacher\\\" model to help the \\\"Student\\\" sampler by focusing on regions with high loss. The teacher model is used as an auxiliary model and is trained to target areas where the Student model has high errors. This allows it to cover unexplored modes (which usually have high errors) and provide a more efficient training process. Empirically, the authors show that this approach works well in various tasks, such as discrete sequence design and continuous diffusion sampling, with better sample efficiency and mode coverage.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The idea of exploring high-error regions and increasing the sampling probability of data in these regions for training the student model intuitively makes sense to me; it essentially resembles hard-negative mining in the classic machine learning literature.\\n\\n2. The experiments were well-executed and support the main claim in the paper.\\n\\n3. The math on GFlowNets and their connection to amortized inference is helpful, and especially helps contextualize the significance of the contributions.\", \"weaknesses\": \"1. The idea is not new; it closely resembles hard negative mining (i.e., sampling negative examples where the model shows high error), which limits the novelty of the proposed approach.\\n\\n2. While the idea of sampling more in high-error regions seems intuitively reasonable, its effectiveness may depend on whether the student model has sufficient capacity to fit the distribution. Also, I would like to see more comparisons and discussion with the active learning literature, such as uncertainty sampling, etc.\", \"questions\": \"Can the author explain why you chose GFlowNets for the experiments? Are they more effective than diffusion models for chemical or drug discovery? In my understanding, diffusion models can fairly easily fit multimodal distributions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the authors' responses. The authors' response addressed many of my concerns. However, I am skeptical about the practicality of GFlowNets, especially given the wide adoption of diffusion models in many domains, such as image/video/audio generation, protein design, drug discovery, etc. The training of diffusion models is actually quite simple and scalable. Maybe I am missing something; could the authors explain what's hindering the adoption of GFlowNets in various applications? I am borderline on this paper.\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"### Missing details in biochemical tasks\\n\\nWe apologize for the unclear explanation of the biochemical discovery experiments. We explain how we define the reward functions below and have added these details to the manuscript (Sec. 5.3).\\n\\n**QM9:** Our goal is to generate a molecular graph. The reward function is the HOMO-LUMO gap, which is obtained via a pre-trained MXMNet proxy from [1]. We use a reward exponent of 5. We define modes as the top 0.5% quantile of $R(x)$.\\n\\n**sEH:** Our goal is to generate binders of the sEH protein. The reward function is a binding affinity to soluble epoxide hydrolase (sEH), which is provided by the pre-trained proxy model from [2]. We use a reward exponent of 6. We define modes as the top 0.01% quantile of $R(x)$, with additional filtering to exclude candidates that are too similar to each other based on Tanimoto similarity, following [3].\\n\\n**TFBind8:** Our goal is to generate a DNA sequence of length 8. The reward function is a binding affinity to a human transcription factor, which is obtained via a pre-trained proxy model provided by [4]. We use a reward exponent of 3. We use a pre-defined set of modes provided by [5].\\n\\n**L14-RNA1:** Our goal is to generate an RNA sequence of length 14. The reward function is a binding affinity to a human transcription factor, which is obtained via a pre-trained proxy model provided by [6]. We use a reward exponent of 8. 
We define modes as the top 0.01% quantile of $R(x)$ and the diversity threshold as 1 unit of Levenstein distance, also following [3].\\n\\n[1] Zhang, Shuo, Yang Liu, and Lei Xie. \\\"Molecular mechanics-driven graph neural network with multiplex graph for molecular structures.\\\" arXiv preprint arXiv:2011.07457, 2020. \\n[2] Bengio, Emmanuel, *et al.* \\\"Flow network based generative models for non-iterative diverse candidate generation.\\\" Neural Information Processing Systems (NeurIPS), 2021. \\n[3] Kim, Minsu, *et al.* \\\"Learning to scale logits for temperature-conditional GFlowNets.\\\" International Conference on Machine Learning (ICML), 2024. \\n[4] Trabucco, Brandon, *et al.* \\\"Design-bench: Benchmarks for data-driven offline model-based optimization.\\\" In International Conference on Machine Learning (ICML), 2022. \\n[5] Shen, Max W., *et al.* \\\"Towards understanding and improving GFlowNet training.\\\" International Conference on Machine Learning (ICML), 2023. \\n[6] Sinai, Sam, *et al.* \\\"Adalead: A simple and robust adaptive greedy search algorithm for sequence design.\\\" arXiv preprint arXiv:2010.02141, 2020.\\n\\n**Thank you again for your feedback and interesting comments. We are happy to respond to any further questions you may have.**\"}", "{\"title\": \"Good paper and updated the score\", \"comment\": \"Thanks for the author's detailed response and clarifications.\\nAfter revision, the updated manuscript is complete enough.\\nTaking other reviewers' comments and my assessments, I think this work is well-motivated and proposes a novel approach to Bayesian deep learning with sufficient evaluation.\\nHence, I have updated my score and think this work deserves acceptance.\"}", "{\"summary\": \"The paper presents a novel method to improve amortized inference for complex distributions using an adaptive \\\"Teacher-Student\\\" training framework. The Student is an amortized sampler parameterized as a generative flow network (GFlowNets) and trained using RL. The primary contribution of the work is introducing the\\u00a0Teacher as an auxiliary model that acts as the 'exploration policy' for the student. It is trained to guide the Student training by focusing on high-loss regions, thereby promoting the discovery of unexplored modes of the target distribution. The proposed method is evaluated on synthetic environments, diffusion-based sampling tasks, and biochemical discovery tasks, demonstrating improved mode coverage and sample efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Strengths** :\", \"Addresses an important problem: Mode coverage/exploration is an important problem in the training of GFlowNets. The paper proposes a novel and interesting solution to the problem.\", \"Very Well written :\\u00a0 I really enjoyed reading the paper. The paper did an excellent job of introducing and walking through the relevant literature and the methods and putting itself in context. Although I was not myself very familiar with the specific work line of work around GFlowNets, I was easily able to follow along all the details.\", \"Lots of interesting details : The training formulation was interesting, especial given I wasn't very familiar with the glownets literature before this. I also particularly liked the use of a search procedure combined with the Teacher network to reduce the teacher network induced bias in the exploration process (although this was already introduced in previous work!). 
This approach effectively guides the student towards more diverse solutions, improving the overall learning efficiency and robustness.\", \"Impressive empirical results: The paper demonstrates the versatility of the Teacher-Student framework by applying it to a range of tasks, including synthetic benchmarks and biochemical discovery problems. The empirical results consistently show that the Teacher-Student setup leads to better mode coverage and training efficiency compared to existing methods. Especially the results in more complicated tasks with a large number of modes.\"], \"weaknesses\": [\"**Weaknesses/Questions**\", \"The introduction of an adaptive Teacher adds additional complexity to the training process, requiring the joint optimization of both Teacher and Student networks. At least in the RL literature, these types of exploration methods were tried and given up on as they required extensive tuning and didn't scale well enough. I'm curious how the authors think that compares with the use cases here and if the authors genuinely believe the results shown in the paper will hold the test of time?\", \"This is maybe a dumb question. But I do wonder how these methods compare with using standard sampling based strategies for\\u00a0mode discovery e.g thompson sampling etc. My understanding is those become intractable as the problem size increases. The approach suggested seems pretty complex and I do wonder if the those standard mode discovery methods could've helped make things simpler.\", \"Details on the biochemical discovery experiments were a little unclear. Eg. How do you define the reward function etc was not very clear to me.\"], \"questions\": \"same as weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Increase the score\", \"comment\": \"Thanks to the author for their response and explanation. Based on the changes, I decided to increase my score to 5.\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"### Comparison with diffusion models (including in biochemical tasks)?\\n\\nFirstly, it's important to clarify that diffusion models (as the term is typically understood) are generative models, usually trained to maximize a variational bound on log-likelihood of a dataset. GFlowNets, on the other hand, are **training methods** that fit the parameters of any sequential generative process to sample a given density function (reward), absent a dataset. \\n\\nBecause diffusion models assume a sequential generative process, GFlowNet algorithms can indeed be used to train diffusion models *without data samples, but given a target energy that we wish to sample*. In fact, this is done in our second task, the \\\"diffusion sampler\\\", and most fully explored in the reference [Sendera et al., 2024]. In our work, we showed that the proposed Teacher-Student method can be used to improve the training of diffusion samplers.\\n\\nRegarding chemical or drug discovery, our benchmarks are built directly on the previous work by [Shen et al., 2023], where a bidirectional sequence generative model was trained using GFlowNet algorithms. For a fair comparison, we used the same generative model architecture but explored different GFlowNet training methods. 
As far as we are aware, discrete diffusion models have not previously been applied to this particular task, and comparing the effectiveness of discrete diffusion models versus other generative models is not related to the focus of our work. Our goal is to improve exploration in GFlowNets as a training method, regardless of the underlying generative model.\\n\\n**Thank you again for your comments. We hope we have addressed them satisfactorily above, but do not hesitate to let us know if you have further questions.**\"}", "{\"comment\": \"Dear reviewer tjcQ,\\n\\nAs we reach the end of the discussion period, we\\u2019d like to ask we can provide any more information that could affect your assessment of the paper. We believe that our answer above has addressed your original concerns and clarified a few points that may have been missed. Thanks again for your attention to our work.\\n\\nBest,\\nAuthors\"}", "{\"comment\": \"Dear Reviewer CGLL,\\n\\nAs we reach the end of the discussion period, we'd like to ask if we can provide any more information that could affect your assessment of the paper. In the discussion period, we believe that we clarified the concerns and questions raised. Specifically, we included additional experiments to address concerns about scalability and flexibility under different backward policy settings (the learned $P_B$). This will be a good improvement to our manuscript; thanks again for your feedback.\\n\\nThe authors\"}", "{\"summary\": \"This paper introduces a method for training a neural network to approximate a distribution with a specified unnormalized density. Specifically, it proposes an adaptive training distribution, termed the \\\"Teacher,\\\" which guides the primary amortized sampler, or \\\"Student,\\\" by prioritizing high-loss regions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The Teacher model has the potential to generalize across the Student's high-loss regions, thereby improving mode coverage. The algorithm\\u2019s effectiveness is demonstrated through both discrete and continuous tasks, comparing favorably against other GFlowNet algorithms.\", \"weaknesses\": \"This paper appears to fall slightly below the standard expected at ICLR for several reasons:\\n\\n1. A critical issue is the lack of a solid theoretical foundation; the paper primarily reports numerical results without a deeper mathematical analysis. For instance, there is no mathematical description or guarantee of convergence rate for Algorithm 1.\\n\\n2. Regarding the experiments, the explanation of the architecture design and the choice of hyperparameters would benefit from greater clarity and justification. I will outline these concerns in more detail in the questions below.\", \"questions\": \"1. Convergence Rate of Algorithm 1: What is the convergence rate of your Algorithm 1? Although it performs well in exploring more modes in the example tasks, the convergence rate is also a key factor in sampling methods. Could you provide more details on this?\\n\\n2. Selection of Behavior Policy: How did you determine the behavior policy during training (line 220)? From lines 827 to 831, it appears that different tasks require different ratios. What standard guided these choices? The explanation on lines 835 to 840 lacks proof of the algorithm\\u2019s robustness across different ratios, which raises concerns about whether these choices were made deliberately and may impact the generality of the results. 
The same issue applies to the choice of C in Table 5 and \\u03b1 in Tables 6 and 7.\\n\\n3. Choice of Backward Policy: How did you select the backward policy during training? It appears that a uniform random policy is used in the deceptive grid world and biochemical design tasks. In my understanding, the backward policy in your algorithm plays a role similar to the proposed transition kernel in MCMC, which is crucial for convergence. Could you elaborate on the role of the backward policy in your algorithm and its impact on the convergence rate of Algorithm 1 across different tasks?\\n\\n4. Algorithm Performance on High-Dimensional Tasks: How does the algorithm perform in high-dimensional tasks? Sampling from a specified unnormalized density is particularly challenging in high dimensions. Testing the algorithm on high-dimensional tasks could strengthen the evaluation of its effectiveness.\\n\\n5. Performance in the Manywell Task: In the Manywell task, the performances of PER+LS and the Teacher-based methods appear similar (table 8). What is the underlying intuition for this?\\n\\nI would be willing to increase the score if the above questions could be well clarified.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"Thank you for your review. We appreciate that you described the proposed algorithm as intuitive and found the exposition and experiments clear and well-executed. Below we answer the questions and concerns you raised.\\n\\n### Novelty and resemblance to hard negative mining\\n\\nOur main idea is to **amortize** the sampling of high-loss examples through the use of a teacher network, rather than selecting hard examples from a dataset. This clearly distinguishes our work from hard example mining [[Robinson et al.](https://openreview.net/forum?id=CR1XOQ0UTh-), 2021]. \\n\\nIn the RL setting (of which GFlowNets are one instance), hard example mining is related to prioritized experience replay (PER), in which a buffer functions as the dataset from which hard examples are chosen. We indeed consider PER as a baseline in our study.\\n\\n### Model capacity\\n\\nWe agree that the effectiveness of the sampler resulting from our proposed algorithm depends on the student model's capacity to fit the distribution. This is a consideration common to all training methods for amortized samplers and involves such questions as the neural network architecture and the update rule (optimizer) used for the policy network for each given sample.\\n\\nHowever, **the problem we address is orthogonal**: for a student model of given architecture, how do we best select the *training distribution* (i.e., behavior policy) that provides it the samples to learn from? Our solution, which introduces an auxiliary teacher model, leads to improved exploration relative to other techniques that use the same student model architecture. In fact, our technique *decouples* the modeling capacity of the student from the behaviour policy, unlike other methods (noisy on-policy, Thompson sampling, etc.) that use modifications to the student's policy to induce exploration.\\n\\n### Comparison with active learning\\n\\nThank you for pointing out the connection with active learning. We agree that there are strong connections between active learning and our method, as both aim to leverage information about regions where the current model performs poorly. 
However, there are also clear distinctions between the problems considered in the two areas. \\n\\n- **Active learning** is primarily a framework for supervised learning, where the goal is to select the most informative data points for labeling. These methods typically rely on a form of *predictive uncertainty* to select inputs $x$ that optimally inform the learning of a mapping to labels $y$, using information derived from the classifier's probabilistic model $p(y\\\\mid x)$ but **without seeing the true label $y$**. Some forms of uncertainty used to guide example selection include margin sampling, maximum-entropy sampling, etc., as well as Bayesian uncertainty quantification approaches (ensembles and BNNs, *inter alia*), all of which query the samples $x$ where the predictor is, in some sense, most uncertain of the label.\\n- In contrast, **our method** is based on reinforcement learning (RL), not supervised learning. Instead of training a classifier to model $p(y\\\\mid x)$, we train a sequential decision-making agent characterized by a policy $P_F(\\\\tau) = \\\\prod_{t=1}^n p(s_t|s_{t-1})$, where the trajectory $\\\\tau$ represents a sequence of states $(s_0, s_1, \\\\ldots, s_n)$. Gradient updates are made using a loss that depends on a trajectory $\\\\tau$, which is not necessarily sampled from the policy itself. Our solution to the trajectory selection problem does not rely on predictive uncertainty. Instead, it leverages information from the *loss values*, which are computed using the true terminal reward values. \\n \\nThe two problems have fundamentally different characteristics, despite their conceptual similarities: in active learning, one seeks areas with high uncertainty (that is, possibly high *unknown* loss value), while we amortize the sampling of areas with high (*known*) loss value.\\n\\nIn the revised manuscript (Sec. 4, Related Work), we have included a more detailed comparison with the active learning literature, including uncertainty sampling methods.
These experiments include: \\n - a large-scale deceptive grid world (Appendix F.1);\\n - the task of sampling attack prompts on pretrained LLMs (Appendix F.2).\\n\\nThese new experiments illustrate that the proposed algorithm remains effective in combinatorially complex environments (hypergrid) and with large models on real-world tasks (LLM).\\n\\n**Thanks again, and please let us know if you have any questions.**\\n\\nThe authors\"}", "{\"title\": \"Response (1/3)\", \"comment\": \"Thank you for your detailed feedback and effort in reviewing our paper. We have attempted to answer your questions, provide new experimental results to support our claims, and clarify some possible misunderstandings below.\\n\\n### W1 (lack of theoretical analysis)\\n\\nPlease see Q1 below.\\n\\n### W2 (lack of explanation of the architecture design and hyperparameters)\\n\\nAs you mentioned, the explanation of the architecture design is crucial for greater clarity and justifiction. We mention details on implementation for each experiment in Appendix C. Here we explain more details on model architecture for each experiment.\\n\\n**Deceptive Grid World:** We use a two-layer MLP with 256 hidden units for the parameterized policy $P_F(\\\\cdot;\\\\theta)$ along with a learnable parameter for $\\\\log Z_\\\\theta$. The backward policy $P_B$ is fixed to a uniform policy.\\n\\n**Diffusion Sampling:** We employ the same architecture as [1, 2]. We encode the diffusion timestep $t$ with 128-dimensional harmonic (Fourier) features use 2 linear layers with $N=64$ hidden units to extract the signal ($x$) feature. We use a two-layer MLP with $N$ hidden units to extract feature for $x_t$ and concatenate it with the signal feature. Finally, we apply a three-layer MLP with $N$ hidden units to this concatenated representation to get $u(x_t, t;\\\\theta)$. We initialize $\\\\log Z_{\\\\theta}$ to 0 for all methods. For the Manywell task, we increase $N$ from 64 to 256 to accommodate the high-dimensional tasks, and apply this adjustment to all baselines.\\n\\n**Biological and Chemical Discovery:** We use a similar setting to that proposed by [3]. To parameterize the forward policy, we adopt a relative edge flow policy parametrization mapping (SSR) from [3]. For QM9 and sEH tasks, we employ a two-layer architecture with 1024 hidden units, while for the other tasks, we choose to use a two-layer architecture with 128 hidden units. We initialize $\\\\log Z_{\\\\theta}$ to 5.0 for all methods. For the backward policy, we use a fixed uniform policy.\\n\\n[1] Sendera, Marcin, et al. \\\"Improved off-policy training of diffusion samplers.\\\" The Thirty-eighth Annual Conference on Neural Information Processing Systems.\\n\\n[2] Zhang, Qinsheng, and Yongxin Chen. \\\"Path Integral Sampler: A Stochastic Control Approach For Sampling.\\\" International Conference on Learning Representations.\\n\\n[3] Shen, Max W., et al. \\\"Towards understanding and improving GFlowNet training.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\nConcerns about hyperparameters are addressed in the questions below.\\n\\n### Q1 (convergence rate)\\n\\nOur paper makes an methodological and empirical contribution. We believe that our numerical results are sufficient to validate our proposed idea, which we expect to be influential for machine learning researchers interested in the practical aspect of GFlowNets, e.g., their application to drug discovery [1], biological sequence design [2], and language models [3,4]. 
To show its wide applicability, we tested the proposed algorithm on 10 tasks from different domains and found improvements over exploration methods from past work in a wide range of setting.\\n\\nNevertheless, we appreciate your emphasis on the importance of convergence rates in evaluating sampling algorithms. Providing a theoretical convergence rate for Algorithm 1 is indeed valuable; however, it presents significant challenges due to the complexity inherent in deep learning-based methods. Convergence rates of deep learning models **even under a fixed sampling distribution** remains an difficult problem in the field, not to mention RL agents with a dynamic behavior policy updated in a bi-level optimization procedure.\\n\\n[1] Shen, Tony, *et al.* \\\"TacoGFN: Target-conditioned GFlowNet for Structure-based Drug Design\\\". Transactions on Machine Learning Research, 2024. \\n[2] Jain, Moksh, *et al.* \\\"Biological sequence design with gflownets.\\\" International Conference on Machine Learning (ICML), 2022. \\n[3] Hu, Edward J., *et al.* \\\"Amortizing intractable inference in large language models.\\\" International Conference on Learning Representations (ICLR), 2024. \\n[4] Lee, Seanie, et al. \\\"Learning diverse attacks on large language models for robust red-teaming and safety tuning.\\\" arXiv preprint arXiv:2405.18540, 2024.\"}", "{\"title\": \"Response (3/3)\", \"comment\": \"### Q3 (choice of backward policy)\\n\\nWe believe there is a possible misunderstanding here. The backward policy in GFlowNets and the transition kernel in MCMC do not have a similar role. THe backward policy models 'destructive' transitions from terminal states through intermediate (incomplete) states, representing a posterior distribution over the 'constructive' sequences modeled by the forward policy starting at the initial state, passing through incomplete states, and reaching a terminal state. On the other hand, an MCMC kernel on a space models transitions between two successive complete states in a Markov chain. Put simply, MCMC is used for local exploration, while the backward policy specifies a distribution over ways to construct a given object.\\n\\nIn GFlowNets, it is quite common to fix the backward policy to a uniform distribution. We also provide new experimental results on a **learned** backward policy: \\n\\n| $d=2, H=256$ | #modes | $L1$ dist. |\\n| -------- | -------- | -------- |\\n| TB (on-policy) | 1289.0 $\\\\pm$ 75.2 | 1.41 $\\\\pm$ 0.10 | \\n| + GAFN | 1361.0 $\\\\pm$ 27.3 | 1.31 $\\\\pm$ 0.05 | \\n| + PER | 2139.3 $\\\\pm$ 224.1 | 1.44 $\\\\pm$ 0.06 | \\n| + Teacher (Ours) | **2149.3** $\\\\pm$ 89.4 | **1.25** $\\\\pm$ 0.04 |\\n\\n| $d=4, H=32$ | #modes | $L1$ dist. |\\n| -------- | -------- | -------- |\\n| TB (on-policy) | 9.7 $\\\\pm$ 1.2 | **1.632** $\\\\pm$ 0.000 |\\n| + GAFN | 18.7 $\\\\pm$ 2.9 | 1.650 $\\\\pm$ 0.003 | \\n| + PER | 20.7 $\\\\pm$ 7.4 | 1.636 $\\\\pm$ 0.000 | \\n| + Teacher (Ours) | **226.7** $\\\\pm$ 13.2 | 1.633 $\\\\pm$ 0.000 |\\n\\nThese results show that our claims continue to hold in the setting of a learned backward policy.\\n\\n### Q4 (high-dimensional tasks)\\n\\nThanks for pointing this out. Although the benchmarks we considered are quite standard in the sampling literature, cf. 
[Sendera et al., 2024] -- note that sampling an unnormalized density with no prior information is much more difficult than generative modeling given data -- we also test our algorithm in terms of scaling in two additional experiments: (1) scaling the combinatorial space of the hypergrid and (2) application to a real-world LLM red-teaming benchmark.\\n\\nFor (1),(2) we get better performances than baseline GFlowNets and replay buffer-based off-policy training methods. The results for (1) are shown below; for plot-based results on (2), please see Appendix F of the revised manuscript.\\n\\n**(1) The results on larger hypergrid**\\nWe test in a grid setting $(d=4, H=128)$, where the total number of terminal states $\\\\vert \\\\mathcal{X} \\\\vert = 268,338,173$. Note that we report only the number of modes discovered (#modes) since we can't calculate $L1$ distance in larger problems due to the computational burden for obtaining the target distribution analytically.\\n\\n| $d=4, H=128$ | #modes |\\n| -------- | -------- |\\n| TB (on-policy) | 228.7 $\\\\pm$ 38.1 |\\n| + GAFN | 180.0 $\\\\pm$ 51.9 |\\n| + PER | 164.3 $\\\\pm$ 23.7 |\\n| + Teacher (Ours) | **728.0** $\\\\pm$ 192.0 |\\n\\nFrom this result and Table 1 of the manuscript, we can see that the relative performance of Teacher gets better as the dimension increases, showing its scalablility.\\n\\n\\n### Q5 (performance on ManyWell)\\n\\nWe suspect there may be a small misunderstanding in the interpretation of the results. The ELBO and EUBO should be below and above the true $\\\\log Z$ value, respectively, which in this task is ~164.696. The results should be understood in terms of the *distance from their true value*: in fact, the proposed algorithm's performance is **very close to the optimum** and **significantly better than PER+LS**.\\n\\nWe also note that in the Manywell task, the local search (LS) method combined with Prioritized Experience Replay (PER) employs the Metropolis-adjusted Langevin algorithm (MALA). MALA is a powerful sampling technique because it utilizes gradient information of the energy function to guide the search process effectively. \\n\\nOur Teacher-based method, as described in Task 2, is designed for general black-box reward and energy functions and does not rely on gradient information. The fact that our Teacher-based method slightly outperforms PER+LS -- even though both methods nearly achieve optimal sampling -- is particularly noteworthy. This is promising because our method achieves comparable or better results without the additional assumptions and computational overhead required by MALA, such as access to the energy function's gradients. **This outcome demonstrates the effectiveness of our approach in efficiently sampling from complex distributions even when gradient information is unavailable.**\\n\\n**Thank you again for your comments. We hope our answers have helped to resolve many of your concerns about our work, and we are happy to answer any further questions you may have during the discussion period.**\"}" ] }
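For readers following the exchange above, the per-trajectory training signal that the Teacher is described as targeting can be made concrete with the standard trajectory-balance (TB) objective named in the tables; this is written in generic notation as an illustrative sketch rather than the authors' exact Teacher objective, which is not reproduced in this thread:

$$
\mathcal{L}_{\mathrm{TB}}(\tau; \theta) = \left( \log \frac{Z_{\theta} \prod_{t=1}^{n} P_F(s_t \mid s_{t-1}; \theta)}{R(x) \prod_{t=1}^{n} P_B(s_{t-1} \mid s_t)} \right)^{2}, \qquad \tau = (s_0 \rightarrow s_1 \rightarrow \cdots \rightarrow s_n = x),
$$

where $P_B$ is the backward policy discussed in Q3 (fixed to uniform or learned). The Teacher then acts as a behavior policy that proposes trajectories $\tau$ on which this loss is currently large, an amortized counterpart of the prioritized-replay (PER) baseline appearing in the tables.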
Bdhro9gxuF
The Advancement in Stochastic Zeroth-Order Optimization: Mechanism of Accelerated Convergence of Gaussian Direction on Objectives with Skewed Hessian Eigenvalues
[ "Yilong Wang", "Haishan Ye", "Yong Liu", "Guang Dai", "Ivor Tsang", "Jingdong Wang" ]
This paper primarily investigates large-scale finite-sum optimization problems, which are particularly prevalent in the big data era. In the field of zeroth-order optimization, stochastic optimization methods have become essential tools. Natural zeroth-order stochastic optimization methods are primarily based on stochastic gradient descent ($\texttt{SGD}$). The method of preprocessing the stochastic gradient with Gaussian vector is referred to as $\texttt{ZO-SGD-Gauss}$ ($\texttt{ZSG}$), while estimating partial derivatives along coordinate directions to compute the stochastic gradient is known as $\texttt{ZO-SGD-Coordinate}$ ($\texttt{ZSC}$). Compared to $\texttt{ZSC}$, $\texttt{ZSG}$ often demonstrates superior performance in practice. However, the underlying mechanisms behind this phenomenon remain unclear in the academic community. To the best of our knowledge, our work is the first to theoretically analyze the potential advantages of $\texttt{ZSG}$ compared to $\texttt{ZSC}$. Unlike the fundamental assumptions applied in general stochastic optimization analyses, the quadratic regularity assumption is proposed to generalize the smoothness and strong convexity to the Hessian matrix. This assumption allows us to incorporate Hessian information into the complexity analysis. When the objective function is quadratic, the quadratic regularity assumption reduces to the second-order Taylor expansion of the function, and we focus on analyzing and proving the significant improvement of $\texttt{ZSG}$. For other objective function classes, we also demonstrate the convergence of $\texttt{ZSG}$ and its potentially better query complexity than that of $\texttt{ZSC}$. Finally, experimental results on both synthetic and real-world datasets substantiate the effectiveness of our theoretical analysis.
[ "stochastic zeroth-order optimization", "quadratic regularity", "gaussian direction", "skewed Hessian eigenvalues" ]
Reject
https://openreview.net/pdf?id=Bdhro9gxuF
https://openreview.net/forum?id=Bdhro9gxuF
ICLR.cc/2025/Conference
2025
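The two estimators contrasted in the abstract above can be sketched in a few lines; this is a minimal illustration under generic assumptions (a single sampled direction per estimate, forward and central differences, a small smoothing constant `mu`), not the paper's actual algorithm or constants:

```python
import numpy as np

def zsg_estimate(f, x, mu=1e-4):
    # ZSG-style estimate: finite difference along one random Gaussian direction
    u = np.random.randn(x.size)
    return (f(x + mu * u) - f(x)) / mu * u

def zsc_estimate(f, x, mu=1e-4):
    # ZSC-style estimate: central finite differences along every coordinate direction
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g
```

Here `f` stands for a mini-batch loss evaluated through function queries only; the Gaussian estimator costs a couple of queries per sampled direction, while the coordinate estimator costs 2d queries per estimate, which is the cost trade-off debated in the reviews below.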
{ "note_id": [ "tGRicT7gJ8", "rD9gTekrl3", "lWPv1MYgow", "iASb82IxuN", "f4rWdLiLXk", "ZkiS7W17iA", "ZSbY6lxnrg", "XakiySBC9X", "U2ySHMy18U", "R52hdxYWLD", "Jd9cFETLc1", "JNgLI76lIO", "HPtjzPPvP5", "8cybXq6Vpa", "6iQnoMSAtS", "6MHeGA5vXf", "3ucxSj5VDA" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732027488040, 1732027081907, 1732548618356, 1730402025800, 1733042657661, 1733154985685, 1733227483185, 1734149193487, 1732864788733, 1730639172914, 1732027320992, 1730468799540, 1737524296613, 1732949476245, 1732027583769, 1730748872688, 1732865501811 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_cWQZ" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_rsNb" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_rsNb" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_Gt8G" ], [ "ICLR.cc/2025/Conference/Submission14041/Area_Chair_mZfM" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_Gt8G" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_cWQZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ], [ "ICLR.cc/2025/Conference/Submission14041/Reviewer_Styw" ], [ "ICLR.cc/2025/Conference/Submission14041/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal Two\", \"comment\": \"Q3: The rate of zeroth-order method has already been extensively studied under similar assumptions as the quadratic regularity assumption, e.g., [Malladi et al, 2023], [Yue et al, 2023], [arXiv: 2310.09639]. Stochastic mini-batch settings are also considered in these paper. Therefore, I am not sure how novel and challenge to obtain the results in the current paper given all these previous works.\", \"a3\": \"We need to emphasize that these works are entirely different from ours. First, we outline a notable work that uses zeroth-order optimization algorithms to fine-tune large language models and highlight the differences from our contributions. Although Malladi et al. (2023) propose the descent theorem ($\\\\mathbb{E}[\\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_{t+1}) | \\\\boldsymbol{\\\\theta}_t] - \\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_t) \\\\leq -\\\\eta \\\\|\\\\nabla \\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_t)\\\\|^2 + \\\\frac{1}{2} \\\\eta^2 \\\\ell \\\\cdot \\\\gamma \\\\cdot \\\\mathbb{E}[\\\\|\\\\nabla \\\\mathcal{L}(\\\\boldsymbol{\\\\theta}; \\\\mathcal{B})\\\\|^2]$) for ZO-SGD, the ultimately proven global convergence rate $t = \\\\mathcal{O} \\\\left( \\\\left( \\\\frac{r}{n} + 1 \\\\right) \\\\cdot \\\\left( \\\\frac{\\\\ell}{\\\\mu} + \\\\frac{\\\\ell \\\\alpha}{\\\\mu^2 B} \\\\right) \\\\log \\\\frac{\\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_0) - \\\\mathcal{L}^*}{\\\\epsilon} \\\\right)$ is essentially that of gradient descent rather than stochastic gradient descent, as it includes a logarithmic term related to precision. Malladi et al. 
(2023) do not reveal the true convergence rate of ZSG! Yue et al. (2023) provide the convergence rate $ \\\\mathcal{O} \\\\left(\\\\frac{ED_1}{\\\\sigma_d}\\\\log \\\\left( \\\\frac{1}{\\\\epsilon} \\\\right) \\\\right) $ for standard zeroth-order optimization algorithm and the convergence rate $ \\\\mathcal{O} \\\\left( \\\\frac{ED\\\\_{\\\\frac{1}{2}}}{\\\\sqrt{\\\\sigma_d}} \\\\cdot \\\\log \\\\frac{L}{\\\\mu} \\\\cdot \\\\log \\\\left( \\\\frac{1}{\\\\epsilon} \\\\right) \\\\right)$ for accelerated zeroth-order optimization algorithm. Similarly, these are both based on gradient descent rather than stochastic gradient descent! The iterative algorithm $x\\\\_{t+1} \\\\gets x_t - \\\\alpha \\\\left( \\\\frac{1}{n} \\\\sum\\\\_{i=1}^n \\\\operatorname{clip}_C \\\\left( \\\\frac{f(x_t + \\\\lambda u_t; \\\\xi_i) - f(x_t - \\\\lambda u_t; \\\\xi_i)}{2\\\\lambda} + z_t \\\\right) u_t \\\\right)$ proposed by Zhang et al. (2023) still relies on full gradient information rather than stochastic gradient information. Additionally, the vector $u_t$ used to construct the gradient estimate obeys Spherical distribution instead of Gaussian distribution. In summary, our theoretical analysis is entirely different from previous works. It does not require access to all sample information at each iteration, and the convergence rate $\\\\mathcal{O} \\\\left( \\\\frac{\\\\operatorname{tr}(M) \\\\sigma^2}{\\\\lambda\\\\_{\\\\min}^2(M)} \\\\frac{1}{\\\\epsilon} \\\\right)$ we achieve is unique.\", \"q4\": \"The author claims in Corollary 4.4 that the algorithm will not converge with a fixed stepsize. I don't think this is correct. One can choose $ \\\\eta = \\\\frac{\\\\log T}{T} $, and then the algorithm converges with rate $ T = \\\\frac{1}{\\\\epsilon} \\\\log \\\\left( \\\\frac{1}{\\\\epsilon} \\\\right) $. Or can the authors clarify what they mean by a \\\"fixed\\\" step. In Corollary 4.6, when choosing $ \\\\sigma = 0 $, the complexity should reduce to the deterministic linear rate $\\\\log \\\\left( \\\\frac{1}{\\\\epsilon} \\\\right)$. Is the current analysis tight?\", \"a4\": \"First, We apologize for the ambiguity in our statement. What we mean by a fixed step size is one that is independent of $T$. $\\\\eta$ only needs to be a constant that satisfies relation $\\\\eta \\\\leq \\\\frac{1}{12 \\\\operatorname{tr}(M)}$. Then, for decreasing step size, it is unnecessary to assume $\\\\sigma=0$ in Corollary 4.6 to recover the convergence rate of gradient descent. Because it is rare to gradually decrease the step size during gradient descent in practical scenarios. We primarily use mathematical induction to prove our conclusions, drawing inspiration from the optimal convergence analysis framework for stochastic gradient descent algorithms proposed by Stich (2019). Therefore, we believe that our analysis is rigorous.\"}", "{\"title\": \"Rebuttal One\", \"comment\": \"Dear Reviewer Styw,\\nThank you very much for your time and your comments on our work. We will address the weaknesses and questions in the following QnA format:\", \"q1\": \"The main idea in the paper is (i) $\\\\mathrm{tr}(\\\\mathbf{M})$ \\u226a $d \\\\lambda\\\\_{max}(\\\\mathbf{M})$ and (ii) one algorithm has $\\\\mathrm{tr}(\\\\mathbf{M})$ and the other has $d \\\\lambda_{max}(\\\\mathbf{M})$, the complexity of the former algorithm is better than the latter. Without a formal lower bound for the latter, such a conclusion cannot be made.\", \"a1\": \"There is a highly influential paper in the field of SGD that can provide some support. 
Considering the objective function $F$ is both strongly convex and smooth, Rakhlin et al. (2011) have already established the optimal convergence rate: $ \\\\mathbb{E}[F(\\\\mathbf{w}_T) - F(\\\\mathbf{w}^*)] \\\\leq \\\\frac{2\\\\mu G^2}{\\\\lambda^2 T}.$ We calculate the partial derivatives in $d$ directions to obtain the gradient estimate, which can be directly extended to the optimal lower bound. Our primary goal is to theoretically explain why ZSG outperforms ZSC in practice in most cases and why researchers tend to prefer the ZSG algorithm for model optimization. Many studies have adopted ZSG to fine-tune large language models, such as (Malladi et al., 2023), (Zhao et al., 2024), (Guo et al., 2024), (Chen et al., 2024), and so on. Our experiments also confirm the superiority of ZSG. This is because, for real-world datasets, the eigenvalue distribution of the Hessian is often skewed, meaning condition $\\\\mathrm{tr}(\\\\mathbf{M})$ \\u226a $d \\\\lambda\\\\_{max}(\\\\mathbf{M})$ holds. Our proposed theory can explain this phenomenon and provide valuable guidance for practical applications.\", \"q2\": \"Even ignoring this, similar results have been obtained in the deterministic setting previously and extension to the stochastic finite-sum setting is not significant and raise up to the level of ICLR acceptance.\", \"a2\": \"We believe that the theoretical analysis in the stochastic finite-sum setting provides sufficient theoretical contributions to make our work acceptable to ICLR. We illustrate our point using a highly influential paper that applies zeroth-order optimization algorithms to fine-tune large language models as an example. Although Malladi et al. (2023) propose the descent theorem $\\\\mathbb{E}[\\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_{t+1})|\\\\boldsymbol{\\\\theta}_t]-\\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_t) \\\\leq -\\\\eta \\\\|\\\\nabla \\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_t)\\\\|^2 + \\\\frac{1}{2} \\\\eta^2 \\\\ell \\\\cdot \\\\gamma \\\\cdot \\\\mathbb{E}[\\\\|\\\\nabla \\\\mathcal{L}(\\\\boldsymbol{\\\\theta}; \\\\mathcal{B})\\\\|^2]$ for ZO-SGD, the ultimately proven global convergence rate $t = \\\\mathcal{O} \\\\left( \\\\left( \\\\frac{r}{n} + 1 \\\\right) \\\\cdot \\\\left( \\\\frac{\\\\ell}{\\\\mu} + \\\\frac{\\\\ell \\\\alpha}{\\\\mu^2 B} \\\\right) \\\\log \\\\frac{\\\\mathcal{L}(\\\\boldsymbol{\\\\theta}_0) - \\\\mathcal{L}^*}{\\\\epsilon} \\\\right)$ is essentially that of gradient descent rather than stochastic gradient descent, as it includes a logarithmic term related to precision. Malladi et al. (2023) do not reveal the true convergence rate of ZSG! In the context of large-scale optimization problems, Malladi et al. (2023) fail to provide the true convergence rate of ZSG in the stochastic finite-sum setting. In other words, they do not successfully attempt to theoretically explain why ZSG performs better in practice. This highlights the challenging nature of our work while also demonstrating its potential to offer significant insights.\", \"references\": \"[1]Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647, 2011.\\n\\n[2]Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alex Damian, Jason D Lee, Danqi Chen, and Sanjeev Arora. Fine-tuning language models with just forward passes. 
Advances in Neural Information Processing Systems, 36:53038\\u201353075, 2023.\\n\\n[3]Yanjun Zhao, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian, and Ivor W Tsang. Second-order fine-tuning without pain for llms: A hessian informed zeroth-order optimizer. arXiv preprint arXiv:2402.15173, 2024.\\n\\n[4]Guo W, Long J, Zeng Y, et al. Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity[J]. arXiv preprint arXiv:2406.02913, 2024.\\n\\n[5]Chen Y, Zhang Y, Cao L, et al. Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures[J]. arXiv preprint arXiv:2410.07698, 2024.\"}", "{\"comment\": \"Many thanks for the response!\\n\\nA1. I don't think the highly influential paper considers the quadratic regularity assumption. This means that the settings in the current paper and in the highly influential paper are different, and it is very likely that different optimal rates with different design and analysis techniques hold. My main point is that there is no theoretically sound explanation for why ZSC cannot achieve the Tr(M) rate.\\n\\nA2. According to (Hanzely et al., 2018), ZSC can achieve Tr(M) rate when $p_i\\\\sim M_{ii}$ (Corollary 4.3.). I guess this is the Hessian information mentioned by the authors. However, in the analysis of ZSG in Theorem 4.5 in the current paper, the knowledge of Tr(M), $\\\\lambda_{min}(M)$, and $\\\\lambda_{max}(M)$ is also required to set up different parameters. This is also information about the Hessian matrix and is not feasible in the context of zeroth-order optimization.\\n\\nA3. Although these are different settings, similar analysis techniques can be used, which makes the contribution less significant.\\n\\nA4 and A5. Thanks for the explanation!\"}", "{\"summary\": \"The paper studies zeroth-order optimization methods (aka. black-box optimization). Specifically, the authors provide theoretical explanation for an observation in practice that the zeroth-order Gaussian gradient descent (ZSG) usually outperforms the zeroth-order version of coordinate descent (ZSC). They show that the improvement mainly comes from the skewness of the Hessian matrix of the objective function. Loosely speaking, the iteration complexity of ZSG scales with $Trace(H)/ \\\\lambda_{\\\\min}(H)$, while the complexity of ZSC scales with $d*\\\\lambda_{\\\\max}(H)/\\\\lambda_{\\\\min}(H)$ where $H$ is the Hessian matrix, $d$ is the dimension. They also perform numerical experiments to show that ZSG outpeforms ZSC in practice.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and the research motivation is clear. Zeroth-order optimization plays a crucial role in domains where privacy in training is essential or where model size makes gradient computation impractical. The theoretical findings are also interesting.\", \"weaknesses\": \"While the authors managed to show the improvement of ZSG over ZSC when the objective's Hessian is skewed for the quadratic function, the claim is not so clear for the general function. For example, the factor $\\\\gamma_u / \\\\gamma_l^2$ in eq (23) can be quite uncontrollable compared to the advantage gained from the skewness. I suggest the authors discuss this trade-off more, i.e., pinpoint cases where the improvement is meaningful.\", \"questions\": \"(1) Does ZSC have the \\\"alpha term\\\" that you ignored from the complexity of ZSG? 
Does this alpha term affect the comparison between the two algorithms given that it is not ignored?\\n\\n(2) Please provide more explanation on the sentence in lines 172 and 173. E.g., why are those parameters independent of the condition number of the data?\\n\\n(3) Regarding conditions (2) and (3), is it \\\"exists z\\\" or \\\"for all z\\\"? Please discuss these conditions more, especially how strong they are compared to \\\"L-smooth\\\" and \\\"strongly convex\\\"? Some discussions would be helpful instead of just citing [Frangella el at. 2023].\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Gt8G,\\n\\nThank you very much for your feedback. We will address each of the concerns and issues you raised, and provide clarifications accordingly.\\n\\n1.We have carefully considered your suggestions. Due to space constraints, we plan to incorporate a summary of our results and algorithm efficiency in the form of a table in the introduction section of the camera-ready version.\\n\\n2.We are sharing a portion of the main code for our experiments, which can be accessed at the following link:https://anonymous.4open.science/r/222-3F2F.\\n\\nWe would like to express our sincere gratitude once again for your valuable feedback\\uff01\"}", "{\"comment\": \"Thank you for your reply. I appreciate the difficulty when working with general functions (with quadratic regularity assumption). However, if the analysis mainly focuses on quadratic function -- we are not able to claim improvement in the general case, I do not think it is significant enough and am not able to raise the score.\", \"future_suggestion\": \"I believe the gain ratio $\\\\frac{Tr(M)}{\\\\lambda_{min}(M)}$ is closely related to the extra term $\\\\frac{\\\\gamma_u}{\\\\gamma_l^2}$ as both of them are problem's parameters. If the authors manage to show that the trade-off is meaningful, e.g., the gain is not dominated by the extra term, the improvement is indeed significant.\"}", "{\"comment\": \"Dear Authors,\\n\\nSince my comments were not taken into account in the revised version of the paper, I am decreasing my grade to 3.\"}", "{\"metareview\": \"This paper studies stochastic zeroth-order optimization. Traditionally, zeroth-order stochastic algorithms are based on SGD, which using Gaussian smoothing to estimate the gradient using zeroth-order information. This is called ZSG method. This paper studies estimating partial derivatives along coordinate directions, which is called ZSC method. The authors claim that ZSC achieves a better complexity than ZSG. However, the reviewers found that this is not rigorously justified. To support this claim, the authors need to develop a lower bound for ZSG, but this is not developed in the paper.\", \"additional_comments_on_reviewer_discussion\": \"Further discussed the novelty.\"}", "{\"comment\": \"Thank you for your reply!\\n\\nA.1 The strictly theoretical bound you described is not the main focus of our paper. Our core objective and primary contribution lie in attempting to explain, from a theoretical perspective, why ZSG demonstrates advantages in practice. This offers an insightful and innovative perspective, rather than merely conducting application-oriented research with ZSG in a conventional manner. We still believe that our work provides a sufficiently novel perspective.\\n\\nA.2 Hanzely et al. 
(2018) introduce importance sampling to achieve $ \\\\mathrm{tr}(\\\\mathbf{M})$ rate for ZSC. Their approach requires knowledge of the all diagonal elements of the Hessian matrix in order to accurately perform each iteration. The step size involved in our theorem is related to $ \\\\mathrm{tr}(\\\\mathbf{M})$, which aids in proving our result. In practice, although we do not know the exact value of $ \\\\mathrm{tr}(\\\\mathbf{M})$, we can improve practical performance by only adjusting step size (only related to $ \\\\mathrm{tr}(\\\\mathbf{M})$), which is not possible in (Hanzely et al., 2018). Additionally, $\\\\lambda_{\\\\min}(M)$ and $\\\\lambda_{\\\\max}(M)$ mentioned are only relevant to the proof process. In practice, there is no need to obtain these values.\\n\\nA.3 If you believe our contributions are not significant, then the paper ''Zeroth-order optimization with weak dimension dependency'', published in COLT, may also lack sufficient contribution. Additionally, in the field of fine-tuning large language models, Malladi et al. (2023) claim to prove the rate of ZO-SGD. However, what they actually prove is the rate of gradient descent (GD), without revealing the true rate of ZO-SGD. Based on your perspective, we believe this paper may not warrant publication either. We believe our work makes significant contributions and has the potential to provide valuable insights to the optimization community.\"}", "{\"summary\": \"This paper investigates large-scale finite-sum optimization within the zeroth-order (ZO) stochastic optimization paradigm, focusing specifically on two methods: ZO-SGD-Gauss (ZSG), which pre-processes the stochastic gradient with a Gaussian vector, and ZO-SGD-Coordinate (ZSC), which estimates partial derivatives along coordinate directions. The study addresses the notable performance gap between ZSG and ZSC, aiming to provide theoretical insights that explain ZSG's empirically observed advantages. To achieve this, the authors introduce the \\\"quadratic regularity assumption\\\" on the Hessian matrix, a relaxation of typical smoothness and strong convexity assumptions. They demonstrate that this assumption allows for incorporating Hessian information into complexity analysis, yielding convergence rates that reveal ZSG's improved efficiency in certain settings. The authors validate their analysis through synthetic and real-world experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors reinforce their theoretical findings with experimental results on both synthetic and real-world datasets, which enhances the paper's credibility. The empirical results are presented clearly and support the theoretical claims regarding convergence rates and query complexity.\", \"weaknesses\": [\"The paper seems to be very interesting, however, the following points are present in the paper which hinder the perception of readiness and clarity of the paper:\", \"Introduction. The introduction is not well designed.... 
This could be improved, for example, with a table that clearly presents the main results of the work as well as its efficiency compared to other algorithms.\", \"I could not find a link to GitHub or another source where I can find the code for the experiments.\"], \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
I am confused why authors say ZSC only achieves $d\\\\lambda(\\\\mathbf{M})$.\", \"a2\": \"These works are different from ours. First, Hanzely et al. (2018) and Wang et al. (2024) address different research problems compared to ours. Our study focuses on optimization problems in the finite-sum form, where at each iteration, we only need to access a subset of samples to construct the stochastic zeroth-order gradient estimate. In contrast, Hanzely et al. (2018) and Wang et al. (2024) require accessing the entire sample set at each iteration to construct their gradient estimates.\\nSecond, Hanzely et al. (2018) demonstrate that replacing Gaussian sampling with coordinate sampling, which corresponds to the commonly used coordinate descent method, can achieve the rate $\\\\mathrm{tr}(\\\\mathbf{M})$, provided that importance sampling technique is employed. However, in the context of zeroth-order optimization, it is not feasible to obtain information about the Hessian matrix, making it impossible to utilize importance sampling techniques.\"}", "{\"summary\": \"The paper studies zeroth-order methods for finite-sum optimization and compares the complexity of two algorithms, ZSG and ZSC. The authors claim in the paper to rigorously and theoretically prove that ZSG is better than ZSC, under the quadratic regularity assumption.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The study on zeroth-order optimization is a trendy and important topic in optimization and machine learning, given lots of interesting applications in black-box attack, reinforcement learning, and fine-tuning language models.\", \"weaknesses\": \"1. The authors prove that ZSG converges with $tr(M)$, while previous work suggests that ZSC converges with $d\\\\lambda(M)$. Based on this, the authors claim that ZSG is better when $tr(M)\\\\leq d\\\\lambda(M)$. I don't think this is a correct statement. There is no result in the paper showing that ZSC cannot achieve the rate $tr(M)$, and its $d\\\\lambda(M)$ rate may come from the fact that previous analysis is not tight. To make the claim mathematically rigorous, the authors should provide the lower-bound under the current quadratic regularity assumption, showing that the rate of ZSC is $\\\\Omega(d\\\\lambda(M))$. Only then it is valid to say ZSG $\\\\leq tr(M) \\\\leq d\\\\lambda(M) \\\\leq$ ZSC.\\n\\n2. I am also not sure why ZSC cannot achieve the rate $tr(M)$. The current ZSC considered in the paper queries all $d$ dimension at each iteration. Its complexity is thus deducted as $d$ times that of first-order methods. However, in ZSC, one can also only do random sampling at each iteration. For example, sampling from $\\\\\\\\{1,2,\\\\cdots,d\\\\\\\\}$ instead of iterating over all $d$ dimension. This also builds a gradient estimator similar to ZSG, and similar analysis could apply. Specifically in the previous paper [Hanzely et al, 2018] and [Wang et al, 2024] mentioned by the authors, the rate of ZSC is also $tr(M)$ under the quadratic regularity assumption, e.g., Table 1 of [Hanzely, et al, 2018]. I am confused why authors say ZSC only achieves $d\\\\lambda(M)$.\\n\\n3. The rate of zeroth-order method has already been extensively studied under similar assumptions as the quadratic regularity assumption, e.g., [Malladi et al, 2023], [Yue et al, 2023], [arXiv: 2310.09639]. Stochastic mini-batch settings are also considered in these paper. 
Therefore, I am not sure how novel and challenging it is to obtain the results in the current paper given all these previous works.\\n\\n4. The author claims in Corollary 4.4 that the algorithm will not converge with a fixed stepsize. I don't think this is correct. One can choose $\\\\eta=(\\\\log T)/T$, and then the algorithm converges with rate $T=(1/\\\\epsilon)\\\\log(1/\\\\epsilon)$. Or can the authors clarify what they mean by a \\\"fixed\\\" step? In Corollary 4.6, when choosing $\\\\sigma=0$, the complexity should reduce to the deterministic linear rate $\\\\log(1/\\\\epsilon)$. Is the current analysis tight?\\n\\n5. I feel the paper is written in a rush and not well polished. There are lots of mistakes in grammar. For example, it should be smoothness assumption and strong convexity assumption in line 144; lines 162-163 are not well-written English; in Theorems 4.3 and 4.5, it should be \\\"let the objective be quadratic\\\" and \\\"let x be updated\\\", etc.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
Clearly, $\\\\gamma_l=\\\\gamma_u=1$ and $\\\\frac{\\\\mu}{L} \\\\le 1 \\\\le \\\\frac{L}{\\\\mu}$. In addition, the quadratic regularity and the quadratic regularity ratio generalize the notions of strong convexity, smoothness, and condition number to the Hessian norm. We also suppose F is an ill-conditioned quadratic. In the case of the L-smoothness assumption, the quadratic term is weighted by a matrix whose diagonal entries correspond to the largest eigenvalue of the Hessian matrix. In contrast, for the quadratic regularization assumption, the quadratic term is simply weighted by the Hessian matrix itself. The latter is a weaker condition and provides a tighter upper bound.\\n\\nWe would like to express our sincere gratitude once again for your valuable feedback\\uff01\"}", "{\"title\": \"Rebuttal Three\", \"comment\": \"Q5: I feel the paper is written in a rush and not well polished. There are lots of mistakes in grammar. For example, it should be smoothness assumption and strong convexity assumption in line 144; line 162-163 is not well written English; In Theorem 4.3, 4.5, it should be \\\"let objective be quadratic\\\" and \\\"let x be update\\\", etc.\", \"a5\": \"We sincerely appreciate your thorough review of our paper. We have made the requested revisions accordingly.\", \"references\": \"[1]Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647, 2011.\\n\\n[2]Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alex Damian, Jason D Lee, Danqi Chen, and Sanjeev Arora. Fine-tuning language models with just forward passes. Advances in Neural Information Processing Systems, 36:53038\\u201353075, 2023.\\n\\n[3]Yanjun Zhao, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian, and Ivor W Tsang. Second-order fine-tuning without pain for llms: A hessian informed zeroth-order optimizer. arXiv preprint arXiv:2402.15173, 2024.\\n\\n[4]Guo W, Long J, Zeng Y, et al. Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity[J]. arXiv preprint arXiv:2406.02913, 2024.\\n\\n[5]Chen Y, Zhang Y, Cao L, et al. Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures[J]. arXiv preprint arXiv:2410.07698, 2024.\\n\\n[6]Filip Hanzely, Konstantin Mishchenko, and Peter Richt\\u00b4arik. Sega: Variance reduction via gradient sketching. Advances in Neural Information Processing Systems, 31, 2018.\\n\\n[7]Yilong Wang, Haishan Ye, Guang Dai, and Ivor Tsang. Can gaussian sketching converge faster on a preconditioned landscape? In Forty-first International Conference on Machine Learning, 2024.\\n\\n[8]Pengyun Yue, Long Yang, Cong Fang, and Zhouchen Lin. Zeroth-order optimization with weak dimension dependency. In The Thirty Sixth Annual Conference on Learning Theory, pp. 4429\\u20134472. PMLR, 2023.\\n\\n[9]Zhang L, Thekumparampil K K, Oh S, et al. DPZero: dimension-independent and differentially private zeroth-order optimization[C]//International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023. 2023.\\n\\n[10]Stich S U. Unified optimal analysis of the (stochastic) gradient method[J]. arXiv preprint arXiv:1907.04232, 2019.\"}", "{\"summary\": \"This paper provides a separation result between zeroth-order stochastic gradient descent and zeroth-order stochastic finite-difference method under a certain quadratic regularity assumption. 
The results essentially extend similar results in the deterministic setting to the stochastic finite-sum setting.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper addresses an important problem of providing separation results between two competing algorithms.\", \"weaknesses\": \"The main idea in the paper is (i) tr(M) \\u226a d \\u03bb_max(M) and (ii) one algorithm has tr(M) and the other has d \\u03bb_max(M), so the complexity of the former algorithm is better than that of the latter. Without a formal lower bound for the latter, such a conclusion cannot be made.\\n\\nEven ignoring this, similar results have been obtained in the deterministic setting previously, and the extension to the stochastic finite-sum setting is not significant and does not rise to the level of ICLR acceptance.\", \"questions\": \"please see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BdPbmgJ2jo
High-dimensional Asymptotics of VAEs: Threshold of Posterior Collapse and Dataset-Size Dependence of Rate-Distortion Curve
[ "Yuma Ichikawa", "Koji Hukushima" ]
In variational autoencoders (VAEs), the variational posterior often aligns closely with the prior, a phenomenon known as posterior collapse, which leads to poor representation learning quality. An adjustable hyperparameter beta has been introduced in VAEs to address this issue. This study sharply evaluates the conditions under which posterior collapse occurs with respect to beta and dataset size by analyzing a minimal VAE in a high-dimensional limit. Additionally, this setting enables the evaluation of the rate-distortion curve of the VAE. The analysis shows that, unlike typical regularization parameters, VAEs face "inevitable posterior collapse" beyond a certain beta threshold, regardless of dataset size. The dataset-size dependence of the derived rate-distortion curve also suggests that relatively large datasets are required to achieve a rate-distortion curve with high rates. These results robustly explain generalization behavior across various real datasets with highly non-linear VAEs.
[ "statistical physics", "replica method", "variational autoencoder", "exact asymptotics" ]
Reject
https://openreview.net/pdf?id=BdPbmgJ2jo
https://openreview.net/forum?id=BdPbmgJ2jo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVElSR7aLp", "xlJXHvuZ9f", "tnZSWD95Xi", "ricZRS3wP3", "qKPwhaFnz9", "piIgSwf4zZ", "oZWVcERAWj", "hKfJydCkUk", "eeqqxH4dX6", "be2vGcldl0", "ZZTnEi6QMi", "T0eYwqOeG6", "RsdWYxehj4", "QVA7d91yRV", "OVwzjyLycM", "NtPOfBmL62", "NnTAReUBzx", "HxamtDjiZ1", "FFqwNq27W3", "DVkqP954Ao", "Bb5XdRjZCC", "8PDwJKt4Du", "6ED6PLTmrD", "5eHWA0r8Uq", "3zfe8kJUBR", "3yW78egrpQ", "1rkbdnPeoM" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732167155108, 1732162406374, 1730741053081, 1732795622024, 1732162760592, 1737523730632, 1732452462873, 1732172348447, 1732451114092, 1732545782086, 1733186793426, 1730514633408, 1732684227499, 1733220127752, 1732795534332, 1732166740401, 1733106254678, 1732795798040, 1732165210431, 1732803568932, 1732405447534, 1730460215510, 1732172133755, 1730678467198, 1733186723539, 1734742457831, 1733186763138 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_NKDk" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_LZGQ" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_1vWD" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_NKDk" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_Xf1j" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_LZGQ" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_Xf1j" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_1vWD" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Reviewer_LZGQ" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ], [ "ICLR.cc/2025/Conference/Submission5872/Area_Chair_ksd6" ], [ "ICLR.cc/2025/Conference/Submission5872/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response (2)\", \"comment\": \"## **Usage of Replica Method**\\n\\nThe replica method provides significant advantages for analyzing high-dimensional learning models.\\nTraditional PAC-bound analyses of generalization error often assume that the data are iid and focus on worst-case scenarios.\\nThese approaches fail to explain modern phenomena, such as the double-descent effect, in which increasing model capacity improves generalization after the interpolation peak.\\nFurthermore, replica analyses enable sharp evaluations of the dataset-size dependence of generalization error by incorporating data structures 
and architectural features into the analysis [1,2,3,4,5].\\nIndeed, as in other replica analyses of unsupervised learning discussed in RELATED WORK (High-dimensional asymptotics from the replica method), we assume a spiked data structure, which enables us to characterize phenomena such as posterior collapse and double descent.\\nThis flexibility and precision are key strengths of the replica method.\\n\\n## **Clarifications on Metrics (Eq. 9)**\\n\\nWe appreciate your question about the distinction between signal recovery error and the distortion metric. These metrics serve different purposes, as clarified in the revised manuscript (Lines 222\\u2013224):\\n\\n- **Distortion (Reconstruction Error)**: : Measures how well the data can be reconstructed after compression into and decoding from the latent space, reflecting the fidelity of the VAE\\u2019s encoder-decoder process.\\n- **Signal Recovery Error**: Focuses on the decoder alone and evaluates how well the latent variables $c$ are decoded into the data space. It provides a measure of how closely the data generated by the VAE matches the true data distribution, independent of the encoder.\\n\\n## **overlearning**\\n\\nTo avoid confusion, we have replaced *overlearning* with *overfitting* in the revised manuscript, aligning with standard terminology.\\n\\nBy incorporating your feedback on the replica method, simple settings, optimal $\\\\beta_{\\\\mathrm{VAE}}$, and distinctions between metrics, we believe the revised manuscript more effectively conveys our contributions and their broader implications for the machine learning community. We respectfully request that you reconsider the scores for Soundness and Contribution based on these revisions.\"}", "{\"title\": \"Response (1)\", \"comment\": \"We sincerely appreciate your detailed and constructive feedback. We are encouraged by your acknowledgment of our work's relevance to the ICLR community, especially in contributing to the theoretical understanding of the dataset-size dependence of RD curves. Below, we address your comments in detail.\\n\\n## **Relation to [1, 2]**\\n\\nIf we are not mistaken, there appears to be a misunderstanding regarding the results presented in Section 6.5.\\nSection 6.5 of our manuscript investigates how the signal recovery error depends on $\\\\beta_{\\\\mathrm{VAE}}$, showing that the error demonstrates qualitatively consistent behavior across varying network capacities. This consistency confirms the inevitable posterior collapse and highlights consistent trends in optimal $\\\\beta_{\\\\mathrm{VAE}}$ corrections for finite $\\\\alpha$.\\nNote that the RD curve analysis in Section 6.4 is purely theoretical and independent of network capacity considerations.\\n\\nIn the following, we explore the relationship between references [1] and [2].\\nFor reference [1], although variations in network capacity influence RD curve behavior, the qualitative trends we predict remain consistent.\\nFor example, Figure 4 in [1] illustrates RD curves moving closer to the rate and distortion axes as the dataset size increases, which aligns with our theoretical prediction.\\nSimilarly, the gradient of $-1$ observed around $\\\\beta_{\\\\mathrm{VAE}}=1$ supports this consistency. 
Additionally, Figure 4 in [1] provides numerical evidence that larger datasets are necessary in high-rate regions, aligning with our theoretical predictions.\\nWe acknowledge that further analysis of the relationship between network capacity and RD behavior is significant for future research.\\nIn reference [2], the PAC-bound analysis offers a worst-case scenario, and the correspondence between their results and ours is not immediately evident.\\n\\n- [1] Bozkurt Alican et al., Rate-regularization and generalization in VAEs, arXiv preprint arXiv:1911.04594 (2019).\\n- [2] Cherief-Abdellatif Badr-Eddine et al., On PAC-Bayesian reconstruction guarantees for VAEs, AISTATS2022.\\n\\n## **Core Message and Practical Implications**\\n\\nWe appreciate the feedback regarding the clarification of our core message.\\n**Our primary contribution lies in the theoretical characterization of VAEs under high-dimensional asymptotics, where $d \\\\to +\\\\infty$ and $n \\\\to +\\\\infty$ with a fixed ratio $\\\\alpha = d/n$**.\\nThis approach has gained attention in learning theory because this regime captures intriguing phenomena such as double descent and the advantages of low-dimensional manifold structures [3, 4, 5], which cannot be explained by traditional PAC-bound methods.\\nRecent studies have applied this framework to denoising autoencoders [6] and standard autoencoders [7]. In this work, we extend the framework to VAEs and derive a general formula (Claim 5.2) to analyze the dataset-size dependence of generalization performance and posterior collapse.\\n\\nA key practical insight from our analysis is the importance of understanding and tuning \\n$\\\\beta_{\\\\mathrm{VAE}}$.\\n**As noted by Reviewer Xf1j, \\\"Figure 2 also shows a long plateau in the reconstruction error for large values of $\\\\beta$. This is backed by Claim 6.1 (in the large $\\\\beta$ limit) and lines 428-430 provide concrete guidance to practitioners about the risks of a large $\\\\beta$ when training.\\\"** \\nWe will revise the manuscript to highlight these insights and their relevance to real-world applications, summarizing **the key engineering takeaways in the updated CONCLUSION section (Line 516-529).**\\n\\n\\n- [3] Lenka Zdeborova, Insights from exactly solvable high-dimensional models, ICLR2023\\n- [4] Cory Stephenson et al., On the geometry of generalization and memorization in deep neural networks, ICLR2021\\n- [5] Federica Gerace et al., Generalisation error in learning with random features and the hidden manifold model, ICML 2020\\n- [6] Hugo Cui and Lenka Zdeborova, High-dimensional Asymptotics of Denoising Autoencoders, NeurIPS2023\\n- [7] Maria Refinetti and Sebastian Goldt, The dynamics of representation learning in shallow, non-linear autoencoders, ICML2022\\n\\n## **The regime $\\\\alpha < 1$**\\n\\nThe regime $\\\\alpha < 1$, where the dimensionality of the data exceeds the sample size, is uncommon in standard benchmark datasets but plays a significant role in high-resolution or low-sample-size scenarios.\\nThis regime is essential for understanding overparameterization and its relationship to the phenomenon of double descent.\\n**Although our analysis encompasses the regime $\\\\alpha < 1$, it is not restricted to this range; Claim 4.2 allows for a comprehensive characterization of signal recovery error and RD behavior over any $\\\\alpha$, ensuring its broad applicability.**\"}", "{\"summary\": \"This paper studies the RD curves in VAEs from a function of dataset size and dimensionality. 
The authors suggest that the RD curves, as a function of data complexity $\\alpha$ (# data points / dim of data) and $\\beta$, can be divided into three categories of overfitting, learning, and underfitting; in the high $\\alpha$ regime, smaller $\\beta$ is needed in order to avoid over-regularizing the model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"To the best of my knowledge, this is the first paper that studied RD curves in VAEs as a function of dataset size and data dimensions. This, I think, is a valuable topic of study and will indeed be of interest to the ICLR community.\", \"The theory in the paper, to the best of my understanding, is sound.\", \"The paper for the most part reads well.\"], \"weaknesses\": [\"There is no study of the network capacity in this work. While I understand that this is theoretical work, the authors do make a claim that the same results hold for more complex networks. However, there are prior works that suggest that RD curves for different network capacities behave differently [1,2]. Could the authors comment on this?\", \"It is also not clear to me what the message of the paper is. It of course makes sense that when you don't have a lot of data in high dimensions, you want to incorporate prior knowledge (such as regularization). Similarly, when you have a lot of data, you don't need a lot of regularization, as is evident from all the recent DGMs. Furthermore, the $\\alpha < 1$ regime is hardly interesting as it is almost never the case. So practically, what does this mean for people employing VAEs? What is the core message here?\", \"I do not think the experiments are strong enough to back the claims made in this paper. First, $\\alpha$ should have been studied as a function of both $n$ and $d$ separately. Here, $d$ was kept fixed. Furthermore, the data choice here is extremely specific. I understand the design choice, but some controlled experiments on real-world datasets are also necessary before showing Figure 3 left with those specific values.\", \"[1] Bozkurt, Alican, et al. \\\"Rate-regularization and generalization in VAEs.\\\" arXiv preprint arXiv:1911.04594 (2019).\", \"[2] Ch\\u00e9rief-Abdellatif, Badr-Eddine, et al. \\\"On PAC-Bayesian reconstruction guarantees for VAEs.\\\" International conference on artificial intelligence and statistics. PMLR, 2022.\", \"**Minor comments**\", \"\\\"Notations\\\" should not be placed in Related work, I would say\", \"I would strongly advise avoiding $D$ for the variances in $q$ and using $\\\\sigma^2$ instead, as it is the most common symbol in the literature for this.\"], \"questions\": [\"Can you comment on how much the analysis is affected by the fact that $D$ is fixed?\", \"Are the RD curves computed for the training set or test set? These two curves can be widely different.\", \"Do the authors mean overfitting by \\\"overlearning\\\"? If yes, I would say replace it with overfitting to avoid confusion :)\", \"This is not clear to me, but how did the authors arrive at the values for Figure 3 Left? This does not seem to match the other figures.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your valuable feedback and for taking the time to review our rebuttal. 
We are pleased to hear that the concerns raised have been resolved, and we appreciate your constructive suggestions.\\n\\n**In response to your insightful comments, we have extended one of our key claims to the case of $k > 1$, as detailed in Lines 456\\u2013460 of the revised manuscript. This extension enhances the practical implications of our findings and better connects our theoretical contributions to real-world engineering applications.** \\nAdditionally, we recognize the importance of clearly articulating the relevance of our results for engineering applications. As a step in this direction, we have added concise interpretations in Section 6 for each result, and we plan to further elaborate on these implications in the camera-ready version.\\n\\nWe believe these improvements significantly strengthen the clarity and impact of our work. Considering these updates, we kindly ask you to reconsider your score, as we hope the revised version aligns more closely with your expectations for both theoretical contributions and their applicability.\"}", "{\"title\": \"Response (2)\", \"comment\": \"## **Experiment**\\n\\n**The primary contribution of this paper is the derivation of a general formula, presented in Claim 4.2, for analyzing the dependence of sample complexity on VAEs**.\\nTo validate our theoretical predictions, we conducted numerical experiments using the widely studied MNIST and CIFAR10 datasets. These datasets are commonly used in previous studies, cited in the main text (Lines 499\\u2013501 in Section 6.5, Line 196-201 in Section 4) and [7], which support the Gaussian universality hypothesis [7]. **This hypothesis suggests that the data generation process of a Gaussian model can explain fundamental phenomena observed in real-world data.**\\nWhile we recognize the significance of verifying theoretical consistency across more comprehensive datasets, this lies beyond the scope of the current work. Our focus is on establishing a foundational theoretical framework, which we believe is essential for guiding future empirical validations. \\n\\n## **Minor Comments**\\n\\nWe have moved the ``Notations'' section to precede Related Work for improved clarity. Using $D$ for encoder variance follows conventions in prior studies summarized in Linear VAEs in RELATED WORK and is standard in theoretical analyses of linear VAEs.\\n\\n## **Fixed $D$**\\n\\nWhile $D$ was fixed to isolate the effects of $\\\\beta_{\\\\mathrm{VAE}}$, it can also be treated as an optimizable parameter, as demonstrated in [8]. \\n\\n- [8] N. Barkai and H. Sompolinsky, Statistical Mechanics of the maximum-likelihood density estimation, Physical Review E 50.3 (1994): 1766.\\n\\n## **RD Curve Evaluation**\\n\\nThe RD curves shown in Figure 4 represent theoretical results based on Claim 6.2 rather than being derived from empirical test data.\\n\\n## **Overlearning**\\n\\nTo avoid confusion, we have replaced *overlearning* with *overfitting* throughout the manuscript, as suggested.\\n\\n## **Figure 3 Explanation**\\n\\nFigure 3 presents a phase diagram demonstrating the detailed state of linear VAEs based on the parameters $Q$ and $m$, providing a comprehensive perspective on the generalization error trends shown in Figure 2.\\nFor instance, in Figure 2 (middle), at $\\\\beta_{\\\\mathrm{VAE}} = 1.0$ with $\\\\alpha = 2$, the signal recovery error begins to decrease, corresponding to the *Learning Phase* in the phase diagram. 
\\nIn contrast, at $\\\\beta_{\\\\mathrm{VAE}} = 1.8$ and $\\\\alpha = 1.5$, the signal recovery error does not improve, aligning with the *Regularized Phase* in the phase diagram. Detailed quantitative distinctions between these phases are discussed in Section 6.2.\\n\\n\\nWe hope our responses have sufficiently addressed your questions and clarified the contributions of our work. We kindly request that you reconsider your evaluation of our submission.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I would like to thank the authors for their detailed rebuttal and clarifications. My questions are resolved. I will keep my score.\\n\\nI believe to take this work further with current contributions (which are valuable) might require a bit more rewriting at the higher level. This will help to make the `core message` clearer, as also mentioned by Reviewer NKDk.\"}", "{\"title\": \"Response (2)\", \"comment\": \"## **Minimal Experiments**\\n\\nThe primary contribution of our paper is the derivation of a general formula, as presented in Claim 4.2, to analyze the sample complexity dependence of VAEs. Previous studies, mentioned in the main text (Line 495-497 in Section 6.5) and [7], that examine sample complexity $\\\\alpha$ dependence typically focus only on MNIST and CIFAR10 datasets and support the Gaussian universality [3] for deterministic autoencoders, which suggests that real data phenomena can be explained by the data generation process of a Gaussian model. **Accordingly, we employed this setting and consideration for our numerical experiments.** \\nWhile it is essential for the field to verify theoretical consistency across more comprehensive datasets, our main contribution lies in the theoretical analysis. Although our numerical experiments are limited, a more exhaustive empirical validation is part of future work.\\n\\n- [3] Maria Refinetti and Sebastian Goldt, The dynamics of representation learning in shallow, non-linear autoencoders, ICML2022\\n\\n## **Identifiability of Ground Truth Model** \\n\\nWe would appreciate further clarification regarding the definition of identifiability in this context, along with relevant references and an explanation of why it may pose a problem in our setting. The spiked covariance model we employ is widely used in theoretical studies of unsupervised learning discussed in RELATED WORK (High-dimensional asymptotics from replica method ), and denoising autoencoders [2] , autoencoders [3]. Its validity and relevance have been established in numerous prior works. Regarding the scaling issue with $\\\\sqrt{\\\\theta}$, we assumed a normalized model to avoid such ambiguities, ensuring that our results remain valid and interpretable. We will add a note in the manuscript to clarify this assumption.\\n\\n## **Relationship with Transitional Regimes**\\n\\nAs noted earlier, **we validated our theoretical predictions for $d=5{,}000$ and observed consistency with empirical results**. While these findings suggest robustness in transitional regimes, determining the precise applicability of our analysis to smaller $d$ remains an open question and an intriguing direction for future work.\\n\\n## **Datasets with Complex Spectrums**\\n\\nThe spiked covariance model is a standard framework in theoretical analyses of unsupervised learning discussed in RELATED WORK (High-dimensional asymptotics from replica method ), and denoising autoencoders [2], autoencoders [3]. 
\\n**Prior studies have demonstrated Gaussian universality, showing that Gaussian models can effectively explain the behavior of complex real-world datasets, as discussed in Line 197-201 and Line 494-498.**\\nIt has also been reported that Gaussian Universality exists in autoencoders where the VAE has been made deterministic [3].\\nHowever, these previous studies also used datasets such as MNIST, FashionMNIST, CIFAR10, and ImageNet, leaving it uncertain whether Gaussian Universality holds in cases with excessively complex spectrums.\\n\\n## **Clarification on Equation (9)**\\n\\nTo address your question about Equation (9), we have revised the manuscript to explicitly state that the expectation is taken over the data distribution. We hope this clarification resolves any ambiguity regarding the notation.\\n\\nBy addressing your concerns about model simplicity, transitional regimes, and dataset complexity, we believe we have strengthened the paper and clarified its contributions. We kindly request that you reconsider the scores for Soundness and Contribution in light of our revisions.\"}", "{\"comment\": \"Thank you for these comments.\\nThe identifiability of a statistical model is central when you try to estimate the parameters (see any basic statistics course). Without identifiability, this task is ill posed. Although it appears in the related works, I would have prefered to see at least a comment on this model.\\nIn any case, I feel this work is interesting but still at the beginning of its potential impact with many other aspects to analyse. I keep my score unchanged.\"}", "{\"title\": \"Respond to the Rebuttal\", \"comment\": \"Thank you for your response. I've read the rebuttal as well as the other reviews. The consensus does seem to be that there is a gap between some of the claims and the experiments, as well as the significance of the contributions.\\n\\nMy questions regarding the RD curves and fixed D has been addressed. However, the core issue remains the same. The contributions still seem misguided to me. As I mentioned, figure 3 (left) I believe does not add much to our understanding of VAEs as it matches the general ML intuition. Figure 3 (right) does make sense but this is also not surprising that for easier problems, you get better rate and distortions. Furthermore, the study is undermined by the practical verified difference between training and test RD values, as well as the different shapes of RD curves for different network capabilities. Sorry in advance if I'm misunderstanding but I do not follow the argument of \\\"long plateau in the reconstruction\\\" for high $\\\\beta$ values. Higher $\\\\beta$ leads to worse reconstructions, which is what Figure (2) middle showing. What am I missing here?\\n\\nOverall, while I think there is value to the theoretical findings of the paper, I keep my score.\"}", "{\"title\": \"Thanks\", \"comment\": \"Dear Reviewer 1vWD,\\n\\nWe greatly appreciate the time and effort you have dedicated to reviewing our work. As the deadline approaches with only one day remaining, we sincerely request your feedback on our rebuttal. Please inform us if any aspect of our explanation remains unclear.\\n\\nWe would greatly appreciate your confirmation on whether your concerns have been adequately addressed. If the issues are resolved, we would appreciate your reevaluating this study. 
We will respond promptly before the discussion deadline if further clarification is required.\\n\\nBest,\\n\\nThe authors\"}", "{\"summary\": \"This paper aims to analyze the solution learnt by a $\\\\beta$-VAE w.r.t. (1) the parameter $\\\\beta$, and (2) the training dataset size. The authors work in the high-dimensional asymptotic setting, and aim to characterize certain phenomena about the quality of the learnt solution by the VAE.\\n\\nTo do this, the authors theoretically analyze a linear VAE model (Eq (5)), in the high-dimensional asymptotic regime where $n,d \\\\rightarrow \\\\infty$ and $\\\\frac{n}{d} = \\\\alpha$ (sample complexity) stays finite, using the replica method as a heuristic to get around intractable calculations. This is presented in Section 5.\\n\\nThe asymptotic formulae are empirically verified in Section 6.1 and 6.2, on a synthetic data model of the spiked covariance matrix (Eq (4)). This is then used to draw interesting observations about the learning process and the quality of the learnt VAE solution. Figure 2 in particular shows many of the findings.\\n\\nThe authors then empirically show some of the findings (from the linear VAE setting) hold true for non-linear VAEs also, trained on real-world datasets like MNIST and FashionMNIST. This is presented in Section 6.5.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Technical strengths**:\", \"The paper sharply characterizes high-dimensional asymptotics for learning the linear VAE (Eq (5)) under the spiked covariance model (Eq (4)) with the regularized $\\\\beta$-VAE objective (Eq (6)).\", \"This is used to show interesting observations about the VAE learning process in Section 6.1 and 6.2. In particular, (1) Figure 2 shows a double-descent phenomenon w.r.to the sample complexity $\\\\alpha$, with the reconstruction error (Eq (9)) peaking at $\\\\alpha = 1$, and (2) Figure 2 also shows a long plateau in the reconstruction error for large values of $\\\\beta$. This is backed by Claim 6.1 (in the large $\\\\alpha$ limit) and lines 428-430 provide concrete guidance to practitioners about the risks of a large $\\\\beta$ when training.\", \"Section 6.5 (and Figure 5) shows this on real-world datasets MNIST and FashionMNIST also, where the insight can be used to practically choose the \\\"optimal\\\" value of $\\\\beta$ approximately equal to the noise ratio $\\\\hat{\\\\eta}$, which can be estimated using the training dataset.\", \"**Presentation strengths**:\", \"The paper is largely well-written and easy to follow. The authors include relevant explanations in most places. For example, the choice of the spiked covariance model as the synthetic data generating process was backed by evidence in Figure 1 of MNIST following something similar.\"], \"weaknesses\": [\"**Technical Weaknesses**:\", \"The main weakness is the fact that the theoretical results are not exact, since they have been developed using the replica method, which is a heuristic to get around intractable calculations.\", \"The authors work in the simple setting of $k = k^\\\\star = 1$. If I understand correctly, this means the true latent space is $1$-dimensional. It would have been nice to see the synthetic experiments with $k^\\\\star$ varying, say in $[1, 2, 4]$. In particular, what would the trend of $\\\\varepsilon_g$ w.r.to $k^\\\\star$ look like?\", \"Some of the claims can be better substantiated. 
For example, in the context of Figure 2, it would have been nice to see a plot of $\\\\varepsilon_g$ w.r.to $\\\\alpha$ for the optimal $\\\\beta$ choice. Is that perhaps monotonically decreasing? (This is similar to the double-descent observations in literature, where using the optimal parameter leads to a monotonically decreasing curve instead of double-descent).\", \"**Minor notes on the typos I found**\", \"Line 146, $H$ is probably the \\\"negative log-likelihood\\\" instead of just \\\"likelihood\\\". I stress this because it is important whether we want to minimize or maximize $H$.\", \"Line 189, \\\"Spectrum\\\" of the covariance matrix, instead of \\\"Spectral\\\". This typo is present in many places throughout the paper (for eg, Fig 1(b)), would appreciate if it can be cleaned up.\", \"Line 214, \\\"Note that\\\" instead of \\\"Noted that\\\".\", \"Line 224, $\\\\lambda \\\\in \\\\mathbb{R}_{+}$ maybe instead of $\\\\lambda \\\\in \\\\mathbb{R}$? I *assume* practitioners use a non-negative regularization parameter.\"], \"questions\": [\"What is the main challenge that the replica method allows you to get around? Would be nice to provide some insight into this. Or perhaps a toy example of the usage of replica method demonstrating what are its benefits and why is it used in this particular context.\", \"What is the main reason to introduce the metric $\\\\varepsilon_g$ in Eq (9)? How is it different than the distortion $D$?\", \"In Figure 3, what does \\\"overlearning\\\" mean? From the description in section 6.2, it seems it is the same as overfitting? If so, would be good to name it that way instead of introducing a new term.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"We sincerely appreciate your valuable feedback and the opportunity to address your concerns.\\n\\nOne of the central claims of our study is the existence of \\\"posterior collapse\\\" that cannot be avoided by increasing the dataset size. \\nIn response to your comment, we have verified whether this phenomenon holds for any given $k = k^{\\\\ast}$. \\nRevised manuscript shows that the \\\"inevitable posterior collapse\\\" occurs for arbitrary $k = k^{\\\\ast}$. Furthermore, the threshold condition is consistent with the case of $k = k^{\\\\ast} = 1$, where $\\\\beta_{\\\\mathrm{VAE}} = \\\\rho + \\\\eta$. \\n**This result has been incorporated into the revised version of our manuscript (Lines 457--460), with detailed proof provided in the Appendix (Lines 1134--1167).**\\n\\nAdditionally, we would like to clarify the experimental settings for MNIST, FashionMNIST, and CIFAR10.\", \"as_detailed_in_appendix_e\": \"EXPERIMENT DETAILS, we did not use models with a one-dimensional latent variable. Specifically, for MNIST and FashionMNIST, we utilized models with two-dimensional latent variables, while for CIFAR10, models with 128-dimensional latent variables were employed.\\n**Even with these configurations, our experiments consistently revealed regions where the FID score does not improve with increasing dataset size, aligning well with the predictions of our theoretical analysis.**\\n\\nWe hope these clarifications address your concerns effectively. 
We believe this strengthens the connection between our theoretical insights and practical applications of VAEs in real-world engineering contexts.\\n\\nThank you again for your constructive feedback and support.\"}", "{\"title\": \"Response\", \"comment\": \"Dear Reviewer NKDk, Xf1j, 1vWD,\\n\\nThank you for the time and effort you have dedicated to reviewing our work. As the deadline approaches with only few hours remaining, we kindly request your feedback on our rebuttal.\\n\\nBest,\\nThe authors\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your thoughtful and detailed feedback.\\nWe appreciate the opportunity to clarify the points you raised and address your concerns regarding the theoretical and experimental contributions of our work.\\n\\n## **On the Gap Between Theoretical Analysis and Numerical Experiments**\\n\\nWe are somewhat uncertain about your comment and would greatly appreciate it if you could clarify which specific aspects of the gap between the numerical experiments and theoretical analysis you are referring to. \\nWe would like to ensure that there is no gap regarding the alignment between our theoretical analysis and numerical experiments. \\nAs demonstrated in Section 6.5, the signal recovery error behavior derived theoretically matches the qualitative trends observed in our numerical experiments. \\nFurthermore, as you referenced [1], we note that the qualitative behavior of RD curves in [1] aligns with our theoretical results.\\n\\nIf your concern lies in the lack of empirical verification of RD curves in real-world scenarios similar to [1], we are open to including additional numerical validation in the Camera Ready version. \\nWhile we acknowledge the importance of extending our analysis to more complex deep learning models, such a task remains inherently challenging in current learning theory, even for supervised learning. \\n**As a first step, our focus on the data-dependence of RD curves and signal recovery error using a minimal model (Linear VAE) provides a foundational contribution that we believe is crucial for advancing this line of research.** We hope you will consider the value of this incremental yet significant step toward understanding the principles governing VAEs.\\n\\n- [1] Bozkurt Alican et al., Rate-regularization and generalization in VAEs, arXiv preprint arXiv:1911.04594 (2019).\\n\\n## **On the Importance of Figure 3**\\n\\nWe strongly believe that Figure 3 plays a central role in elucidating the behavior of Linear VAEs and represents the core contribution of our paper. \\nThis phase diagram offers a complete description of Linear VAE behavior in the sample complexity $\\\\alpha$ and $\\\\beta_{\\\\mathrm{VAE}}$ space. Importantly, it goes beyond prior numerical studies, which largely emphasize posterior collapse is related to $R \\\\approx 0$, by revealing that regions inducing posterior collapse correspond to distinct phases\\u2014 the *Overfitting Phase* and the *Regularized Phase*, as explained in Section 6.2.\\n\\nMoreover, Figure 3 identifies the sharp boundaries between these phases in the $\\\\beta_{\\\\mathrm{VAE}}$\\u2013$\\\\alpha$ space, providing insights that we believe are non-trivial. For instance, the origins of posterior collapse differ fundamentally between these two phases, which offers a deeper understanding of the phenomenon. 
This structured characterization advances the discussion around VAEs beyond empirical observations, contributing new insights that we hope the community will find valuable.\\n\\n## **On the Long Plateau Phenomenon**\\n\\nWe apologize for any confusion regarding the \\\"long plateau\\\" discussion. **The term \\\"long plateau\\\" is specifically used for the signal recovery error $\\\\varepsilon_{g}$, not the reconstruction error (distortion)**. The signal recovery error, as explained in Section 6.2, measures the discrepancies between the decoder's learned and true data distributions, thus distinguishing it from distortion.\\n\\nAs shown in Figure 2 (Middle), for large $\\\\beta_{\\\\mathrm{VAE}}$ values (e.g., $\\\\beta_{\\\\mathrm{VAE}} = 1.8, 2.0$), $\\\\varepsilon_{g}$ exhibits a long plateau at $\\\\varepsilon_{g} \\\\approx 1$. \\nFurthermore, in Figure 3, we identify that as $\\\\alpha \\\\to \\\\infty$, the boundary between the *Regularized Phase* and the *Learning Phase* asymptotically approaches $\\\\beta_{\\\\mathrm{VAE}} = \\\\rho + \\\\eta$.\", \"this_asymptotic_behavior_explains_the_long_plateau_phenomenon\": \"as $\\\\beta_{\\\\mathrm{VAE}}$ approaches this boundary from below, the plateau length increases indefinitely.\\n**This finding provides not only a theoretical understanding of posterior collapse for large $\\\\beta_{\\\\mathrm{VAE}}$, but also engineering insights for designing VAEs, as highlighted in Section 7 (Lines 518\\u2013532).**\\n\\n\\nTo better address the relevance of our findings to engineering applications, we have generalized one of our claims in the revised version (Lines 456\\u2013460). \\nWhile we understand your emphasis on practical applications, we ask you to consider our contributions in the context of the ICLR community\\u2019s growing interest in theoretical perspectives on representation learning. Our work aims to provide a long-term theoretical foundation for such applications, and we hope this alignment with the community's objectives will lead you to reevaluate your score.\\n\\nYour constructive comments have been invaluable in refining our paper, and we hope this response provides clarity and adequately addresses your concerns. Thank you for your thoughtful feedback.\"}", "{\"title\": \"Response (1)\", \"comment\": \"We sincerely thank you for your thoughtful and detailed review.\\nWe are especially grateful for your recognition of our contributions, including identifying the double-descent phenomenon, the inevitable posterior collapse, and their practical implications.\\nIn the following, we address your comments and questions in detail.\\n\\n## **Replica Method**\\n\\nAs noted in the RELATED WORK section, the replica method is a well-established tool for theoretical analyses in high-dimensional statistics and has been successfully used to explain various phenomena in machine learning [1\\u20133], including denoising autoencoders [4] and autoencoders [5].\\nRecent studies have increasingly demonstrated its mathematical rigor, as discussed in the revised Related Work section (High-Dimensional Asymptotics from the Replica Method, Line 113-117).\\n\\nIn our study, the replica method was essential for deriving sharp predictions regarding the dataset-size dependence of the signal recovery error and RD curve behavior. **These theoretical predictions were validated through numerical experiments with $d=5{,}000$**. 
As shown in Figure 2, the strong agreement between the numerical results for finite $d=5{,}000$ and the predictions from the replica method provides strong evidence of its validity in this context. This alignment also highlights the robustness of the analysis in the high-dimensional limit where $d \\\\to +\\\\infty$ and $n \\\\to +\\\\infty$ with a fixed ratio $\\\\alpha = d/n$ for the regime where $d$ and $n$ are finite.\\n\\n- [1] Lenka Zdeborova, Insights from exactly solvable high-dimensional models, ICLR2023\\n- [2] Cory Stephenson et al., On the geometry of generalization and memorization in deep neural networks, ICLR2021\\n- [3] Federica Gerace et al., Generalisation error in learning with random features and the hidden manifold model, ICML 2020\\n- [4] Hugo Cui and Lenka Zdeborova, High-dimensional Asymptotics of Denoising Autoencoders, NeurIPS2023\\n- [5] Maria Refinetti and Sebastian Goldt, The dynamics of representation learning in shallow, non-linear autoencoders, ICML2022\\n\\n## **On the Simple Setting**\\n\\nWe appreciate your observation regarding the simplicity of our setting ($k=1$).\\n**Our primary contribution is the derivation of a general formula (Claim 4.2), enabling a sharp analysis of how various metrics in linear VAEs- such as rate $R$, distortion $D$, signal recovery error $\\\\varepsilon_{g}$, $D_{\\\\mathrm{KL}}[p(x) \\\\| p_{\\\\mathrm{data}}(x)]$**.\\nWhile we focus on the simplest case with $k=k^{\\\\ast}=1$, this minimal case still captures critical phenomena such as **nevitable posterior collapse** and the **double-descent behavior**. \\nSimilar minimal settings with $k=1$, have been the focus of analyses on denoising autoencoders [4].\\nIn situations where $k^{\\\\ast} > 1$, as you pointed out, a phenomenon similar to the *progressive learning* of strong spikes, observed in the analysis of autoencoders [5], may emerge. 
A detailed analysis of this scenario is left as part of our future work.\\n\\nAdditionally, if you could share specific references for [1], [2], and [4], it would significantly enhance our understanding.\\n\\n## **Optimal $\\\\beta$**\\n\\nAs mentioned in Line 376-405 of the manuscript, the optimal $\\\\beta_{\\\\mathrm{VAE}}$ depends on the sample complexity $\\\\alpha$, which behaves quantitatively differently from the ridge regularization strength $\\\\lambda \\\\in \\\\mathbb{R}_{+}$.\\n\\nFigure 2 (Left) demonstrates the effect of varying the regularization strength $\\\\lambda$, while Figure 2 (Middle) shows the effect of varying $\\\\beta_{\\\\mathrm{VAE}}$.\\n**This $\\\\alpha$-dependence of optimal $\\\\beta_{\\\\mathrm{VAE}}$ is a key insight for practitioners**.\\nWhen $\\\\alpha \\\\to \\\\infty$, setting the optimal $\\\\beta_{\\\\mathrm{VAE}}=\\\\eta$ still results in the double-descent phenomenon.\\n\\n## **Minor notes**\\n\\nWe sincerely appreciate your attention to detail and have made the necessary corrections in the revised manuscript.\"}", "{\"title\": \"Thanks\", \"comment\": \"Dear Reviewer NKDk, Xf1j, 1vWD,\\n\\nWe greatly appreciate the time and effort you have dedicated to reviewing our work.\\nAs the deadline approaches with only few hours remaining, we sincerely request your feedback on our rebuttal.\\nPlease inform us if any aspect of our explanation remains unclear.\\n\\nWe would greatly appreciate your confirmation on whether your concerns have been adequately addressed.\\nIf the issues are resolved, we would appreciate your reevaluating this study.\\nWe will respond promptly before the discussion deadline if further clarification is required.\\n\\nBest,\\n\\nThe authors\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response and valuable feedback.\\nWe appreciate the opportunity to clarify and address the points you raised. Below, we respond to your concerns in detail and outline the steps we have taken to improve our submission based on your comments.\\n\\n## **Regarding Identifiability and Ill-Posedness**\\n\\nWe understand that you are raising concerns about the *identifiability* of the statistical model in our problem setting. \\nHowever, we would greatly appreciate it if you could elaborate on the specific reasons why you believe our problem setting might be ill-posed.\\n\\nFor instance, while it is true that in cases where $k^{\\\\ast}<k$, multiple candidate models of Linear VAEs can reproduce the statistical properties of the data, would any of these models still result in successful training? \\nThis suggests that the lack of strict *identifiability* may not hinder the overall utility of the setting. \\nIf there is a specific scenario or example where this ambiguity leads to a meaningful failure, we would be eager to analyze it and incorporate the findings into the Camera-Ready version.\\n\\nFurthermore, **we are unclear about your reference to dividing by $\\\\sqrt{\\\\theta}$.** If you could provide more details or clarify this point, we would be happy to include a corresponding explanation or additional analysis in the revised version.\\n\\nFinally, in the specific setting where $k=k^{\\\\ast}=1$, *identifiability issues* might not arise. 
We believe that the results we demonstrated\\u2014such as the inevitable posterior collapse and the double descent phenomenon in VAEs\\u2014are meaningful contributions that remain valid regardless of identifiability concerns in higher dimensions.\\n\\nWe appreciate that you acknowledged our response to your initial comment on the intermediate regime and that it has adequately addressed your concern. **Building upon this, we have further extended one of our claims to the $k>1$ case, as noted in Lines 456\\u2013460 of the revised manuscript.** This addition represents our efforts to strengthen the paper based on your valuable feedback, and we hope this extension provides further clarity and rigor to our contributions.\\n\\nIn light of the clarifications above, the additional analyses we conducted, and the revisions we have made to the manuscript, we kindly request that you reconsider your score. We sincerely hope that these efforts address your concerns.\"}", "{\"title\": \"Response to Weaknesses\", \"comment\": \"We sincerely appreciate your thoughtful review and constructive suggestions.\\nYour comments have been instrumental in helping us identify opportunities to improve the clarity, coherence, and practical relevance of our manuscript. Below, we provide detailed responses to each of your points.\\n\\n## **Weakness: Coherence of Findings**\\n\\nIf we understood your concern correctly, it relates to the possibility that the results in Section 6.5, where we compare theoretical predictions with experiments on real-world data and nonlinear VAEs, might appear loosely connected.\\nYou are correct that our theoretical results do not guarantee exact alignment with real-world data or more complex models. **However, our main contribution of this paper is the derivation of a general formula, presented in Claim 4.2, for analyzing the dependence of sample complexity $\\\\alpha$ on VAEs**.\\n\\nThe previous studies in the high-dimensional limit where $d \\\\to +\\\\infty$ and $n \\\\to +\\\\infty$ with a fixed ratio $\\\\alpha = d/n$, cited in the main text (Lines 499\\u2013501 in Section 6.5, Line 196-201 in Section 4) and [1], that examines the dependence of sample complexity $\\\\alpha$ typically focus only on MNIST and CIFAR10 datasets. These studies support the Gaussian universality [1], which suggests that the data generation process of a Gaussian model can explain real-data phenomena.\\nAccordingly, we employed this setting and considerations for our numerical experiments on MNSIT, FashionMNIST, and CIFAR10. \\nWhile the studies in the high-dimensional limit where $d \\\\to +\\\\infty$ and $n \\\\to +\\\\infty$ with a fixed ratio $\\\\alpha = d/n$ needs to verify theoretical consistency across more comprehensive datasets, this lies beyond the scope of the current work. Our main contribution is theoretical analysis, and we consider more exhaustive empirical validation a significant step in future work.\\n\\n- [1] Maria Refinetti and Sebastian Goldt, The dynamics of representation learning in shallow, non-linear autoencoders, ICML2022\\n\\n## **Connecting the Theoretical Findings**\\n\\nWe agree that posterior collapse for high $\\\\beta_{\\\\mathrm{VAE}}$ might not be surprising.\\n**However, as Reviewer Xf1j noted, \\\"Figure 2 also shows a long plateau in the reconstruction error for large values of $\\\\beta$ . 
This is backed by Claim 6.1 (in the large $\\\\beta$ limit) and lines 428-430 provide concrete guidance to practitioners about the risks of a large $\\\\beta$ when training.\\\"**\\nIn response to your suggestion, **we revised CONCLUSION section (Line 516-529) to explicitly summarize the practical implications of our findings**, emphasizing the tuning of $\\\\beta_{\\\\mathrm{VAE}}$ for engineering applications of VAEs. This revision links our theoretical insights to real-world applications, bridging the gap between abstract findings and practical implementation.\\n\\n## **Signal Recovery Error vs. Reconstruction Error**\\n\\nWe have clarified this in the revised manuscript (Lines 244\\u2013250).\", \"to_summarize\": [\"**Distortion (Reconstruction Error)**: Measures how well the data can be reconstructed after compression into and decoding from the latent space, reflecting the fidelity of the VAE\\u2019s encoder-decoder process.\", \"**Signal Recovery Error**: Focuses on the decoder alone and evaluates how well the latent variables $c$ are decoded into the data space. It provides a measure of how closely the data generated by the VAE matches the true data distribution, independent of the encoder.\", \"We believe these revisions address your concerns and significantly improve the coherence, clarity, and practical impact of our paper. We kindly ask you to reconsider your evaluation of our work in light of these improvements.\"]}", "{\"comment\": \"Thank you for the updates. Even though I still believe the current manuscript can be written better and clearer, the contributions are indeed valuable. I will change my score from 6 -> 8.\\n\\n\\nBut I do understand the concerns from other reviewers, and I hope they can be addressed well.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for the detailed clarifications you provided.\\n\\nRegarding my comment on the $k = k^\\\\star = 1$ (simple setting), I think there is a small misunderstanding. I did not mean [1,2,4] as references to other papers. What I meant was you should also consider providing experimental results for $k = k^\\\\star = 2,4$ also instead of just fixing latent dimension to $1$.\\n\\nOverall, I will retain my score.\"}", "{\"summary\": \"This paper explores the influence of the \\u03b2 hyperparameter in a \\u03b2-VAE, particularly regarding its impact on the posterior collapse phenomenon and the Rate-Distortion (RD) curve, from an information-theoretic perspective. Using a Spiked Covariance Model (SCM), the authors identify three phases (regularized, learning, overlearning) based on \\u03b2 and sample complexity \\u03b1 = n/d . The results suggest that high values of \\u03b2 inevitably lead to posterior collapse, regardless of data volume, and propose that the optimal \\u03b2 correlates with noise\\nstrength \\u02c6\\u03b7 in high-complexity settings. 
Experiments on MNIST and Fashion-MNIST datasets partially validate these claims.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022 Information-Theoretic Perspective: A thoughtful analysis of \\u03b2-VAE\\u2019s Rate-Distortion properties.\\n\\u2022 Insight on Hyperparameter Tuning: Suggests an optimal \\u03b2 based on sample complexity, a novel angle.\\n\\u2022 Focus on an Underexplored Hyperparameter: Thorough analysis of \\u03b2\\u2019s role in posterior collapse.\", \"weaknesses\": \"\\u2022 Over-Simplified Model and Limited Scope: Most analysis centers on a linear VAE with low latent dimensions (e.g., dimension 1), which reduces the generalizability to real-world settings. Minimal insight into non-asymptotic or intermediate regimes.\\n\\u2022 Limited Novelty: The core claim that \\u201chigh \\u03b2 is detrimental\\u201d is already known. While it validates an intuition, the work lacks a new method or paradigm. Moreover, it is unclear that the posterior collapse phenomenon occurs \\u201doften\\u201d as stated in the abstract, especially when the latent dimension increases.\\n\\u2022 Minimal Experiments: Only FID scores on two datasets (MNIST, Fashion-MNIST), without broader validation on diverse or complex data,\\nand lacks sensitivity analysis (for small changes on the value).\\n\\u2022 Identifiability of Ground Truth Model: As is, the ground truth model does not seem to be identifiable (it is when dividing by \\u221a\\u03b8). This hinders the reliability of the analysis.\", \"questions\": \"1. Could you explore the performance of this model in transitional or non-asymptotic regimes to reflect real-world data scenarios?\\n2. Is your analysis still valid for datasets with complex spectrums?\\n3. Could you clarify the derivation in Equation (9), especially the expectation notation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (1)\", \"comment\": \"We sincerely thank you for your constructive feedback.\\nBelow, we address each of the weaknesses and questions you raised to clarify our contributions and propose improvements to the manuscript.\\n\\n## **Over-Simplified Model and Limited Scope**\\n\\nWe appreciate your concern regarding the simplicity of our model and its scope. \\nAs discussed in the RELATED WORK section, Linear VAE is a well-established and effective model for gaining insight into more complex VAEs. \\nIndeed, **[1] provided theoretical insights into Linear VAEs, which have led to the development of algorithms for deep models.**\\nHowever, despite their foundational importance, the dataset-size dependence of the RD curve and generalization performance in Linear VAEs has not been addressed before. \\n**Our primary contribution is the analysis of sample complexity dependence ($\\\\alpha$) in Linear VAEs, as noted by Reviewer NKDk, *\\\"this is the first paper that studied RD curves in VAEs as a function of dataset size and data dimensions.\\\"**\\nOur findings are not only theoretically novel but also practically relevant. \\nCONCLUSION section (Line 516-529) in our revised manuscript emphasizes guidance for $\\\\beta_{\\\\mathrm{VAE}}$-tuning derived from our analysis. 
This robustness is supported by experiments in Section 6.5 on MNIST, Fashion-MNIST, and CIFAR-10 (Appendix F), where the predicted behavior of $\\\\beta_{\\\\mathrm{VAE}}$ qualitatively aligns with real-world data.\\n\\n## **Regarding the simple setting $(k=1)$** \\n\\nOur primary contribution is the derivation of a general formula (Claim 5.2), enabling a sharp analysis of the dataset-size dependence of various metrics in linear VAEs- such as rate $R$, distortion $D$, signal recovery error $\\\\varepsilon_{g}$, $D_{\\\\mathrm{KL}}[p(x) \\\\| p_{\\\\mathrm{data}}(x)]$.\\nWhile we focus on the simplest case with $k=k^{\\\\ast}=1$, this minimal case still captures critical phenomena such as **inevitable posterior collapse** and the **double-descent behavior**. Similar minimal settings with $k=1$ have been the focus of analyses on denoising autoencoders [2].\\nWhile extending the analysis to $k>1$ from the general formula (Claim 5.2) would yield deeper insights into disentanglement, we leave this exploration as future work.\\n\\n- [1] Juhan Bae et al., Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve, ICLR2023\\n- [2] Hugo Cui and Lenka Zdeborova, High-dimensional Asymptotics of Denoising Autoencoders, NeurIPS2023\\n\\n## **Minimal Insight into Non-Asymptotic or Intermediate Regimes**\\n\\nThank you for this insightful observation. Investigating the relevance of asymptotic behavior in intermediate regimes is indeed important. \\nTo address this, **we validated our theoretical predictions using numerical experiments on a Linear VAE with $d=5{,}000$, showing strong agreement with our asymptotic analysis (Figure 2)**. Additionally, qualitative consistency with real-world datasets such as MNIST, Fashion MNIST, and CIFAR10 further supports the robustness of our findings.\\nWhile our results suggest applicability to non-asymptotic regimes, the extent of this applicability\\u2014particularly for smaller $d$\\u2014is an interesting question for future work.\\n\\n## **Limited Novelty**\\n\\nWe agree that posterior collapse for high $\\\\beta_{\\\\mathrm{VAE}}$ might not be surprising.\\n**However, as Reviewer Xf1j noted, \\\"Figure 2 also shows a long plateau in the reconstruction error for large values of $\\\\beta$ . This is backed by Claim 6.1 (in the large $\\\\beta$ limit) and lines 428-430 provide concrete guidance to practitioners about the risks of a large $\\\\beta$ when training.\\\"**\\nAdditionally, identifying a double-descent phenomenon in VAEs, akin to supervised learning, represents a novel theoretical contribution that enhances our understanding of learning dynamics. Indeed, **As Reviewer NKDk highlighted, \\\"This topic is a valuable topic of study and will indeed be of interest to the ICLR community.\\\"**\\n\\nIn response to your suggestion, we have **revised CONCLUSION section (Line 516-529)** to explicitly summarize the practical implications of our findings, focusing on the practical tuning of $\\\\beta_{\\\\mathrm{VAE}}$ for engineering applications of VAEs. 
This revision connects our theoretical insights to their real-world relevance, bridging the gap between abstract findings and practical usability.\"}", "{\"summary\": \"The paper investigates several aspects of beta-linear-VAEs, including posterior collapse, the effect of different values for the beta parameters, the effect of training dataset size.\\nIt also introduced two summary statistics, and using these statistics, the authors derived a phase diagram for VAE learning in terms of the beta values and the relative scale of the training dataset size.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper studied an important aspect of VAEs and how the different parameters and choices can affect the performance.\", \"The empirical findings of the relation between generalisation error and the sample complexity as well as the beta parameter is interesting.\"], \"weaknesses\": \"The paper discussed a list of different behaviours of VAEs, but it feels like they are rather loosely connected findings (i.e., the subsections in Section 6).\\n\\nThe findings themselves are interesting, but it is not surprising that changing one variable, such as beta or the number of training data, will lead to various changes in aspects like RD curves, posterior collapse. \\n\\nTherefore, I believe a more coherent story is important to connect the dots and make these findings more insightful.\", \"questions\": \"1. The signal recovery error feels like a definition of reconstruction error, and a distortion metric. What\\u2019s the difference between the signal recovery error and the distortion (D) of the RD curve in the paper?\\n2. Typo: Page 8 line 397: \\u201csummary statics\\u201d -> \\u201csummary statistics\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks\", \"comment\": \"Dear Reviewer NKDk\\n\\nWe greatly appreciate the time and effort you have dedicated to reviewing our work. As the deadline approaches with only one day remaining, we sincerely request your feedback on our rebuttal. Please inform us if any aspect of our explanation remains unclear.\\n\\nWe would greatly appreciate your confirmation on whether your concerns have been adequately addressed. If the issues are resolved, we would appreciate your reevaluating this study. We will respond promptly before the discussion deadline if further clarification is required.\\n\\nBest,\\n\\nThe authors\"}", "{\"metareview\": \"In the paper, the authors rigorously examine the factors contributing to posterior collapse in variational autoencoders (VAEs), focusing on the influence of the hyperparameter beta and data size in VAEs.\\n\\nWhile there is a consensus among the reviewers that the theories are sound and of interest to ICLR, there are major concerns about the impact and scope of the study, including (1) the model being studied is linear VAE with low latent dimensions. While it is understandable that the simple setting is useful for obtaining useful theoretical insights, it is unclear how the theoretical findings can be generalized to the general settings of VAEs. (2) Several claims in the paper, such as the bad effect of the high value of the hyperparameter beta, had been studied before, which limits the novelty of the current theories in the paper. 
(3) The experiments are quite poor and minimal, which limits the scope of the theories in real-world settings.\\n\\nGiven the above major concerns, I recommend rejecting the paper at the current stage. I believe that the paper will be stronger after incorporating the feedback and suggestions of the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the meta-review.\"}" ] }
Bd2wAQZxJW
Protein Sequence Domain Annotation using Language Models
[ "Arpan Sarkar", "Kumaresh Krishnan", "Sean R Eddy" ]
Protein function inference relies on annotating protein domains via sequence similarity, often modeled through profile Hidden Markov Models (profile HMMs), which capture evolutionary diversity within related domains. However, profile HMMs make strong simplifying independence assumptions when modeling residues in a sequence. Here, we introduce PSALM (Protein Sequence Annotation using Language Models), a hierarchical approach that relaxes these assumptions and uses representations of protein sequences learned by protein language models to enable high-sensitivity, high-specificity residue-level protein sequence annotation. We also develop the Multi-Domain Protein Homology Benchmark (MDPH-Bench), a benchmark for protein sequence domain annotation, where training and test sequences have been rigorously split to share no similarity between any of their domains at a given threshold of sequence identity. Prior benchmarks, which split one domain family at a time, do not support methods for annotating multi-domain proteins, where training and test sequences need to have multiple domains from different families. We validate PSALM's performance on MDPH-Bench and highlight PSALM as a promising alternative to HMMER, a state-of-the-art profile HMM-based method, for protein sequence annotation.
[ "Protein homology benchmark", "protein language models", "protein sequence annotation", "homology search", "protein machine learning", "protein function prediction" ]
https://openreview.net/pdf?id=Bd2wAQZxJW
https://openreview.net/forum?id=Bd2wAQZxJW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "bjuTMMYmGS", "adOMDPmsEC", "M3tjSBIQ7R", "M1eLf2pxuH", "AV4yZ6IwvL" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730663070533, 1730715681971, 1730172193844, 1732635328811, 1730645829840 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3799/Reviewer_yrNk" ], [ "ICLR.cc/2025/Conference/Submission3799/Reviewer_W8wY" ], [ "ICLR.cc/2025/Conference/Submission3799/Reviewer_WVA9" ], [ "ICLR.cc/2025/Conference/Submission3799/Authors" ], [ "ICLR.cc/2025/Conference/Submission3799/Reviewer_uAsi" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents PSALM (Protein Sequence Annotation using Language Models), a novel approach that leverages embeddings from the pretrained ESM-2 language model to improve protein domain annotation at a residue level. PSALM introduces a hierarchical clan-family prediction structure, which first assigns a broader clan label before refining it to a specific family label for each residue. This setup aims to enhance sensitivity and specificity in detecting protein domains, especially in challenging multi-domain proteins and distantly related homologs.\\n\\nTo evaluate PSALM, the authors created MDPH-Bench, a benchmark designed for testing multi-domain protein annotation with strict percent identity (PID) splits between training and test sets. This benchmark allows for performance evaluation across different levels of evolutionary similarity. The paper compares PSALM to HMMER, a standard HMM-based tool for domain annotation, and demonstrates that PSALM achieves higher sensitivity and specificity, particularly in challenging, low-PID ranges.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper introduces a novel application of pLMs (ESM-2 embeddings), in a hierarchical clan-family prediction framework for domain annotation, especially for multi-domain and low-similarity sequences.\\n\\nAdditionally, the introduction of MDPH-Bench adds a unique contribution that can be valuable for evaluating future models in this area.\\nThe paper demonstrates technical rigor, with detailed evaluations across different PID ranges to show the model\\u2019s strengths in various contexts. The authors also include ablation studies to highlight the specific contribution of ESM-2 embeddings and clan-level learning, adding depth to the analysis.\\n\\nThe work has practical implications for protein domain annotation, an important task in bioinformatics. By addressing limitations of HMM-based methods, PSALM and MDPH-Bench add tools that could support both research and applied work in areas like functional genomics and evolutionary studies.\", \"weaknesses\": \"I have mixed feelings about this paper, as it introduces several interesting ideas. However, due to issues with clarity, not being fully self-contained, and its potential lack of fit for this venue, I am inclined to not give it an \\\"accept\\\" score. If these comments are addressed, I would be willing to raise my score.\\n\\nThe paper is aimed at a bioinformatics audience with substantial familiarity with domain annotation and protein language models. For ICLR, a machine learning-focused conference with a broad readership, the presentation lacks accessibility.\\nThe paper needs to be self-contained, providing clear explanations of key terms and methods to ensure that a general ML audience can understand it without additional background knowledge. 
Foundational concepts, such as HMMER\\u2019s \\u201csimplifying assumptions,\\u201d are not explained in enough detail, sometimes missing entirely, leaving gaps that could hinder comprehension. HMMER is used both as a baseline and ground truth annotation and this could lead to confusion, especially for readers outside of bioinformatics. This distinction should be made clearer. Even for bioinformatics, this is a narrow area, and it should be properly explained.\\nWhile PSALM shows a creative application of ESM-2 embeddings for domain annotation, the paper does not contribute new methods or findings to representation learning itself, which is central to ICLR\\u2019s focus. \\n\\nAdditionally, the paper\\u2019s high computational requirements are not addressed with any comparisons in terms of runtime or resource efficiency, which would be essential to assess its practicality for large-scale applications. Given PSALM's reliance on ESM-2 embeddings and BiLSTM layers, understanding how its performance gains weigh against the increased computational cost would make the model\\u2019s contributions clearer for readers considering its application in real-world settings.\\n\\n**Other comments:**\\n\\n- Some figures and tables, especially Figure 2 which is the main figure explaining the method, are not referenced in the main text. Figure 2 can be used in the method explanation to more accurately lead the reader in understanding the method. \\n- There is a repeated sentence in the 2nd paragraph of the introduction.\\n- In the introduction, the authors cite some references when discussing protein function prediction. While these are valid references, they seem outdated considering the recent advancements in this field. \\n- In the results section, the authors mention that \\\"biologists prefer domain-level annotation for many reasons.\\\" While this may be accurate, statements like this should be properly referenced, especially given the interdisciplinary audience at this venue.\", \"questions\": \"\\u2022\\tThe use of a BiLSTM for PSALM\\u2019s architecture feels somewhat under-motivated. The authors mention that this choice was made deliberately to introduce as few changes as possible from profile HMMs, allowing observed improvements to be attributed primarily to the PLM rather than architectural differences. However, HMMs are inherently linear models, whereas BiLSTMs are nonlinear, which could introduce significant architectural differences that may influence the results. Additionally, given that ESM-2 embeddings are already position-aware, wouldn\\u2019t a simpler architecture, such as a feed-forward neural network for clan and family prediction, offer a more direct evaluation of the embeddings' impact? Could the authors clarify the rationale for choosing a BiLSTM over a simpler model and discuss any experiments or considerations around this choice?\\n\\n\\u2022\\tThe authors discuss \\\"scalability\\\" by evaluating performance across different ESM-2 model sizes, increasing the architecture's number of trainable parameters. However, scalability typically refers to a model's ability to handle larger datasets or tasks efficiently, without a proportional increase in computational resources. 
In this case, expanding model size enhances model capacity rather than scalability, as it increases computational demand rather than demonstrating efficient scaling.\\nCould the authors clarify this terminology and, if appropriate, provide resource requirements and prediction times for each model size on different test set sizes? This would help evaluate PSALM\\u2019s practicality in real-world applications and give a clearer sense of its performance trade-offs across model configurations.\\n\\n\\u2022\\tIn Table 4, HMMER shows an unusual performance pattern, performing worse in high-PID test sets and achieving its best results in remote homolog test sets (lower PID). This is counterintuitive, as one would typically expect HMMER to perform better with higher PID values, given that profile HMMs are generally more effective on closely related sequences. Could the authors elaborate on this outcome and provide intuition or hypotheses as to why HMMER might exhibit better performance on lower PID test sets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study introduces PSALM (Protein Sequence Annotation using Language Models) for protein domain annotation and develops the Multi-Domain Protein Homology Benchmark (MDPH-Bench) as a benchmark for this task. The proposed method was validated on MDPH-Bench, and the authors present PSALM as a promising alternative to HMMER for protein sequence annotation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The layout is clear, and the presentation is well-organized.\", \"weaknesses\": \"1. It appears that the authors may not be fully acquainted with the protein domain annotation task. The paper would benefit from a more comprehensive literature review and a discussion of additional baseline methods.\\n2. Protein domain annotation is crucial for understanding protein function. In addition to HMMER, there are many existing methods, such as LSTM-based and structure-based approaches. The authors should review these methods and provide a comparative analysis.\\n3. Since protein domain annotation is a classical task, the detailed problem formulation may not be necessary.\", \"questions\": \"As shown in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The described study includes the training of a residue-level domain prediction model, PSALM, built on the pre-trained ESM-2 pLM. In addition, the authors provide a benchmark dataset resource for future evaluation of other models in this task, MDPH-Bench. The model itself leverages the hierarchical classification of PFAM domains which are characterized at the clan and family level. The PSALM architecture consists of two prediction tasks per protein residue: 1) the probability of belonging to a clan and 2) the probability of a belonging to a family. The MDPH-Bench is a curated set of domains from PFAM that restricts the pairwise sequence percent identity to <25% with some domain and sequence exclusion criteria.\\n\\nThe main contribution of the work is a residue level domain scanner that provides probability over a set of 560K domains. It can predict multiple domains in a sequence, like HMM-based methods, and unlike prior work described. 
The application provides a useful tool for researchers that allows for leveraging a pre-trained pLM for a sequence-based search. The experiments demonstrate that at low PID, PSALM can identify domains that current approaches cannot, and the authors provide two examples of sequences annotated by their approach and potential advantages compared to an HMM-based method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is generally clear in presentation and easy to follow. It addresses a clear need in the community and provides an additional resource in the form of a benchmark dataset for the task.\", \"weaknesses\": \"While the application is described as novel, the utilization of pre-trained pLMs for transfer learning of sequence-based tasks and annotation is not novel and modeling approach and architecture are not novel either as cited in the manuscript.\\n\\nOf most concern, is the inclusion of clan level information in training makes this modeling approach dependent on alignment based annotation and limits the ultimate comparison to HMM-based ground truth annotation of family. The assignment of annotation by profile-HMM is a statistical comparison and dependent on the size of the database searched (e-value) with a pre-defined threshold, in this case < 0.001. When profile-HMM libraries are \\\"trained\\\" they are not provided clan level information but in the case of PSALM that information is provided to the model at training. It would be more accurate to compare PSALM against a database of clans and then assign ground truth families using a profile-HMM search against families in that clan. The authors attempt to address this point with the experiment where they trained PSALM_F without clan level information, however the results here do no show superior performance for the two-task PSALM base architecture (see Table 5) and PID 0-40% for fixed FPR of 0.01 (Table 4). Further discussion of the inclusion of clan in training the family model is warranted. \\n\\nFinally, there is no evaluation of the model outside of the per-residue performance, which diminishes the significance of this work. In Section 6, it is unclear how these two sequences were chosen for evaluation. Across the whole test set, how many domains were predicted where InterPro did not predict them? Additionally, it would be useful to see what domains the model is recovering that HMMs do not. Also, what position are the residues that the model does recover but the HMMs do not?\", \"questions\": \"1) In Figure 2, why does it appear that the true clan vector (z) is being passed into the second LSTM, based on Equation (3) is it not z_hat that is passed to the second LSTM?\\n2) I struggled to understand how the MDPH-Bench was constructed. It would be useful to have a flow diagram that explains the inclusion/exclusion criteria for sequences and then the division into PID sets. \\n3) How was the PID of 25% decided? It would be good to see some evidence for why this is the threshold. Maybe the authors could try some text based method to show that the overlap in domain annotation decreases with PID threshold. \\n4) Can you justify why the FPR is fixed at 0.01 for the experiment with results reported in Table 4? 
It looks like HMMER achieves optimal F1 with FPR in the 0.03-0.06 range (Table 3) and that HMMER* does have superior performance in that range (Table 5).\\n5) Figure 3B shows some considerable mixed annotation of the region between the two N terminal domains, is this a common phenomenon?\\n6) By selecting the clan and domain using argmax(), it allows for classification when probabilities are low if there are multiple classes with density, have the authors thought about this? For example, in Figure 3B there are some residues with probability < 0.5 for the assignment.\", \"minor_comments\": \"-duplicate language in the introduction (ll.42-48)\\n-the acronym PID is not explicitly defined (l.237)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers and chairs for their time and their thoughtful comments and suggestions on how best to improve our work. In light of these reviews, we have decided to withdraw this manuscript from ICLR.\"}", "{\"summary\": \"The paper describes PSALM (Protein Sequence Annotation using Language Models), a hierarchical approach that is an alternative approach to profile HMMs (pHMM), the current state-of-the-art for protein domain-based homology detection. PSALM uses representations of protein sequences learned by protein language models to do residue-level protein sequence annotation, a common and important task for biologists. The authors also develop a benchmarking dataset for remote, multi-domain, protein homology detection tasks (MDPH-Bench). Other approaches, both sequence alignment based (pHMM) and convolutional neural network models (ProtENN) identify one domain at a time for a particular sequence, whereas PSALM can predict multiple domains in a sequence. Performance of PSALM is meaningfully stronger in the 0-20% PID category, which is where the most difficult to identify homologous sequences are found. In the other categories PSLAM and HMMER are doing on par with each other with very small differences. The PSALM annotation examples shown are in the higher PID categories, not in the lower, and the authors note that one example was removed from the Uniprot database, why was this?\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"PSALM uses representations of protein sequences learned by protein language models to do residue-level protein sequence annotation, a common and important task for biologists.\\n\\nThe authors also develop a benchmarking dataset for remote, multi-domain, protein homology detection tasks (MDPH-Bench). Other approaches, both sequence alignment based (pHMM) and convolutional neural network models (ProtENN) identify one domain at a time for a particular sequence, whereas PSALM can predict multiple domains in a sequence. \\n\\nPerformance of PSALM is meaningfully stronger in the 0-20% PID category, which is where the most difficult to identify homologous sequences are found.\", \"weaknesses\": \"The novel annotation examples shown are in the higher PID categories, not in the lower where the PSALM does far better than HMMs\\n\\nThe performance on categories higher than 0-20% PID is generally very similar between HMM and PSALM. 
Given that HMMs are more 'explainable' than PSALM (they are based on multiple sequence alignments), why would one use PSALM on these higher PID classes?\\n\\nThe authors note that one example was removed from the UniProt database, why was this? Why not use an example for a protein sequence that is in this database, which was used for their training?\", \"questions\": \"Why didn't you show annotation examples from the 0-20% PID group? This class of sequences seems to be where PSALM can annotate things that HMMs can't annotate.\\n\\nWhy do you think PSALM has similar performance for the higher PID groups to the HMMs? Are HMMs identifying all/most of the 'information' needed for annotation? Can you use a combination of PSALM/HMM to do better in these groups? \\n\\nWhy is your example for Uniprot not in the database any longer/why not use another example?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BcYt84rcKq
Fourier Sliced-Wasserstein Embedding for Multisets and Measures
[ "Tal Amir", "Nadav Dym" ]
We present the _Fourier Sliced-Wasserstein (FSW) embedding_—a novel method to embed multisets and measures over $\mathbb{R}^d$ into Euclidean space. Our proposed embedding approximately preserves the sliced Wasserstein distance on distributions, thereby yielding geometrically meaningful representations that better capture the structure of the input. Moreover, it is injective on measures and _bi-Lipschitz_ on multisets—a significant advantage over prevalent methods based on sum- or max-pooling, which are provably not bi-Lipschitz, and, in many cases, not even injective. The required output dimension for these guarantees is near-optimal: roughly $2 N d$, where $N$ is the maximal input multiset size. Furthermore, we prove that it is _impossible_ to embed distributions over $\mathbb{R}^d$ into Euclidean space in a bi-Lipschitz manner. Thus, the metric properties of our embedding are, in a sense, the best possible. Through numerical experiments, we demonstrate that our method yields superior multiset representations that improve performance in practical learning tasks. Specifically, we show that (a) a simple combination of the FSW embedding with an MLP achieves state-of-the-art performance in learning the (non-sliced) Wasserstein distance; and (b) replacing max-pooling with the FSW embedding makes PointNet significantly more robust to parameter reduction, with only minor performance degradation even after a 40-fold reduction.
[ "Sliced Wasserstein distance", "Euclidean embedding", "multiset embedding", "bi-Lipschitz", "permutation invariant" ]
Accept (Poster)
https://openreview.net/pdf?id=BcYt84rcKq
https://openreview.net/forum?id=BcYt84rcKq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tZpaS1IOOh", "nkyzy6KHJZ", "k3K9r1XiaW", "iVU2Sfr4Vk", "Txv9eXoRxe", "RpWXRytT4K", "PVCP4i5tks", "M8SGq2mKvx", "Lj1TVi07qw", "GacppfgBCr", "BgfHtFoRbn", "6Y7MWhkAt4", "2PqI44fpvx", "2L2fEvZCFH", "1ntABBDNgM", "01NLCCW0DZ" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1733182683240, 1730068178874, 1731597068898, 1731761498657, 1737523450789, 1731597156168, 1730559315098, 1732478304389, 1733084192508, 1730804514968, 1731623977803, 1730676021351, 1731623941172, 1731514132510, 1732612729409, 1734771141766 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1401/Reviewer_styv" ], [ "ICLR.cc/2025/Conference/Submission1401/Reviewer_styv" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Reviewer_gmod" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Reviewer_Ew1o" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Reviewer_kJBi" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Authors" ], [ "ICLR.cc/2025/Conference/Submission1401/Reviewer_Ew1o" ], [ "ICLR.cc/2025/Conference/Submission1401/Area_Chair_oTZQ" ] ], "structured_content_str": [ "{\"comment\": \"The reviewer thank the authors' response. I will keep my rate (accept).\"}", "{\"summary\": \"**Summary:**\\nThe paper introduces the \\\"Fourier Sliced Wasserstein (FSW) embedding\\\" for data in \\\\(\\\\mathbb{R}^d\\\\).\\n\\n**Theoretical Contributions:** \\n1. The authors prove that the embedding preserves or approximates the sliced Wasserstein distance. \\n2. They also demonstrate that the embedding technique is injective and bi-Lipschitz.\\n\\n**Numerical Experiments:** \\n1. The authors evaluate the approximation error of the proposed Fourier Sliced Wasserstein embedding. \\n2. They showcase an application of FSW for approximating the Wasserstein distance using a Multi-Layer Perceptron (MLP).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The combination of the Fourier/cosine transform and the sliced Wasserstein distance (see Eq. (6)) is a novel approach.\\n2. Theoretical properties for this new technique with respect to the uniform distribution, along with its empirical approximation, are proposed (see Theorem 3.2, Corollary 3.3).\\n3. Injectivity and bi-Lipschitz properties of the embedding have been investigated.\", \"weaknesses\": \"1. I recommend adding a section to introduce baseline methods. For example, explaining how Sinkhorn [Cuturi, 2013] can be used to train a neural network as a Wasserstein distance estimator. Currently, the experimental setup (E1, E2, Phi, Leaky-ReLU) appears tailored only to the proposed method in this paper.\\n2. It would be beneficial to introduce a real-data application of the proposed Sliced Wasserstein distance embedding technique to illustrate its practical utility.\\n3. I\\u2019m unclear on why 'bi-Lipschitz' is considered a crucial property. 
Could you provide an example to clarify? For instance, in which applications would the lack of a bi-Lipschitz property cause issues, and where having this property could offer distinct advantages?\", \"questions\": \"1. Regarding point (2), is \\\"Multisets\\\" simply another term for \\\"discrete distributions\\\"?\\n2. Could you clarify what \\\"E2\\\" refers to in lines 473-474?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer kJBi\", \"comment\": \"We thank the reviewer for the comments. We have uploaded a revised version of the manuscript, where the comments have been addressed and incorporated.\\n\\n**Response to summary:**\", \"we_would_like_to_highlight_an_additional_theoretical_contribution_in_our_paper\": \"the impossibility result of Theorem 4.4, which proves that it is impossible to embed discrete distributions into any finite-dimensional Euclidean space in a bi-Lipschitz manner. This saves the community further efforts in that direction, and essesntially shows that an embedding with substantially better analytical properties than the FSW does not exist.\\n\\n**Response to weaknesses:**\\n\\n1. **On the inclusion of $\\\\mathcal{W}_{\\\\infty}$:** We included the definition of the $p$-Wasserstein distance with $p = \\\\infty$ to allow our results to be stated across all $p \\\\in [0, \\\\infty]$. Our bi-Lipschitzness guarantee and impossibility result apply uniformly for all $p$ in this range.\\n2. **Complexity of Wasserstein when $d=1$**: The complexity in this case is the computational complexity of the sort function, which is $\\\\mathcal{O}(n \\\\log n)$. This is stated in lines 223-224.\\n3. **Definition of STD:** STD here is the Standard Deviation. We clarified this in the revised manuscript, l. 341-342.\\n4. **Proof ideas in the main text:** We added an overview of the proof ideas of Theorems 4.1, 4.2 and 4.4 to the revised manuscript. Thank you for this comment.\\n\\n**Response to question on the practical motivation:**\\n\\nTo illustrate the advantage of our approach for practical applications, consider a learning task on multisets handled by traditional architectures based on sum- or max-pooling. With these methods, certain pairs of input multisets may appear numerically identical, meaning the architecture will not be able to distinguish between them\\u2014even if they represent different underlying data. In contrast, our approach, due to its bi-Lipschitzness guarantee, can distinguish between these multisets in a way that reflects their actual differences. This practical advantage is evidenced in our experimental results shown in Table 2.\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your thoughtful comments on our submission.\\n\\nWe have responded to all of your comments and uploaded a revised version of the manuscript, carefully addressing your feedback. Changes are highlighted in blue for your convenience.\\n\\nWe would greatly appreciate it if you could confirm whether our responses and the revised manuscript adequately address your concerns. 
This will help us ensure that we have addressed all your feedback within the discussion period.\\n\\nThank you for your time and consideration.\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer gmod\", \"comment\": \"Thank you :)\"}", "{\"summary\": \"The paper seeks to establish a mapping from multisets and measures over $ \\\\mathbb{R}^d $ into Euclidean space, ensuring that the sliced Wasserstein distance corresponds to the distance between their mappings in the target space. The authors propose a mapping that is bi-Lipschitz for multisets and injective for measures. Additionally, they demonstrate that a bi-Lipschitz map for measures does not exist.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-structured, and its message is clear. The proofs provided are rigorous and exceptionally clear. This particular problem is quite interesting. I really enjoyed reading the paper.\", \"weaknesses\": \"I don't see any weaknesses. Therefore, I recommend accepting it.\", \"questions\": \"I haven't any question.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear Reviewers,\\n\\nThank you again for your thoughtful comments on our submission. As mentioned in our previous post,\\nwe have responded to all of your comments and uploaded a revised version of the manuscript, carefully addressing your feedback. Changes are highlighted in blue for your convenience.\\n\\nAs the discussion period comes to a close, we would greatly appreciate it if you could confirm whether our responses and the revised manuscript adequately address your concerns. \\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear Reviewer kJBi,\\n\\nWe wanted to follow up regarding our response to your review. In our rebuttal, we addressed all the points raised, including the practical motivation for our approach, proof ideas, and other requested clarifications.\\n\\nIf you find that our revision and explanations have resolved your concerns, we would greatly appreciate it if you could consider revisiting and potentially adjusting your score.\\n\\nThank you for your review and feedback. Please don\\u2019t hesitate to reach out if there are any remaining points we can clarify further.\\n\\nBest regards,\\nThe Authors\"}", "{\"summary\": \"This paper considers Fourier slicing embedding both for a collection of probability distributions and multisets over $\\\\mathbb{R}^d$ and supported at $n$ points. The embedding consists of a projection sample on a 1-dimensional vector on the sphere then calculates a cosine transform of the projected quantile function. Under a specific probability distribution of the frequency, the authors prove that the expectation of the estimation error between the embedded measures is exactly the sliced Wasserstein distance. A second part of the theoretical results consists of proving the injectivity of the embedding under the assumption that the dimension embedding $m \\\\geq 2n(d+1) +1$. Numerical experiments are conducted on point cloud classification.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow. 
Proofs are rigorous.\", \"Proposing the sliced embedding Wasserstein (SEW) through a cosine transform of the projected quantile function. Sampling the quantile function via cosine transform is novel.\", \"Injectivity and bi-Lipschitz properties of FSEW on the collection of multisets.\", \"Numerical experiments showcase better Wasserstein approximation on simulated datasets and three real datasets than NProductNet, WPCE, NSDeepSets, and Sinkhorn.\"], \"weaknesses\": [\"Several approaches for the derivative of sliced Wasserstein distance like, distributional sliced Wasserstein (Nguen et al, ICLR'21), max-sliced Wasserstein, etc ... Could you highlight the difference between FSW and the SOTA derivative of sliced Wasserstein?\"], \"questions\": \"See Weaknes section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer styv (Part 2 of 2)\", \"comment\": \"**Weakness 3: Why bi-Lipschitzness is important**\\n\\nThe lack of bi-Lipschitzness, which plagues most prevalent multiset architectures to date, practically implies that there inevitably exist pairs of different input multisets that appear numerically identical to the architecture. This poses a problem, for example, in classification tasks where such pairs need to be assigned different labels. Any multiset architecture based on sum- or max-pooling is provably affected by this problem [FWT]. \\n\\nAchieving a bi-Lipschitz embedding for multisets has been recognized as an important goal in previous works. Our work is the first to fully achieve this goal. Below is a selection of previous works that underscore the importance of bi-Lipschitzness:\\n\\n> \\\"The question of which metric spaces admit a bilipschitz embedding into some (finite-dimensional) Euclidean space is natural, and has received a lot of attention in recent work. The results obtained so far indicate that there is no simple answer to this question.\\\"\\n>\\n> \\u2014 Lang, Urs, and Conrad Plaut. \\\"Bilipschitz embeddings of metric spaces into space forms.\\\" _Geometriae Dedicata 87_ (2001): 285-307.\\n\\n\\n> \\\"Since the late 1990\\u2019s, it has become apparent that designing efficient approximate nearest neighbor algorithms, at least for high-dimensional data, is closely related to the task of designing _low-distortion embeddings_. A _bi-Lipschitz embedding_ between two metric spaces $(X,d\\\\_X)$ $(X',d\\\\_{X'})$ is a mapping $f : X \\\\to X'$ such that ... where the parameter $D \\\\geq 1$ called [_sic_] the _distortion_ of $f$.\\\"\\n>\\n> \\u2014 Indyk, Piotr, and Assaf Naor. \\\"Nearest-neighbor-preserving embeddings.\\\" _ACM Transactions on Algorithms (TALG)_ 3.3 (2007): 31-es.\\n\\n> \\\"The second negative result is that while moments of MLPs with analytic activations can be injective, they can never be stable in the bi-Lipschitz sense. This points to a possible advantage of injective multiset functions that are not based on moments, but rather on sorting or max-filters.\\\"\\n>\\n> \\u2014 Amir, T., Gortler, S., Avni, I., Ravina, R., & Dym, N. (2023). \\\"Neural injective functions for multisets, measures and graphs via a finite witness theorem.\\\" _Advances in Neural Information Processing Systems (NeurIPS)_ 37 (2023)\\n\\n\\n> \\u201cWe propose developing fine-grained expressivity results, namely metric equivalencies between explicit graph metrics and feature metrics for GNNs on graphs with features. 
An ideal result would derive a bi-Lipschitz correspondence between such metrics.\\u201d\\n> \\n> \\u2014 Christopher Morris, Nadav Dym, Haggai Maron, \\u0130smail \\u0130lkan Ceylan, Fabrizio Frasca, Ron Levie, Derek Lim, Michael Bronstein, Martin Grohe, and Stefanie Jegelka. \\\"Future Directions in Foundations of Graph Machine Learning.\\\" _International Conference on Machine Learning (ICML)_ (2024)\\n\\n**Question 1: Regarding point (2), is \\\"Multisets\\\" simply another term for \\\"discrete distributions\\\"?**\\n\\n_Multisets_ and _discrete distributions_ refer to different concepts. Multisets are essentially sets that account for repetitions. For instance, $\\\\\\\\{b,a,b\\\\\\\\} = \\\\\\\\{a,b,b\\\\\\\\} \\\\neq \\\\\\\\{a,b\\\\\\\\}$. Discrete distributions, on the other hand, are probability distributions with finite support. However, multisets can be idenitified with the subset of of discrete distributions with uniform weights, as discussed beginning at line 183.\\n\\n**Question 2: Could you clarify what \\\"E2\\\" refers to in lines 473-474?**\\n\\n$E_1$ and $E_2$ are two independent instances of the FSW embedding, with different input and output dimensions. $E_1$ maps distributions over $\\\\mathbb{R}^d$ into $\\\\mathbb{R}^{m_1}$, and $E_2$ maps distributions over $\\\\mathbb{R}^{m_1}$ into $\\\\mathbb{R}^{m_2}$, with $m_1$, $m_2$ being architecture hyperparameters. This is detailed in the manuscript (lines 481-485 in the revised version).\\n\\n**References:**\\n\\n[Chen] Chen, S., Wang, Y. \\\"Neural approximation of Wasserstein distance via a universal architecture for symmetric and factorwise group invariant functions.\\\" _Advances in Neural Information Processing Systems (NeurIPS)_ 37 (2023).\\n\\n[FWT] Amir, T., Gortler, S., Avni, I., Ravina, R., & Dym, N. (2023). \\\"Neural injective functions for multisets, measures and graphs via a finite witness theorem.\\\" _Advances in Neural Information Processing Systems (NeurIPS)_ 37 (2023).\"}", "{\"summary\": \"This paper presents a novel approach to high-dimensional dataset embedding. The authors provided theoretical performance guarantees and numerical study to show the superior performance of the framework.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The theoretical contribution seems to be sound, with explicitly stated technical assumptions and results. Numerical study is solid.\", \"weaknesses\": \"1. The authors denovted much space to describe the p-wasserstein and infinity-type Wasserstein distance. Why it is necessary to introduce infinity-type Wasserstein distance?\\n2. In line 222, the authors mentioned that in the special case of d=1, Wasserstien can be computed significantly fast. So what is the complexity rate?\\n3. In line 344, what is the definition of STD???\\n4. The authors should provide proof ideas for the main technical results in the main content.\", \"questions\": \"I am new to this field. Could the authors elaborate more on the practical motivation and applications of this approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer styv (Part 1 of 2)\", \"comment\": \"**Response to summary:**\\n\\nWe would like to highlight that, in addition to the theoretical guarantees for our embedding, we present the impossibility result stated in Theorem 4.4. 
This result proves that it is impossible to embed discrete distributions into any finite-dimensional Euclidean space in a bi-Lipschitz manner. This saves the community further efforts in that direction, and essesntially shows that an embedding with substantially better analytical properties than the FSW does not exist.\\n\\n \\n\\n**Weakness 1: Introducing baseling methods**\\n\\nThank you for your suggestion. We added to the revised manuscript a paragraph describing all the baseline methods (lines 498-504).\\n\\nWe would like to stress that the methods presented in Table 2 are those introduced in [Chen] and earlier papers, and were not implemented by us. Specifically: $\\\\mathcal{N}\\\\_{\\\\textup{ProductNet}}$, $\\\\mathcal{N}\\\\_{\\\\textup{SDeepSets}}$ and WPCE use their own architectures, and Sinkhorn is an approximation algorithm specifically designed to approximate the $p$-Wasserstein distance.\\n\\nOur architecture based on $E_1$, $E_2$, $\\\\Phi$, described in line 485, achieves state of the art results with a simple combination of our embedding and one MLP. In comparison, $\\\\mathcal{N}{\\\\_\\\\textup{ProductNet}}$ produces inferior results using three MLPs.\\n\\nThe only method other than ours that we tested with our architecture was PSWE, which is designed to compute Sliced-Wasserstein preserving embeddings.\\n\\nLastly, the reason why Sinkhorn cannot be incorporated into our architecture is due to its own inherent limitation: it takes _pairs_ of input distributions and estimates their distances, rather than computing a distance-preserving embedding for individual distributions. This significantly limits its applicability to practical learning tasks, as we further discuss in our response to Reviewer Ew1o. We also added a brief explanation in line 307.\\n\\n**Weakness 2: Real-data application of our method**\\n\\nThe experiment presented in Table 2 illustrates the utility of our embedding in a learning task on real-world data (ModelNet-40). We note that our paper includes all experiments from the NeurIPS-accepted work by Chen and Wang [Chen], as well as theoretical results that in our opinion merit acceptance in their own right.\\n\\n_To be continued..._\"}", "{\"title\": \"Response to Reviewer Ew1o\", \"comment\": \"There is a fundamental difference between our embedding approach and approaches such as Distributional Sliced Wasserstein and Max-Sliced Wasserstein. Our approach takes one input distribution at a time and computes an _embedding_, whereas the aforementioned approaches take two input distributions and estimate a _distance_ between them. Pairwise methods have two disadvantages in comparison with embeddings: (i) higher computational complexity when computing multiple pairwise disances, and (ii) limited applicability to real-world learning problems.\\n\\nIn terms of applicability, a pairwise method cannot be directly applied to common learning tasks, such as object classification, where the inputs are typically individual distributions. In contrast, an embedding is readily applicable to such tasks, as demonstrated in our experiments.\\n\\nIn terms of computation, pairwise methods to estimate sliced optimal transport distances typically require $\\\\tilde{\\\\mathcal{O}}(mnd)$ time (neglecting logarithmic factors), where $m$ is the number of slices, $n$ is the maximal number of support points, and $d$ is the ambient dimension of the support. Thus, computing all pairwise distances for a set of $k$ distributions would take $\\\\tilde{\\\\mathcal{O}}(k^2 mnd)$ time. 
In contrast, computing our embedding takes $\\tilde{\\mathcal{O}}(mnd)$ time for each input distribution, and pairwise distances can then be computed in the Euclidean space $\\mathbb{R}^m$, resulting in a considerably lower total complexity of $\\tilde{\\mathcal{O}}(k mnd + k^2 m)$. This approach is therefore significantly more scalable for large datasets where pairwise distance computations are required.\\n\\nWe appreciate the reviewer's comment and will clarify this in the paper.\"}", "{\"comment\": \"I thank the authors for their answers to my concern. I am keeping my score the same.\"}", "{\"metareview\": \"This paper proposes a novel approach to high-dimensional dataset embedding. Most reviewers found\\nthat the paper is of interest and provides relevant contributions to the field.\", \"additional_comments_on_reviewer_discussion\": \"There were few discussions beyond the rebuttals as most authors are happy about the paper.\"}" ] }
Bc15z5RrLo
MixNAM: Advancing Neural Additive Models with Mixture of Experts
[ "Guangzhi Xiong", "Sanchit Sinha", "Aidong Zhang" ]
Additive models, such as Neural Additive Models (NAMs), are recognized for their transparency, providing clear insights into the impact of individual features on outcomes. However, they traditionally rely on point estimations and are constrained by their additive nature, limiting their ability to capture the complexity and variability inherent in real-world data. This variability often presents as different influences from the same feature value in various samples, adding complexity to prediction models. To address these limitations, we introduce MixNAM, an innovative framework that enriches NAMs by integrating a mixture of experts, where each expert encodes a different aspect of this variability in predictions from each feature. This integration allows MixNAM to capture the variability in feature contributions through comprehensive distribution estimations and to include feature interactions during expert routing, thus significantly boosting performance. Our empirical evaluation demonstrates that MixNAM surpasses traditional additive models in performance and is comparable to complex black-box approaches. Additionally, it improves the depth and comprehensiveness of feature attribution, setting a new benchmark for balancing interpretability with performance in machine learning. Moreover, the flexibility in MixNAM configuration facilitates the navigation of its trade-offs between accuracy and interpretability, enhancing adaptability to various data scenarios.
[ "Interpretable Machine Learning", "Neural Additive Model", "Explainable Artificial Intelligence" ]
Reject
https://openreview.net/pdf?id=Bc15z5RrLo
https://openreview.net/forum?id=Bc15z5RrLo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vxAnFZQMa8", "um8zVzfUkp", "ueXyULadrU", "tr60dmfqW0", "tEb66k8pEu", "jfQOs8JjaH", "j7FuofwhIe", "hEJB54Zu8z", "g39aT1O8Wz", "YPaGKQeOHU", "XxfJSI79RW", "XFhPHjWyCR", "KotgqPz0yJ", "H9ur1ZF2sL", "H9dgin5XwL", "BqKyKQKzzi", "3C5GkrNoQ4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732080809738, 1732079466698, 1733214979181, 1732080159958, 1732081064089, 1730260325467, 1732080921676, 1730729188127, 1730695741225, 1732081047809, 1734922584916, 1730375558296, 1732079887773, 1733203605350, 1732315731131, 1732080893782, 1737523656840 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Reviewer_MzpU" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Reviewer_xFNJ" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Reviewer_a4Jz" ], [ "ICLR.cc/2025/Conference/Submission4708/Reviewer_NxyM" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Area_Chair_YVJj" ], [ "ICLR.cc/2025/Conference/Submission4708/Reviewer_MzpU" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Reviewer_NxyM" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Submission4708/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer NxyM\", \"comment\": \"Thank you for providing thoughtful and constructive feedback. Following your suggestions, we conducted additional simulation studies and added the results to the updated manuscript. Here are our detailed responses to your questions.\\n\\n### Weaknesses:\\n\\n> Although MixNAM achieves interpretability with improved accuracy, it relies on a dynamic routing mechanism and multiple experts, which might increase computational requirements. It would be better to include analysis of computational cost and assess the tradeoff of the computational cost and the increased accuracy.\\n\\nThank you for the suggestion! We have analyzed the additional computational cost introduced by the mixture of experts in Appendix J (previously Appendix I). Our analysis shows that there is a quadratic cost increase with respect to the number of features in MixNAM due to the feature interaction modeling within the routing system, which leads to the observed improvement in accuracy.\\n\\n> The paper primarily focuses on tabular data, which raises questions about the generalizability of the framework\\u2019s effectiveness to other domains, such as image or text data.\\n\\nIn the existing literature on additive models, evaluations primarily focus on tabular data, while applications to other modalities, such as text or images, are rare [1,2,3]. This is because the core capability of additive models lies in interpreting features through shape plots, which illustrate how predictions change as feature values monotonically increase or decrease. 
Additive models are difficult to evaluate on raw text or image data, where features and their monotonic relationships are challenging to define.\\n\\nFor example, NBM [2] tested its performance on image data by using concept bottleneck models to preprocess images into tabular data. However, the effectiveness of the overall system was limited by the quality of the extracted concepts, which may not fully capture the information in the original images [4,5]. Therefore, we follow the mainstream in additive model research and evaluate MixNAM using tabular data. We have discussed the potential generalization of MixNAM to other modalities as a future direction in Appendix L (previously Appendix K).\\n\\n[1] Agarwal R, et al. Neural additive models: Interpretable machine learning with neural nets. NeurIPS 2021.\\n\\n[2] Radenovic F, et al. Neural basis models for interpretability. NeurIPS 2022.\\n\\n[3] Chang CH, et al. NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning. ICLR 2022.\\n\\n[4] Koh PW, et al. Concept bottleneck models. ICML 2020.\\n\\n[5] Margeloiu A, et al. Do concept bottleneck models learn as intended?. Arxiv 2021.\\n\\n### Questions:\\n\\n> For datasets with highly sparse features, does the routing mechanism maintain its efficiency, or does it introduce sparsity issues that affect performance?\\n\\nWe have added a new simulation study to investigate MixNAM\\u2019s performance on data with sparse features. Using the original simulation settings, we generated multimodal data with two features $x_1$ and $x_2$. In this new study, we manually controlled the proportion of data points with $x_2=1$ to represent 25%, 5%, and 1% of the dataset. The visual results for NAM and MixNAM are shown in Figure 6 in the updated manuscript.\\n\\nThe results demonstrate that NAM tends to focus on the majority group when the signal for $x_2=1$ is sparse. In contrast, MixNAM successfully identifies both modalities in the data distribution and learns the sparse distribution better with an increasing number of experts.\\n\\n> The simulation study includes only two random variables. Could the study include more experiments on high dimensional variables to better demonstrate MixNAM's effectiveness?\\n\\n> How does MixNAM handle extreme cases of feature variability, where the impact of a feature varies widely across different samples?\\n\\nWe added a new simulation study to explore how MixNAM performs with high-dimensional variables and features with extreme variability in their output contributions. To investigate this, we simulated a data distribution using the following pattern:\\n$$y=\\\\varepsilon + \\\\sum_{i=2}^{NC+1}\\\\frac{T}{NC}x_i\\\\sin(4\\\\pi x_1),$$\\nwhere the feature $x_1$ is sampled from $U(0, 1)$ and $x_2,\\\\cdots,x_{NC+1}$ are the $NC$ categorical features sampled from \\\\{-1, +1\\\\} with equal probability. $T$ is a scaling factor that amplifies variability across different modalities. For this study, we set $T = 64$ and evaluated both NAM and MixNAM across scenarios with $NC = 1, 2, 4, 8, 16$.\\n\\nAs illustrated in Figure 7 of the updated manuscript, the results demonstrate that MixNAM effectively captures the multimodal contributions of $x_1$ to the output, showing consistent performance as the number of features and modalities increases. In contrast, NAM struggles to account for multimodality due to its inherent limitation of feature additivity. 
The results highlight the robustness and scalability of MixNAM in handling multimodal data with high variability.\"}", "{\"title\": \"Response to Reviewer a4Jz (Part 1)\", \"comment\": \"We appreciate your detailed and valuable comments and would like to take this opportunity to clarify our contribution and address the points you have raised. Below are our responses to your questions.\\n\\n### Weaknesses:\\n> The discussion on related work lacks clarity [...]\\n\\nBy \\\"prior distributions\\\", we mean that existing NAM research with uncertainty estimation introduce explicit assumptions about the distributions they model. For example, [1] \\\"impose a zero-mean Gaussian prior distribution over the parameters of each feature network\\\". NAMLSS [2] is tailored to model a pre-determined distribution, \\\"e.g. a normal distribution\\\".\", \"traditional_additive_models_cannot_effectively_capture_feature_interactions_due_to_their_architecture\": \"$$\\\\hat{y}=w_0+f_1(x_1)+\\\\cdots+f_n(x_i),$$\\nwhere each feature's contribution is modeled independently. While they incorporate prior distributions to increase uncertainty and complexity of the model prediction, their additive structure prevents interactions from being represented.\\n\\nBy \\\"simplistic assumptions about output distributions,\\\" we refer to the additive constraints inherent in these architectures mentioned above. Our work aims to exceed these additive constraints, introducing a framework that models complex distributions while maintaining interpretability, balancing the accuracy and interpretability for real-world data analysis.\\n\\n[1] Improving neural additive models with bayesian principles. ICML 2024.\\n\\n[2] Neural additive models for location scale and shape: A framework for interpretable neural regression beyond the mean. AISTATS 2024.\\n\\n> The type of uncertainty or variability captured by the proposed method is not clearly defined [...]\\n\\nThank you for the thoughtful comment! We agree it is valuable to formally define the \\\"variability\\\" in a mathematical way, which will help clarify and highlight the contribution of this work. Our proposed MixNAM seeks to capture variability **not in the randomness of predictions**, but **in the way other features influence the contribution of a specific feature $x_i$ to the final output $\\\\hat{\\ud835\\udc66}$**. This approach allows for the modeling of feature interactions and goes beyond traditional additive models, which inherently assume no interactions between features.\\n\\nConsider the mapping from the input features $x_1,\\\\cdots,x_n$ to the predicted output:\\n\\n$$\\\\hat{y} = F(x_1, x_2, \\\\cdots, x_n),$$\\n\\nwhere $F$ represents the underlying predictive function. For a fixed value of $x_i=a$ the variability of the contribution of $x_i$ to $\\\\hat{y}$ can be formally defined as:\\n\\n$$variability_{x_i=a} = Var_{x_1,\\\\cdots,x_n}[F(x_1, x_2, \\\\cdots, x_n|x_i=a) - E_{x_i}(F(x_1, x_2, \\\\cdots, x_n))].$$\\n\\nThis definition measures how the contributions of $x_i=a$ deviate due to interactions with other features, reflecting variability caused by feature dependencies.\\n\\nFor traditional additive models, the variability defined above reduces to zero because the contributions of each feature are independent and do not interact. 
In models where interactions exist, this term captures the extent to which other features $x_j (j\\\\neq i)$ influence the contribution of $x_i$.\\n\\nFor example, in our simulated unimodal data $y=\\\\sin(4\\\\pi x_1)+x_2$, the variability of $x_1$ at $x_1=a$ is:\\n\\n$$Var_{x_2}[\\\\sin(4\\\\pi a)+x_2-E_{x_1}(\\\\sin(4\\\\pi x_1))-x_2] = Var_{x_2}[\\\\sin(4\\\\pi a)]=0.$$\\n\\nFor the multimodal data where $y=x_2\\\\sin(4\\\\pi x_1)+x_2$, the variability of $x_1$ at $x_1=a$ is:\\n\\n$$Var_{x_2}[x_2\\\\sin(4\\\\pi a)+x_2-E_{x_1}(x_2\\\\sin(4\\\\pi x_1))-x_2] = Var_{x_2}[x_2\\\\sin(4\\\\pi a)]=\\\\sin^2(4\\\\pi a)$$\\n\\nBy modeling such uncertainty/variability with MoE, we are trying to close the gap between interpretable but poor-performing additive models and powerful but uninterpretable black-box models/true data distributions.\"}", "{\"comment\": \"Thank you authors for the detailed response and experiments. Most of my questions have been mostly solved so I keep my score positive.\"}", "{\"title\": \"Response to Reviewer a4Jz (Part 3)\", \"comment\": \"### Additional Comments\\n\\n> In expression (12), the denominator equals zero.\\n\\nThanks for pointing it out. The denominator should be `upper(o_i|x_i)-lower(o_i|x_i)`.\\n\\n> In Figure 4, why is NAM unable to capture multimodality? Were proper hyperparameters used?\\n\\nYes, we tuned the hyperparameters of NAM to optimize its ability to fit multimodal data. However, NAM is inherently unable to capture multimodality because it provides only a single deterministic output value for each feature value. In contrast, the true underlying distribution for the multimodal data can yield multiple possible outputs for the same feature value.\\n\\n> Please clearly indicate what the values after the +/- symbol represent (standard errors?).\\n\\nYes, the values after the \\\"+/-\\\" symbol represent the standard deviations of the metric scores across different runs. Following previous research, we tested models on the Housing and Year datasets using 10 different random seeds. For other datasets, evaluations were conducted using five-fold cross-validation. More details about the implementation settings can be found in Appendix B.\\n\\n> Label the y-axis in Figure 3 for clarity.\\n\\nThe y-axis represents the contribution of each feature to the final prediction for a given instance. We have updated the manuscript to explicitly label the y-axis as \\\"Output Contribution\\\" for clarity.\"}", "{\"title\": \"Response to Reviewer xFNJ (Part 2)\", \"comment\": \"> The multimodal experiments are all based on small-scale simulated datasets, the authors should benchmark its method on larger-scale multimodal benchmarks.\\n\\nOur current simulation analysis is designed as a qualitative demonstration of how MixNAM captures multimodal distributions that traditional additive models fail to model. Traditional additive models inherently produce deterministic outputs for each feature value, which limits their ability to handle multimodal data.\\n\\nFor larger-scale multimodal benchmarks, we have added an additional simulation study in Section E.2 of our updated manuscript. This study evaluates MixNAM\\u2019s performance as the number of features, samples, and modalities increases. 
Specifically, we simulated a data distribution using the following pattern:\\n$$y=\\\\varepsilon + \\\\sum_{i=2}^{NC+1}\\\\frac{T}{NC}x_i\\\\sin(4\\\\pi x_1),$$\\nwhere the feature $x_1$ is sampled from $U(0, 1)$ and $x_2,\\\\cdots,x_{NC+1}$ are the $NC$ categorical features sampled from \\\\{-1, +1\\\\} with equal probability. $T$ is a scaling factor that amplifies variability across different modalities. For this study, we set $T = 64$ and evaluated both NAM and MixNAM across scenarios with $NC = 1, 2, 4, 8, 16$. The number of simulated samples is set dynamically as $2000\\\\times NC$.\\n\\nThe results, as presented in Figure 7 of the updated manuscript, demonstrate that MixNAM effectively captures the multimodal contributions of $x_1$ to the output, showing consistent performance as the number of features and modalities increases. It highlights the robustness and scalability of MixNAM in handling multimodal data with high variability.\\n\\n### Questions:\\n> How does the proposed method handle load imbalance when training MoE models? Is the expert variation penalty similar to the load imbalance loss function?\\n\\nWe did not explicitly use a load imbalance loss function to address this issue. Instead, we included expert dropout, as mentioned in line 228, to prevent the model from over-relying on specific experts.\"}", "{\"summary\": \"This paper introduces an enhancement to Neural Additive Models (NAMs) by incorporating a Mixture of Experts (MoE) framework, enabling the model to capture feature variability and complex feature interactions. This approach allows each feature to be modeled through multiple expert predictions, which are dynamically selected based on relevance, thus addressing traditional NAM limitations in handling real-world data complexity. This new framework enhances additive models' utility, offering advanced predictive accuracy and transparency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The paper combines NAMs\\u2019 interpretability with MoE\\u2019s capacity for multi-aspect feature representation, contributing a new approach to interpretable machine learning.\\n\\n2. MixNAM\\u2019s ability to capture variability in feature impacts is highly relevant for real-world applications that require transparent models with high predictive power. The proposed method also allows users to balance accuracy with interpretability.\", \"weaknesses\": \"1. As described in line 164, each expert $E_{ik}$ is implemented as a linear layer, but normally, MLPs are used as base models for experts, the authors should justify why choosing linear model as experts.\\n\\n2. The novelty of this paper is quite limited, as they just apply regular MoE architecture on top of each feature's embedding and weighted combine them to obtain the outputs, even so, the authors did not provide a detailed rationale for choosing such a method. \\n\\n3. In line 336, it seems that the uncertainty intervals are formulated by a group of expert predictions, this is not a natural way of producing prediction intervals, as the uncertainty should come from the model parameters or the data itself.\\n\\n4. The multimodal experiments are all based on small-scale simulated datasets, the authors should benchmark its method on larger-scale multimodal benchmarks.\", \"questions\": \"How does the proposed method handle load imbalance when training MoE models? 
Is the expert variation penalty similar to the load imbalance loss function?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer MzpU (Part 2)\", \"comment\": \"### Questions:\\n\\n> How was the hyper-parameter \\\\lambda selected? Was it cross-validated against prediction accuracy? It would be great to provide some heuristics to select it to balance the accuracy and interpretability.\\n\\nWe analyzed different $\\\\lambda$ values on the Housing dataset with respect to accuracy, additivity, and tightness (Table 3), as well as visual outputs (Table 2). These results showed that $\\\\lambda = 0.1$ represents a \\\"sweet spot,\\\" enabling MixNAM to significantly outperform strictly additive models while preserving interpretability through faithful shape plots with tight estimated bounds. Experiments on other datasets further validated the robustness of $\\\\lambda = 0.1$, which consistently achieved improved accuracy (Table 1) and retained interpretability (Figures 9-12).\\n\\n> It would be great to compare the interpretation of MixNAM with any post-hoc feature attribution (e.g., SHAP) on XGBoost, which is a popular choice to gain interpretability on tabular data. What is the difference between these two options? A practitioner may like to know when they should use an interpretable MixNAM and when they should use XGBoost with post-hoc interpretation methods.\\n\\nThank you for this great suggestion! Post-hoc feature attribution methods, such as LIME [1] and SHAP [2], provide local explanations, describing the contributions of features for individual predictions. In contrast, MixNAM provides a transparent and global understanding of how features influence predictions across the dataset. MixNAM advances traditional additive models by overcoming performance constraints while maintaining intrinsic interpretability.\\n\\nIn practice, MixNAM is ideal when both global feature explanations and high model accuracy are required, while XGBoost with post-hoc methods may suffice if localized interpretability alone is adequate.\\n\\n[1] \\\"Why Should I Trust You?\\\": Explaining the Predictions of Any Classifier. KDD 2016.\\n\\n[2] A Unified Approach to Interpreting Model Predictions. NeurIPS 2017.\"}", "{\"summary\": \"Additive models like Neural Additive Models (NAMs) are valued for their transparency, clearly showing how individual features impact outcomes. However, their reliance on point estimates and additive structure limits their ability to capture complex, variable feature influences in real-world data. To address these limitations, MixNAM is introduced as a framework that enriches NAMs through a mixture of experts, each capturing different aspects of feature variability. This approach allows MixNAM to model diverse feature contributions, and interactions, and allow distribution estimations. Empirical results show that MixNAM not only outperforms traditional additive models but also approaches the performance of complex black-box methods while providing detailed feature attributions. 
Its flexible configuration further allows for balancing accuracy and interpretability, adapting to various data scenarios effectively.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall, the paper is clear and well-written.\", \"The paper attempts to contribute to the important and relevant topic of uncertainty quantification in neural additive models with mixtures of experts\", \"The proposed method is evaluated on both simulated and real-world data, using multiple criteria, including model additivity, bound tightness, and prediction accuracy.\"], \"weaknesses\": [\"The discussion on related work lacks clarity. For instance, the authors state that \\\"these approaches are still limited by their inevitable assumptions of prior distributions and the additive nature of the entire models, which restrict their ability to accurately reflect complex underlying distributions and interactions.\\\" However, it is unclear what is meant by \\\"prior distributions.\\\" Why can\\u2019t these models capture interactions effectively? Why are they considered less flexible? Additionally, the authors mention that \\\"these models generally rely on a simplistic assumption about output distributions.\\\" More specific details are needed to understand the limitations that the proposed approach aims to address.\", \"The type of uncertainty or variability captured by the proposed method is not clearly defined. The authors use various terms, such as \\\"different aspect of the variability,\\\" \\\"more comprehensive insight into the output distribution,\\\" \\\"captures a broad range of possible outcomes,\\\" \\\"detailed variance,\\\" and \\\"more comprehensive insight into the output distribution.\\\" However, this language lacks rigor and does not clearly communicate the specific type of uncertainty intended to be captured. Furthermore, in a regression context where \\\\(L\\\\) represents the MSE, it is unclear how the proposed method would capture \\\"uncertainty\\\" by minimizing MSE (instead of proper scoring rules).\", \"Although the authors consider different values for \\\\(K\\\\) and \\\\(C\\\\) in Table 8 in the appendix, the roles of these hyperparameters are not well explained. The results appear relatively consistent, and the impact of using a single expert is missing from the analysis. Additionally, there is no study of the values for \\\\(\\\\gamma\\\\) and \\\\(\\\\lambda\\\\) in equation (8), particularly with extreme values (gamma = 0).\", \"As demonstrated by the authors in Appendix E, the proposed method can essentially be viewed as a generalized additive model with specific normalization. The added complexity in the approach is not well justified when compared to existing methods.\", \"Only six relatively \\\"old\\\" datasets are used in this study. A broader selection of available tabular datasets would provide a stronger and more comprehensive evaluation. Refer to L\\u00e9o Grinsztajn et al. \\u201cWhy do tree-based models still outperform deep learning on tabular data?\\u201d (July 2022) and Pieter Gijsbers et al. \\u201cAn Open Source AutoML Benchmark\\u201d (July 2019) for relevant data sources.\", \"The authors write, \\\"The bounds represent the maximum and minimum potential outputs for a feature.\\\" It would be helpful to clarify what is meant by \\\"possible\\\" in this context. Possible according to what criteria? These bounds are directly affected by the number of experts. 
How is that important?\", \"Additional Comments\", \"In expression (12), the denominator equals zero.\", \"In Figure 4, why is NAM unable to capture multimodality? Were proper hyperparameters used?\", \"Please clearly indicate what the values after the +/- symbol represent (standard errors?).\", \"Label the y-axis in Figure 3 for clarity.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces MixNAM, an extension of Neural Additive Models (NAMs) designed to enhance both accuracy and interpretability. MixNAM incorporates a mixture of experts, each capturing different aspects of feature variability, to address the limitations of traditional NAMs, which struggle to represent complex data patterns. Experiments show that MixNAM improves accuracy over traditional additive models while achieving performance comparable to black-box models, achieving a balance between interpretability and predictive power.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper introduce MoE within the NAM framework to overcoming the limitations of traditional additive models. This architecture allows MixNAM to capture data complexity more effectively than NAMs.\", \"The paper includes experiments across various datasets, including both real-world and simulation data, to demonstrate MixNAM's improved accuracy and interpretability compared to both traditional additive models and more complex black-box approaches.\"], \"weaknesses\": [\"Although MixNAM achieves interpretability with improved accuracy, it relies on a dynamic routing mechanism and multiple experts, which might increase computational requirements. It would be better to include analysis of computational cost and assess the tradeoff of the computational cost and the increased accuracy.\", \"The paper primarily focuses on tabular data, which raises questions about the generalizability of the framework\\u2019s effectiveness to other domains, such as image or text data.\"], \"questions\": [\"For datasets with highly sparse features, does the routing mechanism maintain its efficiency, or does it introduce sparsity issues that affect performance?\", \"The simulation study includes only two random variables. Could the study include more experiments on high dimensional variables to better demonstrate MixNAM's effectiveness?\", \"How does MixNAM handle extreme cases of feature variability, where the impact of a feature varies widely across different samples?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xFNJ (Part 1)\", \"comment\": \"Thank you for your thoughtful feedback. Your expertise in Mixture of Experts (MoE) is evident and greatly appreciated. However, we feel that our core contribution\\u2014advancing additive models by integrating MoE to balance interpretability and performance\\u2014may not be recognized. 
Below, we provide detailed responses to your concerns and clarify the unique motivations and contributions of MixNAM.\\n\\n### Weaknesses:\\n\\n> As described in line 164, each expert $E_{ik}$ is implemented as a linear layer, but normally, MLPs are used as base models for experts, the authors should justify why choosing linear model as experts.\\n\\nWe respectfully disagree that the use of linear layers should be considered a weakness. Instead, it is a critical design choice to improve the efficiency of the architecture. While it would be natural to implement $C$ separate MLPs for the $C$ experts of each feature, this approach would scale the number of parameters by a factor of $C$. To address this, we share parameters among experts by implementing a single MLP followed by $C$ linear layers, as shown in Figure 1. This design significantly reduces the parameter overhead while preserving flexibility in the expert outputs. The experimental results in Table 1 validate the effectiveness of this design, demonstrating improved performance compared to traditional additive models.\\n\\n> The novelty of this paper is quite limited, as they just apply regular MoE architecture on top of each feature's embedding and weighted combine them to obtain the outputs, even so, the authors did not provide a detailed rationale for choosing such a method.\\n\\n**The focus of this research is not to apply the MoE framework arbitrarily but to advance the capabilities of additive models**. The primary motivation for this work stems from the limitations of existing additive models, which typically produce less accurate predictions than black-box models. As shown in Table 1, additive models (marked with \\\"FA\\\") consistently underperform compared to non-additive models (e.g., MLP, XGBoost).\\n\\nOur objective is to improve the performance of additive models while preserving their interpretability. By introducing a mixture of experts, we relax the strict additivity constraint by allowing feature interactions in the expert routing mechanism, while maintaining an additive structure in the overall prediction. Empirical results (Table 1) show that this approach significantly enhances performance, achieving results comparable to complex black-box models. At the same time, MixNAM retains interpretability by enabling visualizations of feature contributions (Figures 3, 9-12). This makes MixNAM a valuable alternative to unexplainable methods like XGBoost.\\n\\n> In line 336, it seems that the uncertainty intervals are formulated by a group of expert predictions, this is not a natural way of producing prediction intervals, as the uncertainty should come from the model parameters or the data itself.\\n\\nThe uncertainty/variability addressed in this study does not refer to randomness in model predictions. Instead, we focus on the variability in how a feature's contribution, $x_i = a$, is influenced by other features, $x_j$ for $j \\\\neq i$. Formally, this variability is defined as:\\n$$variability_{x_i=a} = Var_{x_1,\\\\cdots,x_n}[F(x_1, x_2, \\\\cdots, x_n|x_i=a) - E_{x_i}(F(x_1, x_2, \\\\cdots, x_n))].$$\\nThis term is zero in additive models, where features do not interact, but nonzero in scenarios with feature interactions. 
By modeling such variability with MoE, we aim to bridge the gap between interpretable but less accurate additive models and powerful but opaque black-box models.\"}", "{\"metareview\": \"Summarization\\n\\nThe paper addresses the limitations of Neural Additive Models (NAMs), which, while interpretable and simple, fail to capture the complex relationships in data. Specifically, NAMs assume that the influence of a feature is uniform across all instances, ignoring variations in importance caused by feature values or contextual differences. To solve this, the paper proposes a Mixed Neural Additive Model that integrates multiple feature-specific experts. Each expert captures a distinct aspect of feature influence, and their outputs are dynamically combined using a routing mechanism. This design improves flexibility while retaining interpretability.\\n\\nExperiments show that the proposed method outperforms traditional additive models and achieves results comparable to non-additive models. Additionally, it provides visualizations of feature contribution distributions. The paper said, that unlike existing solutions that rely on restrictive assumptions or compromise the additive structure, the proposed method gives a better balance between performance and interoperability.\\n\\nStrengths\\n\\nThe main strengths have been twofolds 1) the proposed method is novel to capture the feature variety overcoming the shortage of traditional neural additive model; 2) the demonstrated experimental results show the effectiveness of the proposed method especially in balancing the interpretability and performance. \\n\\nWeaknesses\\n\\nOverall, I think there are three major concerns of the paper 1) the paper did not demonstrate sufficient technical novelty. As pointed out by two of the reviewers who gave 3, the paper seems to be a combination of the \\u201cmix of experts\\u201d and the \\u201cnatural additive model\\u201d, with a weighted combination of experts. 2)The presentation is vague, especially on some of the key aspects, such as the motivation of the paper, and the contribution of the paper. Some terms are not mathematically well-defined. 3) More datasets and comparisons are required to fully verify the effectiveness of the proposed method. \\n\\nRecommendation\\n\\nOverall, while I agree with reviewers that the paper explores an important direction to consider the feature correlations and balance between interpretability and performance \\u2013 the current version does not fully satisfy the standard to get published as concerns raised by reviewers regarding the technical contribution, empirical evaluation and clarity in terminology and contribution have not been fully addressed.\", \"additional_comments_on_reviewer_discussion\": \"The paper has got 4 reviews with diverse ratings (3, 3, 6, 6). Two of them give a six (marginally above the threshold) \\u2013 they have expressed satisfaction with the response of authors but did not increase the score (actually they have both explicitly indicated they will keep the score), which shows that although the questions they raised have been solved in some satisfiable extent to them, they still thought the paper is not good enough to be given a fully accept. The other two reviewers only give a rate of 3 (clear reject). It is a pity these two reviewers did not reply or involved in discussions. Their major concerns are listed in the weakness part above. 
From my own judgement of reading the rebuttal, the rebuttal argues that the purpose of the paper is not as limited as the reviewer stated \\u2013 but that does not change the nature of the technical contribution. To address the question of novelty, arguments such as the challenge of directly applying MoE are required \\u2013 which is missing from the rebuttal. The reviewers have asked for additional experiments such as on large-scale real datasets or more datasets in existing literature, while the rebuttal only provided additional results on simulated datasets \\u2013 which is understandable during the limited-time rebuttal phase, but the concern is not fully addressed. Regarding the vague definitions of the terms motivation or contribution, the rebuttal has provided their mathematical formulation, while their mathematical formulations have not been clearly shown to be improved by the proposed method.\"}", "{\"summary\": \"The paper introduces MixNAM which improves neural additive model (NAMs) by combining with a mixture of experts (MoE) to capture the feature variability due to its interaction with other features. MixNAM uses multiple experts for each feature and uses a dynamic routing mechanism to assess and combine the relevance of different experts. This improves the prediction performance and gives the upper and lower bound of the feature attribution. The empirical evaluation demonstrates that MixNAM outperforms traditional additive models that ignore interactions and achieves comparable performance to complex black-box models while providing feature attributions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of combining MoE with NAM is novel and can improve the flexibility of NAM to capture feature interactions and maintain interpretability. The claim is supported by extensive experiments and benchmarks against various baselines, in terms of prediction performance and interpretability.\\n\\n2. MixNAM provides a point estimator of feature attribution but a range of it, which reflects the feature interaction.\", \"weaknesses\": \"1. If we look at Table 1, MixNAM is only significantly better than other interpretable methods (with FA) on the Housing and Year dataset by taking the standard error into consideration.\\n\\n2. The model is benchmarked only on tabular datasets due to the limited flexibility of NAM on structured datasets, even though it's still beneficial to discuss its potential usage on structured datasets, as they are the primary use cases of neural networks.\\n\\n3. The performance and interoperability of MixNAM are sensitive to the penalty parameter \\\\lambda. The paper would benefit from a more in-depth discussion on how to choose it.\", \"questions\": \"1. How was the hyper-parameter \\\\lambda selected? Was it cross-validated against prediction accuracy? It would be great to provide some heuristics to select it to balance the accuracy and interpretability.\\n\\n2. It would be great to compare the interpretation of MixNAM with any post-hoc feature attribution (e.g., SHAP) on XGBoost, which is a popular choice to gain interpretability on tabular data. What is the difference between these two options? 
A practitioner may like to know when they should use an interpretable MixNAM and when they should use XGBoost with post-hoc interpretation methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns of ethics.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer a4Jz (Part 2)\", \"comment\": \"> [...] the roles of these hyperparameters are not well explained. The results appear relatively consistent, and the impact of using a single expert is missing from the analysis. Additionally, there is no study of the values for (\\\\gamma) and (\\\\lambda) in equation (8) [...]\\n\\nThe roles of $C$ (\\\"the total number of experts\\\") and $K$ (\\\"the number of activated experts\\\") are clearly described in the main paper and in the appendix (lines 184, 213, 1004). These are well-established concepts in the Mixture of Experts (MoE) literature [1,2].\\n\\nWe respectfully disagree with the conclusion that the results are \\\"relatively consistent.\\\" As detailed in Table 8, configurations with optimal values of $K$ and $C$ significantly outperform settings with smaller numbers of activated experts (e.g., $K=2$). This demonstrates the importance of selecting appropriate values for these hyperparameters. Furthermore, using a single expert ($K=1$) corresponds to the behavior of traditional additive models, whose results are thoroughly presented and discussed in Table 1.\\n\\nRegarding $\\\\lambda$, we have provided a detailed analysis of its effect on performance and interpretability in Tables 2 and 3. The results demonstrate how varying $\\\\lambda$ allows us to balance accuracy and interpretability, with higher values emphasizing interpretability at the cost of performance.\\n\\nFor $\\\\gamma$, we opted not to focus on this parameter in our analysis, as it is not a novel component introduced by MixNAM. Instead, $\\\\gamma$ is a well-established feature in prior additive model research [2,3]. It was included to ensure a fair comparison with existing methods rather than as a focal point of our study.\\n\\n[1] Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. ICLR 2017.\\n\\n[2] Mixtral of experts. Arxiv 2024.\\n\\n[3] Neural Additive Models: Interpretable Machine Learning with Neural Nets. NeurIPS 2021.\\n\\n[4] Neural Basis Models for Interpretability. NeurIPS 2022.\\n\\n> As demonstrated by the authors in Appendix E, the proposed method can essentially be viewed as a generalized additive model with specific normalization. The added complexity in the approach is not well justified when compared to existing methods.\\n\\nOur analysis in Appendix E demonstrates that the relevance estimation in the routing mechanism of MixNAM, specifically described by Formula (6), follows a normalized Generalized Additive Model (GAM). However, it is important to clarify that this does not imply that MixNAM as a whole is equivalent to a normalized GAM. MixNAM extends beyond the scope of traditional additive models by incorporating a Mixture of Experts (MoE) framework, which dynamically captures feature interactions and variability through expert routing mechanisms.\\n\\n> Only six relatively \\\"old\\\" datasets are used in this study. A broader selection of available tabular datasets would provide a stronger and more comprehensive evaluation. Refer to L\\u00e9o Grinsztajn et al. 
\\u201cWhy do tree-based models still outperform deep learning on tabular data?\\u201d (July 2022) and Pieter Gijsbers et al. \\u201cAn Open Source AutoML Benchmark\\u201d (July 2019) for relevant data sources.\\n\\nThe datasets selected for this study are widely recognized benchmarks, encompassing a broad range of scenarios across regression and classification tasks. These datasets, including Housing, MIMIC-II, MIMIC-III, Income, Credit, and Year, provide diverse challenges in terms of complexity, feature types, and task objectives.\\n\\nWhile we acknowledge that additional datasets could be explored, we believe the results presented in Table 1 sufficiently demonstrate the effectiveness of MixNAM. The model consistently outperforms traditional additive models across all datasets, highlighting its capability to balance interpretability and performance.\\n\\n> The authors write, \\\"The bounds represent the maximum and minimum potential outputs for a feature.\\\" It would be helpful to clarify what is meant by \\\"possible\\\" in this context. Possible according to what criteria? These bounds are directly affected by the number of experts. How is that important?\\n\\nThe term \\\"possible\\\" in this context refers to the range of potential contributions of a given feature value $x_i = a$ to the final output. Specifically, this contribution is defined as: \\n$$F(x_1, x_2, \\\\cdots, x_n|x_i=a) - E_{x_i}(F(x_1, x_2, \\\\cdots, x_n)),$$ which depends on the values of other features $x_j$ ($j \\\\neq i$). The bounds are determined by the maximum and minimum values of this contribution term over all possible configurations of the other features. These bounds provide a rigorous characterization of the variability in feature contributions.\\n\\nIt is important to note that the number of experts does not affect the bound estimation itself but influences the precision of the output predictions within the determined bounds. By using multiple experts, MixNAM captures a more nuanced representation of variability within the range defined by the bounds.\"}", "{\"comment\": \"Thank the authors for their detailed response. My questions have been mostly solved. I will keep my original positive score.\"}", "{\"title\": \"Summary Response to All Reviewers\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your thoughtful reviews and constructive feedback on our manuscript. We have carefully addressed your comments and made corresponding revisions to the paper. Below, we summarize the major concerns raised and how they have been addressed in the updated manuscript:\\n\\n- **Motivation of MixNAM** (Reviewers a4Jz, xFNJ): As described in the Introduction, the primary motivation for this work arises from the limitations of existing additive models, which often produce less accurate predictions than black-box models. Traditional additive models are inherently unable to capture multimodality, as they provide only a single deterministic output value for each input feature value. This limitation is clearly demonstrated through the simulation studies presented in Section 4.4.\\n- **Mathematical definition of variability captured by MixNAM** (Reviewer a4Jz, xFNJ): Beyond the conceptual explanation provided in the Introduction, we have added a formal mathematical definition of \\\"variability\\\" in Appendix E.2. This definition measures how other features influence the contribution of a given feature $x_i$ to the final output $\\\\hat{\\ud835\\udc66}$. 
By modeling such variability with MixNAM, we are trying to bridge the gap between interpretable but lower-performing additive models and the more powerful but opaque black-box models.\\n- **Discussion of post-hoc feature attribution methods** (Reviewer MzpU): We have included a new paragraph in Appendix D to highlight the uniqueness of MixNAM compared to post-hoc feature attribution methods such as LIME and SHAP. Unlike these methods, MixNAM provides a transparent, global understanding of feature contributions directly through its model design, avoiding the limitations of approximations inherent in post-hoc approaches. \\n- **Additional experiments on highly sparse data** (Reviewer NxyM): We have conducted an additional simulation study, the results of which are presented in Appendix E.1. These experiments demonstrate MixNAM\\u2019s ability to identify multimodality even in extreme cases of high sparsity.\\n- **Additional experiments on larger-scale multimodal data** (Reviewers NxyM, xFNJ): We have also included the results of another simulated study in Appendix E.2. This study showcases MixNAM\\u2019s capability to handle larger-scale datasets with pronounced multimodality and variability in feature contributions.\\n\\nWe hope these revisions address your concerns and enhance the overall quality of the paper. We would also appreciate it if you could consider updating your scores based on the clarified contributions and the revised manuscript. Please let us know if you have any further concerns or suggestions.\\n\\nThank you once again for your time, effort, and valuable feedback!\\n\\nWarm regards,\\nAuthors\"}", "{\"title\": \"Response to Reviewer MzpU (Part 1)\", \"comment\": \"We sincerely appreciate your valuable feedback and thoughtful suggestions. Here are our detailed responses to your questions and concerns.\\n\\n### Weaknesses:\\n> If we look at Table 1, MixNAM is only significantly better than other interpretable methods (with FA) on the Housing and Year dataset by taking the standard error into consideration.\\n\\nThis discrepancy arises because we performed experiments with 10 different seeds for the Housing and Year datasets, while for other datasets, we followed previous research [1,2] and used five-fold cross-validation. Cross-validation introduces larger standard deviations due to differences in data splits. Although statistical significance may appear less clear due to these variations, Table 1 demonstrates that MixNAM performs on par with complex models without feature attribution (FA), which are more expressive and powerful than traditional additive models.\\n\\n[1] NODE-GAM: neural generalized additive model for interpretable deep learning. ICLR 2022.\\n\\n[2] Neural basis models for interpretability. NeurIPS 2022.\\n\\n> The model is benchmarked only on tabular datasets due to the limited flexibility of NAM on structured datasets, even though it's still beneficial to discuss its potential usage on structured datasets, as they are the primary use cases of neural networks.\\n\\nWe assume you meant \\\"unstructured data\\\" rather than \\\"structured data.\\\" Please correct us if this interpretation is wrong.\\n\\nIn the existing literature on additive models, evaluations primarily focus on tabular data, while applications to other modalities, such as text or images, are rare [1,2,3]. 
This is because the core capability of additive models lies in interpreting features through shape plots, which illustrate how predictions change as feature values monotonically increase or decrease. Additive models are difficult to evaluate on raw text or image data, where features and their monotonic relationships are challenging to define.\\n\\nFor example, NBM [2] tested its performance on image data by using concept bottleneck models to preprocess images into tabular data. However, the effectiveness of the overall system was limited by the quality of the extracted concepts, which may not fully capture the information in the original images [4,5]. Therefore, we follow the mainstream in additive model research and evaluate MixNAM using tabular data. We have discussed the potential generalization of MixNAM to other modalities as a future direction in Appendix L (Appendix K).\\n\\n[1] Agarwal R, et al. Neural additive models: Interpretable machine learning with neural nets. NeurIPS 2021.\\n\\n[2] Radenovic F, et al. Neural basis models for interpretability. NeurIPS 2022.\\n\\n[3] Chang CH, et al. NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning. ICLR 2022.\\n\\n[4] Koh PW, et al. Concept bottleneck models. ICML 2020.\\n\\n[5] Margeloiu A, et al. Do concept bottleneck models learn as intended?. 2021.\\n\\n> The performance and interoperability of MixNAM are sensitive to the penalty parameter \\\\lambda. The paper would benefit from a more in-depth discussion on how to choose it.\\n\\nTable 2 and Table 3 present our further analysis of the $\\\\lambda$ selection from both qualitative and quantitative perspectives. They illustrate how $\\\\lambda$ effectively balances interpretability and performance in MixNAM. Based on our analysis of $\\\\lambda$ on the Housing dataset (Section 4.3), we selected $\\\\lambda=0.1$ as the default value in our main experiments, which turns out to perform robustly on all datasets with improved accuracy (Table 1) and retained interpretability (Figures 9-12).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
BbZy8nI1si
Learning Molecular Representation in a Cell
[ "Gang Liu", "Srijit Seal", "John Arevalo", "Zhenwen Liang", "Anne E Carpenter", "Meng Jiang", "Shantanu Singh" ]
Predicting drug efficacy and safety in vivo requires information on biological responses (e.g., cell morphology and gene expression) to small molecule perturbations. However, current molecular representation learning methods do not provide a comprehensive view of cell states under these perturbations and struggle to remove noise, hindering model generalization. We introduce the Information Alignment (InfoAlign) approach to learn molecular representations through the information bottleneck method in cells. We integrate molecules and cellular response data as nodes into a context graph, connecting them with weighted edges based on chemical, biological, and computational criteria. For each molecule in a training batch, InfoAlign optimizes the encoder's latent representation with a minimality objective to discard redundant structural information. A sufficiency objective decodes the representation to align with different feature spaces from the molecule's neighborhood in the context graph. We demonstrate that the proposed sufficiency objective for alignment is tighter than existing encoder-based contrastive methods. Empirically, we validate representations from InfoAlign in two downstream applications: molecular property prediction against up to 27 baseline methods across four datasets, plus zero-shot molecule-morphology matching. The code and model are available at https://github.com/liugangcode/InfoAlign.
[ "Molecular Representation Learning", "Drug Discovery", "Cell Morphology", "Gene Expression" ]
Accept (Poster)
https://openreview.net/pdf?id=BbZy8nI1si
https://openreview.net/forum?id=BbZy8nI1si
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xvmVV4asLF", "wSFYdSzmMb", "sDTlBuDCsy", "rXONhC0bZD", "pCr7QOM0NH", "oT9Qo4UF1b", "lU6jz72U6L", "j7TrWlmHHP", "icMnxv97pl", "hO6BzvnZtC", "fe4CCVwz9L", "ezdpTsJkdp", "dq7xn7jEvM", "V6ue8vlmb5", "UzOeC8OU8M", "U1CTQvcZ3C", "KsR5yjMqGt", "HfDEcEYs8F", "HNqhjgNPC6", "HFzTa4EfsD", "F25Mg26Q8H", "CU06tNqwlv", "Ayh7nk0U3v", "9suPocIruS", "82M7YIZWDa", "7FvkzYecPX", "6OVTh3GFxP", "5yYMxB7UU9", "5sJWPka4Aq", "4awvGMrXaW", "2yEnHnFanf" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment" ], "note_created": [ 1732115601675, 1732079981164, 1732454745076, 1732332460190, 1731955745278, 1730747202291, 1732307835877, 1732550226982, 1732116015982, 1731992252874, 1732550023102, 1731956058206, 1732081354256, 1732056410094, 1731955245279, 1731954858110, 1730330280966, 1732008829321, 1731955456245, 1732538252980, 1729684523247, 1732166039284, 1732017694072, 1734741807147, 1732001434063, 1732222104542, 1732186837169, 1731953968812, 1737523675640, 1730496187793, 1731954147375 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_hxs4" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_hxs4" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_ALhV" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_VNuN" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_VNuN" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_Sxqu" ], [ "ICLR.cc/2025/Conference/Submission4989/Area_Chair_fdaj" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4989/Reviewer_ALhV" ], [ "ICLR.cc/2025/Conference/Submission4989/Authors" ] ], "structured_content_str": [ "{\"title\": \"Kindly request a closer review of the contribution regarding representation learning instead of full-tuning\", \"comment\": \"Thank you for your thoughtful comments and questions. 
To clarify, we would like to briefly reiterate the key points previously discussed:\\n\\n- The primary focus of the paper is representation learning, with an emphasis on the quality of molecular representations.\\n- We have ensured fair evaluation by using consistent fine-tuning methods with MLP task predictors.\\n\\nWe are happy to continue the discussion on full fine-tuning, at the same time, we wish to maintain the focus on representation learning. To summarize the distinction:\\n\\n- Full fine-tuning optimizes the encoder parameters for downstream tasks, which may lead to biased evaluation due to the use of stronger encoders. \\n- In contrast, representation learning aims to achieve high-quality representations even with simpler encoders, such as the GIN used in this paper.\\n\\n**Fully fine-tuning and representations can be studied separately**. Advancements in both areas can contribute to the field. Real-world virtual screening is more complex than relying on a single representation or a fully fine-tuned model. Ensemble methods that combine both fine-tuning and representations offer advantages in both areas. **However, understanding the rationale behind each component is essential. This paper focuses on representation learning with consistent, fair evaluations. In this context, InfoAlign has shown strong performance through extensive experiments in Table 1/2 and Figure 3, including 27 baselines.**\\n\\nAs requested by the reviewer, we provide new results **extending** beyond the main contribution of representation learning, focusing on fully fine-tuning. The results show that the simpler GIN encoder in InfoAlign outperforms the Transformer used in UniMol, even in fine-tuning scenarios, highlighting the superiority of InfoAlign as a pretraining method.\"}", "{\"comment\": [\"The conformation can be directly computed from the SMILES, and in molecular representation learning, we do not consider this as information from another modality. Moreover, Uni-mol only requires atomic coordinates and does not need any additional information.\", \"The application scenario for molecular property prediction is to provide preliminary screening or priority ranking for downstream experiments in the wet lab, which can take days or even months to complete, especially when these properties are related to biology. No one cares whether your model takes one second or one minute to make predictions, as this is negligible compared to the costs associated with the subsequent wet experiments in the wet lab.\"]}", "{\"comment\": \"Thank you for the clarifications. I would like to keep the current scores.\"}", "{\"title\": \"Thank you for your support and for raising the score\", \"comment\": \"We appreciate the reviewer\\u2019s recognition of our rebuttal and insightful observations. The context graph, particularly edges based on computational criteria, could be improved in future work. For example, incorporating molecular properties, rather than just structural similarities, could address scenarios like activity cliffs, where molecules with similar features may differ in properties.\\n\\nRegarding model robustness, we aim to extract concise representations based on the information bottleneck principle, as illustrated in Figure 1. The loss function (Eq. 3) effectively extracts minimal sufficient information from two different modalities. 
In contrast to previous contrastive methods, which lack a term for minimal information and provide looser bounds on sufficient information, incorporating additional modalities could further improve the generalization of representations.\\n\\nWe are encouraged that your concerns have been addressed. Thank you again for your support and for raising the score!\"}", "{\"title\": \"Author Response [3/3]\", \"comment\": \"## Q1: Results in table 1\\n\\nThank you for the question. There are 32 tasks for Broad6K, and the value 3.1 means that one task (3.125% = 1/32 * 100) is frequently predicted with high AUC (>90%). The task ID is 274_752. Using metadata from Broad6K, this task involves an MLPCN LGR2 assay, which aims to identify compounds targeting the LGR2 GPCR protein. This assay is an antagonist of the LGR2 target which will then compromise survival for pests like ticks or mosquitoes.\\n\\nRegarding resolution, we observe that 3.1 occurs frequently with a threshold >85%, and the current resolution is sufficient to reflect the high AUC for this specific task.\\n\\n## Q4: Weighting of the paper\\n\\nThanks for your suggestions. Currently, the main text consists of 2.8 pages for background (Introduction, Related Work, problem definition), 3 pages for methods (1.7 pages for methods, 0.5 pages for theoretical outcomes, and 0.8 pages for model implementation), and 4.2 pages for experimental results.\\n\\nFor the theory, Section 4.3 presents the key theoretical outcomes in less than half a page, with further details in Appendix B (formerly Appendix A).\\n\\nWe have added more details on model construction, with references to the appendix for additional information. The code provides more clarity than text descriptions alone. We have included the code with checkpoints for easy reproducibility in the supplementary materials and will add the link in the camera-ready version.\\n\\nWe have updated Figures 3, 4, 5, and Table 4 for better clarity. We also included random walk analysis in Appendix D.5 and relocated some related content to the appendix to accommodate page limitations.\\n\\nThe current page allocation for background, methods, and experiments is 2.8:3:4.2, which we believe is well-balanced, with a emphasis on experimental results. We appreciate your comment and welcome any further discussion on adjusting the paper\\u2019s content for better presentation.\\n\\n## Q5: Figure 3\\n\\nWe have updated the caption for improved clarity. Figure 3 compares the relative performance of two groups of models: (1) representations using single-modal information (Single Rep.) and (2) molecular representations from multi-modal alignment methods (Aligned Rep.). The top bar compares models using single-modal information from the best baselines in Molecular Structure, Cell Morphology, and Gene Expression. The bottom bar compares three models that use multimodal information.\\n\\n## Q6: Table 3 right part\\n\\nWe have separated the figure and table, resulting in the current Table 3 and Figure 4. We also updated the distribution figure to be a histogram and added x-axis labels. We appreciate your suggestion, and with these changes, we believe Figure 4 now better illustrates the observations in Section 6.2.2.\\n\\n## Q7: Figure 4 (a)\\n\\nIn Figure 4(a) (now Figure 5 (a)), $\\\\beta$ refers to the hyperparameter from the second term in Eq (3), not the learning rate. 
The figure shows how pretraining losses change with varying regularization strengths, which may not be as clearly represented in tables.\\n\\nFrom the figure, we observe that pretraining loss may serve as an indicator for selecting $\\\\beta$. Specifically, for $\\\\beta = 1e-9$ and $\\\\beta = 1e-12$, lower pre-training losses correspond to better downstream performance.\\n\\nWe have updated the figure to highlight the pretraining losses for $\\\\beta = 1e-9$ and $\\\\beta = 1e-12$ in the revision to address your concerns.\\n\\n## Q8: Figure 4 (a): walk length\\n\\nThanks for your comments. Each result is based on ten runs, with the points representing the mean value and error bars showing one standard deviation. We have updated the figure and caption for clearer representation.\\n\\nFigure 4(b) (now Figure 5(b)) is primarily intended to support our claim of robust performance across different hyperparameter choices for $L$, as confirmed by the reviewers. As discussed in our response to W3/Q9, we do not observe any properties indicating that length 8 is an outlier. While the error bar may not fully capture the variance, we believe the variation at length 8 does not affect the comparison and observation with the best baseline.\\n\\nFurther discussions on the other questions about the figure are provided in the responses to W3/Q9.\"}", "{\"summary\": \"The paper introduces a novel approach called Information Alignment (InfoAlign) for learning molecular representations by integrating molecular structure, cell morphology, and gene expression data. The method leverages the information bottleneck principle to optimize a molecular graph encoder and multiple MLP decoders, aiming to achieve minimal yet sufficient molecular representations.\\n\\nThe authors demonstrate the effectiveness of InfoAlign through extensive experiments on molecular property prediction and zero-shot molecule-morphology matching, showing superior performance compared to 27 baseline methods across four datasets.\\n\\nI find the paper of good quality in general. Besides, the problem it is tackling is meaningful but less explored. I suggest an acceptance to advocate this direction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The approach is well-motivated. Applying information bottleneck to this problem for learning minimal yet sufficient molecular representations seems a valid match of theory and real-world problem.\", \"The approach is comprehensively evaluated by comparisons across different methods and even paradigms. InfoAlign demonstrates improved accuracy over up to 27 baseline models across four datasets.\", \"The paper is well-organized and clearly presented.\", \"In addition to empirical evidence, the paper provides theoretical proofs to support the advantages of the proposed method.\", \"The provided supplementary material contains code, dataset and checkpoints. This suggests good reproducibility of the results in the paper.\"], \"weaknesses\": [\"As I mentioned in the summary, the problem is meaningful yet less tackled in the AI community. To my eyes, it is mostly due to the missing prerequisites of biological knowledge. The paper provides some explanation of the problem in the introduction, but it would be much more helpful if more context could be provided (maybe in the appendix).\", \"Line 215: The motivation for computing edge weights on random walk paths is unclear, and no empirical evidence is provided to support it. 
Since the context graph incorporates data from three modalities, the edges likely exhibit strong heterogeneity. Is there evidence suggesting that a cumulative product of edge weights effectively captures dependency or similarity between nodes?\", \"Line 303: The approach for avoiding noisy edges in computations lacks motivation and an ablation study. Providing more detail here would offer valuable insights for researchers interested in extending InfoAlign to new contexts.\"], \"questions\": [\"What are the minimum data requirements for cell morphology and gene expression to effectively apply InfoAlign? How does the method perform when these data are sparse or incomplete?\", \"Given the promising performance of structure-based pretrained GNNs, have the authors considered using representations from these pretrained models instead of Morgan fingerprints as molecular features? This might leverage the strengths of both approaches.\", \"How does the computational complexity of InfoAlign compare to existing methods, and what are the practical limitations in terms of dataset size and computational resources?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> W1: Random pre-specified graph\\n\\nThank you for a convincing rebuttal, demonstrating the utility of a valid graph. I found it interesting that graph is relatively resilient to random perturbation. My suspicion current entity graphs are incomplete and some of the conditions are spurious as they are observed under specific conditions. \\n\\n| Fine-tuning Method | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|-------------------------------|---------------|---------------|--------------|---------------|\\n| Representation from UniMol | 76.8\\u00b10.4 | 65.4\\u00b10.1 | 64.6\\u00b10.2 | 55.8\\u00b12.8 |\\n| 50% Random Edges | 77.48\\u00b10.71 | 62.52\\u00b10.16 | 63.76\\u00b10.32 | 71.53\\u00b12.51 |\\n\\nBiogen task seems particularly sensitive to false edges.\\n\\n> W4: Ablation studies on removing data\\n\\nModel robustness to removal of individual data points is interesting as well. Do the authors think this is due to the shared information between different modalities? \\n\\nIn my opinion this is an effective rebuttal and so I will raise my score. I've read the other reviewer's concerns regarding fine-tuning. I believe lack of fine-tuning is a fair criticism, but the authors do address it with an experiment demonstrating improvement over previous methods, albeit limited in scope. Relative strength of fingerprint methods is a known phenomenon in the field, however this does not invalidate the utility of alternative approaches.\\n\\nI think the paper presents a novel approach to an important problem and convincingly demonstrates improvements in quality of learned representations.\"}", "{\"title\": \"Thank you for raising the score\", \"comment\": \"Thank you for raising the score. We sincerely appreciate the reviewer\\u2019s thoughtful engagement and constructive feedback throughout the discussion. We are pleased that the reviewer\\u2019s concerns have been addressed and are grateful for the support of our work!\"}", "{\"title\": \"Regarding the reviewer\\u2019s new comments\", \"comment\": \"> The conformation can be directly computed from the SMILES, and in molecular representation learning, we do not consider this as information from another modality. 
Moreover, Uni-mol only requires atomic coordinates and does not need any additional information.\\n\\nWhile it is possible to generate a conformation from a SMILES string using tools such as RDKit, this process introduces additional assumptions and approximations. SMILES itself does not contain explicit 3D spatial information, so generating a conformation requires computational steps like energy minimization or force-field modeling, which depend on assumptions about molecular geometry.\\n\\n> The application scenario for molecular property prediction is to provide preliminary screening or priority ranking for downstream experiments in the wet lab, which can take days or even months to complete, especially when these properties are related to biology. No one cares whether your model takes one second or one minute to make predictions, as this is negligible compared to the costs associated with the subsequent wet experiments in the wet lab.\\n\\nIn real-world applications with millions of virtual screening candidates, inference time becomes critical. Even a 1-second inference time results in 11 days for one million candidates. If more accurate 3D information is required, methods like DFT calculations or experimental techniques (e.g., X-ray crystallography) can take much longer to generate accurate atom coordinates, potentially making the inference time unacceptable.\\n\\n> The supplementary experimental results in the authors' rebuttal have demonstrated that full-finetuning can significantly enhance the performance of the baselines. However, the paper does not present the performance of fully finetuned baselines. The presented performance of pretrained-GNNs is substantially underestimated. \\n\\nBased on results from ChEMBL2K and Broad6K, fully fine-tuning does not always lead to improvement. \\n\\nReal-world deployment of accurate virtual screening is complex, with ensemble methods offering advantages in both fine-tuning and representations. However, understanding each component's rationale is crucial, and this paper focuses on the representation learning aspect.\\n\\n> This means the paper lacks sufficient evidence to show that the proposed methods, which require more complex multi-modal data construction, demonstrate a clear advantage over simple single-molecule pretraining.\\n\\n**InfoAlign is a representation learning method, shown to outperform 27 different representations from various methods, as demonstrated in the original paper**\\n\\nAs an extension, we have added new experiments comparing the encoders from InfoAlign with UniMol in fully-tuned scenarios, where the InfoAlign model also shows consistently improved performance.\\n\\nRegarding data construction complexity, the contribution of this paper is not about generating new data points for cell morphology or gene expression. We have curated existing data for pretraining, which does not involve biological experiments.\\n\\nRegarding algorithmic complexity, the random walk approach is efficiently implemented using a sparse graph and introduces minimal computational complexity in both time ($\\\\mathcal{O}(k)$) and space ($\\\\mathcal{O}(M)$), where $k$ is the average degree and $M$ is the number of edges.\\n\\n> Additionally, the supplementary experimental ... pretrained-GNNs.\\n\\n**The new observations regarding full fine-tuning should not alter our current conclusions on representation learning.**\\n\\nWe apologize for any confusion. 
We chose UniMol for comparison with InfoAlign as it seems to be the reviewer\\u2019s preferred baseline. If the reviewer has other suggestions, we would be happy to discuss them further. However, we believe such discussions do no affect the main focus of the paper on representation learning.\\n\\n> According to Occam's razor ... challenging than that of single-molecule data.\\n\\n**For evaluating representation learning**, we agree with the principle of Occam's razor, as full fine-tuning may lead to biased evaluation due to stronger encoders. In contrast, representation learning aims to achieve high-quality representations even with simpler encoders.\\n\\nRegarding complexity, both data and algorithmic complexity are not the concerns, as discussed earlier.\\n\\n> Even if the authors were to update ... ICLR submission process.\\n\\nWe appreciate the reviewer\\u2019s suggestion and are happy to discuss InfoAlign\\u2019s potential in fully-tuned settings, as it could enrich the conversation. However, we respectfully note that such a discussion should not shift the main focus of the paper, which is on representation learning, including but not limited to the results. \\n\\nExploring InfoAlign in fully-tuned settings should be considered an extension of the current work, and we believe such discussions would require only minor modifications to the paper.\"}", "{\"title\": \"Further Questions\", \"comment\": \"1. **Decoding for Multiple Neighbors**\\n \\n Can I consider the decoding process as the procedure of autoregressively generating the sampled paths?\\n\\n2. **Finetuning**\\n \\n It does not make sense to freeze the encoders as the original paper of UniMol[1] utilized full finetuning on its downstream tasks (some of the datasets even have less data than those used in the article), so do other molecular pretraining works such as [2]. More importantly, it does not make sense that the Morgan fingerprint outperforms most of the molecular pretraining models, as shown in Table 1 (even outperforming all on ChemBL2k), as this phenomenon challenges the meaning of the entire field of molecular pretraining. The performance of these molecular pretraining models has been severely underestimated. The authors should demonstrate the performance of the fully finetuned version and also update the results in response to \\\"W2, Q4, feature concatenation\\\".\\n\\n\\n[1] Zhou G, Gao Z, Ding Q, et al. Uni-mol: A universal 3d molecular representation learning framework[J]. 2023.\\n\\n[2] Rong Y, Bian Y, Xu T, et al. Self-supervised graph transformer on large-scale molecular data[J]. Advances in neural information processing systems, 2020, 33: 12559-12571.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response. We sincerely appreciate the reviewers' insightful comments provided in the rebuttal and believe we have addressed them point-by-point. If there are any remaining concerns that the reviewers feel have not been fully addressed, we would be grateful for the opportunity to discuss them further during the discussion phase.\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely appreciate the reviewer's thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. We have also revised the main text and appendix to incorporate the reviewer's valuable feedback, with all changes clearly highlighted in blue for ease of reference. 
Should any concerns remain, we remain fully committed to addressing them promptly and thoroughly.\\n\\n## W1, Q1: Definition of neighbors \\n\\nAs stated in Eq (3) ($\\\\sum_{v\\\\in\\\\mathcal{P}_x}$), the neighbor $v$ refers to the node in the random walk path. We have revised the text for better clarity.\\n\\n## W1, Q2 Decoding for multiple neighbors\\n\\nThanks for your question. If multiple neighbors share the same data type, such as cell morphology, the latent representation of the molecule is passed through a single decoder for that data type. Based on Eq. (3), the decoder's output is then aligned with the different features associated with $v_1, v_2, \\\\dots, v_n$, with weights assigned according to $\\\\alpha(v_i \\\\mid \\\\mathcal{P}_x)$ ($i=1, 2, \\\\dots, n$). \\n\\n## W2, Q3: Generation of gene expression and cell morphology\\n\\nThanks for your question. Most explanations are provided in the Introduction and Related Work sections. We apologize for not referencing them when introducing the dataset and have revised the paper (Section 6.1.1, Appendix D.1) to address your concern. Below is a brief summary:\\n\\nMolecules act as perturbations that yield perturbed cell states; those cell states can be measured in two ways relevant here: as gene expression values for a thousand or more genes [1] and/or microscopy Cell Painting images [2], which are represented by a thousand or more morphology features [3]. This is how a gene expression and morphology profile are associated with a given molecule. \\n\\nGenerating cell morphology and gene expression features requires extensive and costly experiments, so downstream tasks like ToxCast and Biogen3K may not always include these features.\\nAs described in Appendix D.1, ChEMBL2K is a subset overlapping with the existing JUMP-CP dataset [4], the largest cell painting dataset for cell morphology features. Relevant gene expression data for the molecules in ChEMBL2K are sourced from [6]. The available data for both types are detailed in Table 5 in the appendix.\\n\\n## W2, Q4: Feature concatenation\\n\\nThanks for your comments. We appreciate the helpful analysis and have now included it in Appendix C.3 and Table 7. The requested new results are provided in the table below:\\n\\n| Dataset | UniMol | UniMol + Other Features | InfoAlign |\\n|------------|------------|--------------------------|-------------|\\n| ChEMBL2K | 76.8\\u00b10.4 | 77.51\\u00b10.08 | 81.3\\u00b10.6 |\\n| Broad6K | 65.4\\u00b10.1 | 66.43\\u00b10.49 | 70.0\\u00b10.1 |\\n\\nWe observe that concatenating UniMol representations with cell morphology and gene expression features improves performance in prediction tasks. However, it still does not match the performance of InfoAlign, which achieves the best results by aligning molecular representations with cell morphology and gene expression features during pretraining, rather than in the downstream stage.\\n\\n## W2, Q4: Fine-tuning\\n\\nThanks for your comment. For all pre-trained methods, the encoder is frozen during fine-tuning, and only the MLP is trained for prediction. We have now explained this in Section 6.1 and Appendix C.3.\\n\\n## Reference\\n\\n[1] A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles. Cell. 2017.\\n\\n[2] Three million images and morphological profiles of cells treated with matched chemical and genetic perturbations. Nature Method. 2024.\\n\\n[3] Optimizing the Cell Painting assay for image-based profiling. Nature Protocols. 
2023.\\n\\n[4] JUMP Cell Painting dataset: morphological impact of 136,000 chemical and genetic perturbations.\\n\\n[5] Drug-induced adverse events prediction with the LINCS L1000 data. Bioinformatics. 2016.\"}", "{\"comment\": [\"The supplementary experimental results in the authors' rebuttal have demonstrated that **full-finetuning can significantly enhance the performance of the baselines**. However, the paper **does not present the performance of fully finetuned baselines**. The presented performance of pretrained-GNNs is **substantially underestimated**. This means the paper lacks sufficient evidence to show that the proposed methods, which require more complex multi-modal data construction, demonstrate a clear advantage over simple single-molecule pretraining.\", \"Additionally, the supplementary experimental results indicate that after full finetuning, the proposed method only provides a **marginal performance improvement** on 3/4 of the datasets used in the paper compared to Uni-Mol, not to mention that Uni-Mol is not the most outstanding among all pretrained-GNNs. According to Occam's razor, which is agreed upon by both reviewers and authors, it is difficult to justify the necessity of the proposed complex multi-modal pretraining, especially when the collection of biological experimental data is much more challenging than that of single-molecule data.\", \"Even if the authors were to update the performance of all pretrained-GNNs, **nearly half of the experimental results would need to be modified**, and consequently, the experiment analysis would also require extensive revision. If the authors were to undertake such a large-scale rewrite of the paper, the review provided by the reviewer would no longer be applicable to this article, necessitating new reviewers and a new round of peer review, which would violate the ICLR submission process.\", \"For the above three reasons, I will downgrade my score to reject.\"]}", "{\"title\": \"New results\", \"comment\": \"Thank you for your suggestions. We have updated Appendix D.3 with Table 7, providing new experiments comparing fully fine-tuned UniMol, InfoAlign, and various representations. Here, we update the response in \\\"W2, Q4: Feature Concatenation.\\\" The new results are presented in the table below:\\n\\n| Fine-tuning Method | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|--------------------------------|-----------|----------|-----------|-----------|\\n| Representation from UniMol | 76.8\\u00b10.4 | 65.4\\u00b10.1 | 64.6\\u00b10.2 | 55.8\\u00b12.8 |\\n| Representation from UniMol and Other Features | 77.5\\u00b10.1 | 66.4\\u00b10.5 | NA | NA |\\n| Fully-tuned UniMol | 78.9\\u00b10.2 | 65.1\\u00b11.0 | 71.3\\u00b10.6 | 43.6\\u00b10.3 |\\n| Representation from InfoAlign | 81.3\\u00b10.6 | 70.0\\u00b10.1 | 66.4\\u00b11.1 | 49.4\\u00b10.2 |\\n| Fully-tuned InfoAlign | 80.1\\u00b10.9 | 69.2\\u00b10.7 | 72.0\\u00b10.5 | 42.8\\u00b11.1 |\\n\\nFrom the table, we observe that fully fine-tuning benefits both UniMol and InfoAlign on ToxCast and Biogen3K datasets. While fully fine-tuning improves the representations of UniMol on ChEMBL, InfoAlign representations, with only MLP decoder tuning, achieves the best performance. On Broad6K, fully fine-tuning is less effective for both InfoAlign and UniMol compared to tuning only the MLP. 
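To make the comparison concrete, the two protocols differ only in which parameters the optimizer is allowed to update. The sketch below is illustrative only (PyTorch-style, with hypothetical `build_probe`/`Probe` names; the 4x hidden width mirrors the three-layer MLP head we use for all representations), not our exact training script.

```python
import torch.nn as nn

def build_probe(encoder: nn.Module, rep_dim: int, num_tasks: int, full_finetune: bool):
    """Set up either frozen-representation probing or full fine-tuning.

    full_finetune=False: the pretrained encoder is frozen and only the MLP head
    is trained (the "Representation from ..." rows in the table above).
    full_finetune=True: encoder and head are updated jointly (the "Fully-tuned" rows).
    """
    head = nn.Sequential(                      # 3-layer head: rep_dim -> 4*rep_dim -> num_tasks
        nn.Linear(rep_dim, 4 * rep_dim),
        nn.ReLU(),
        nn.Linear(4 * rep_dim, num_tasks),
    )
    if not full_finetune:
        for p in encoder.parameters():
            p.requires_grad = False            # representations stay fixed
        trainable = list(head.parameters())
    else:
        trainable = list(encoder.parameters()) + list(head.parameters())

    class Probe(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder, self.head = encoder, head

        def forward(self, x):
            return self.head(self.encoder(x))

    return Probe(), trainable
```

Only the `trainable` parameter list passed to the optimizer changes between the two settings.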
These results suggest that, if resources allow, fully fine-tuning should be preferred for better performance, especially for UniMol, which requires more time and resources due to the use of 3D molecular structures. If resources are limited, InfoAlign's representation provides a strong alternative without full fine-tuning.\\n\\nWe hope these new results help address the reviewer\\u2019s question:\\n\\n> \\\"If you have collected so much multimodal data and constructed complex pre-training tasks, but they still cannot surpass the work of others that use single-modality pretraining plus full fine-tuning, then why go through all this trouble to collect this data? I only need to use single-modality pre-training plus full fine-tuning, which is more in line with Occam's Razor.\\\"\\n\\nThe new results demonstrate the value of molecular pretraining with multimodal data from cellular responses, both for representation learning and full model fine-tuning. Additionally, we believe UniMol also highlights the value of collecting other multimodal data, such as molecular 3D structures, for improving pretraining.\\n\\n> \\\"Moreover, single-molecule data is easier to collect and more abundant than biological data, making it easier for people to scale up their models. So, if your method cannot outperform an approach that only uses molecular pre-training plus full fine-tuning, why would anyone choose to use your method, which lacks both scalability and performance?\\\"\\n\\nAs previously mentioned, the new results highlight the value of cellular response data. We also observe that InfoAlign is more efficient than UniMol during fully fine-tuning, as UniMol requires computing 3D molecular structures based on RDKit to obtain atomic coordinates. For example, on the ToxCast dataset, UniMol takes around 46-50 seconds per epoch, while InfoAlign only requires 2-3 seconds.\\n\\n> The provided references [2, 3] only demonstrate that fingerprints can indeed outperform simple GNNs such as GCN and GAT on some tasks, but they do not prove that fingerprints possess the capability to match pretrain-GNNs. Moreover, in terms of the magnitude of improvement, pretrain-GNNs can bring about a very significant enhancement [1], which is larger than the performance gap shown in [2]. Therefore, this does not constitute a valid reason for not comparing fully finetuned models.\\n\\nThank you for your insightful observations. We have provided new results comparing fully fine-tuned models. In our previous response, our goal was to show that fingerprints can serve as a good baseline. We believe our statement, \\\"Morgan fingerprints do not challenge the existence or progress of molecular pretraining,\\\" aligns with your observations.\\n\\nWe appreciate your constructive and insightful feedback. We hope these new results address your concern regarding the comparison of fully fine-tuned models. We are happy to address any remaining concerns.\"}", "{\"title\": \"Author Response [1/3]\", \"comment\": \"We are pleased that you found our work interesting and appreciate your thoughtful comments and suggestions for improving the paper. We have revised the paper accordingly and provided a point-by-point response. We believe after revision, the current structure of the paper is well-balanced. If any concerns remain, we welcome further discussion to address them. 
Thank you again for your constructive feedback.\\n\\n## W1, Q3: Model clarity\", \"the_model_setup_is_described_in_line_307_312\": \"we use a Graph Isomorphism Network (GIN) as the encoder and an MLP as the decoder. **All details about model configurations, including layers and hidden dimensions, are provided in the supplementary code, along with a pretrained checkpoint** for easy reproducibility. Below, we report the hyperparameters from the code:\\n| Parameter | Value |\\n|-----------------------------------------------------------------|--------------|\\n| hidden dimension | 300 |\\n| normalization layer | batch norm |\\n| number of layers | 5 |\\n| node-to-graph readout | sum |\\n| $\\\\beta$ (in Eq. (3) second term) | 1.0e-09 |\\n| $L$ (Walk length) | 4 |\", \"the_mlp_consists_of_three_layers\": \"an input dimension of 300, a hidden layer with a dimension of 4 \\u00d7 300, and an output layer corresponding to the feature/task dimension. This MLP architecture is generally applied on all representations.\\n\\nWe apologize for the oversight in referencing the code. We have updated Section 5 and Appendix B.3 with the relevant model information. In our attempt to stay anonymous, the complete code is available in the supplementary materials and we will include the code link in the camera-ready version. \\n\\n## W2: Impact of the MI bottleneck\\n\\nThis work is motivated by the principle of the information bottleneck, which leads to the optimization targets in Eq. (3) for molecular representation learning and pretraining. \\n\\n### The effectiveness of the bottleneck was evaluated by comparing InfoAlign with multi-modal contrastive alignment approaches.\\n\\nFor joint use of data in pretraining, Figure 1 and Lines 85\\u2013102 show that the proposed architecture is guided by the information bottleneck principle, which includes a single encoder with multiple decoders. Without this principle, jointly using two data types would be contrastive learning approaches, such as InfoCORE and CLOOME (Figure 1(a)). Comparing these baselines in Tables 1 and 2 thus allows us to assess the impact of the information bottleneck. We also note that methods for jointly integrating molecular, cell morphology, and gene expression data in contrastive learning remain underdeveloped.\\n\\n### We focus on applying the information bottleneck principle during the pretraining.\\n\\nJoint use of data in downstream tasks, such as ToxCast and Biogen3K (Table 2), is often not feasible due to the high cost of obtaining cell morphology and gene expression data from biological experiments, compared to extracting molecular structure features. This is why we apply the information bottleneck during pretraining, rather than in downstream tasks. We agree that training models with MI constraints in downstream tasks, instead of using joint features, could be a promising approach. However, this is beyond the scope of our current focus on pretraining and may be explored in future work.\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely thank the reviewer for their insightful suggestions. We provide point-by-point responses and have revised the text and appendix, highlighting changes in blue.\\n\\n## W1: Random pre-specified graph\\n\\nWe conducted the requested experiments, and the results are shown in the table below. We find that randomly replacing edges significantly impacts prediction performance. 
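For reference, the edge-randomization used in this ablation can be sketched as follows; this is a simplified, hypothetical snippet over a generic `(u, v, weight)` edge list rather than our actual context-graph code.

```python
import random

def randomize_edges(edges, num_nodes, frac=0.5, seed=0):
    """Replace a fraction of context-graph edges with uniformly random node pairs.

    Replaced edges keep their original weights but connect arbitrary nodes instead
    of related entities; frac=1.0 gives the fully random graph in the last row.
    """
    rng = random.Random(seed)
    edges = list(edges)
    num_replace = int(frac * len(edges))
    for idx in rng.sample(range(len(edges)), num_replace):
        _, _, w = edges[idx]
        edges[idx] = (rng.randrange(num_nodes), rng.randrange(num_nodes), w)
    return edges
```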
The replaced edges lack the biological, chemical, and computational meaning of those used to construct the context graph in the paper.\\n\\n| | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|--------------------|--------------|--------------|--------------|--------------|\\n| No Random Edges (InfoAlign) | 81.33\\u00b10.62 | 69.95\\u00b10.09 | 66.36\\u00b11.05 | 49.42\\u00b10.18 |\\n| 50% Random Edges | 77.48\\u00b10.71 | 62.52\\u00b10.16 | 63.76\\u00b10.32 | 71.53\\u00b12.51 |\\n| 100% Random Edges | 76.21\\u00b10.71 | 62.97\\u00b10.08 | 64.6\\u00b10.39 | 75.72\\u00b12.25 |\\n\\n## W2: Genetic perturbation data\\n\\nIn Table 4, we observe a performance drop when excluding gene expression data in loss functions. \\n\\nWe also conducted new ablation studies, presented in Appendix D.4, with results shown in the table below. Removing all nodes related to genetic perturbation data from the context graph further decreases the performance, confirming the importance of genetic perturbation data.\\n\\n| | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|-----------------------------|--------------|--------------|--------------|--------------|\\n| w/o genetic perturbation data | 77.97\\u00b10.33 | 67.1\\u00b10.17 | 64.93\\u00b10.96 | 51.57\\u00b10.46 |\\n| InfoAlign | 81.33\\u00b10.62 | 69.95\\u00b10.09 | 66.36\\u00b11.05 | 49.42\\u00b10.18 |\\n\\n## W3: GROVER performance on ToxCast\\n\\nThe GROVER performance on ToxCast is reported as 53.1. We follow the standard splitting from Open Graph Benchmarking [1], which differs from the splitting used in the original GROVER paper [2]. Our results are consistent with those reported in [3].\\n\\n## W4: Ablation studies on removing data\\n\\nWe have conducted the requested ablation studies and have clarified them in Appendix D.4. The results are also shown in the table below. The first two rows display the removal of cell morphology or gene expression-related nodes from the context graph. We observe a further performance drop when these data are removed.\\n\\n| | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|-----------------------------|--------------|--------------|--------------|--------------|\\n| w/o cell-related nodes | 79.57\\u00b10.58 | 68.41\\u00b10.31 | 65.11\\u00b10.82 | 51.21\\u00b10.17 |\\n| w/o gene-related nodes | 77.97\\u00b10.33 | 67.1\\u00b10.17 | 64.93\\u00b10.96 | 51.57\\u00b10.46 |\\n| w/o cell-related loss | 80.7\\u00b10.6 | 68.6\\u00b10.1 | 65.5\\u00b11.1 | 51.7\\u00b11.1 |\\n| w/o gene-related loss | 78.3\\u00b10.5 | 68.6\\u00b10.2 | 64.7\\u00b11.0 | 50.3\\u00b10.5 |\\n| InfoAlign | 81.33\\u00b10.62 | 69.95\\u00b10.09 | 66.36\\u00b11.05 | 49.42\\u00b10.18 |\\n\\n## Q1, Q4: Performance drop in ablation studies\\n\\n### The performance of InfoAlign relies on the GNN encoder and different decoders with optimization targets. \\n\\nAs shown in Tables 1/2 and Figure 3, GNN encoders can extract meaningful representations with proper loss designs. Ablation studies in Table 4 demonstrate that, even without one type of data, the remaining two types still form proper targets based on information bottleneck principles, supporting the pretraining of a good GNN encoder.\\n\\n### We do not remove the GNN encoders, which continue to extract meaningful representations from molecular structures.\\n\\nIn the ablation studies, the \\\"absence of molecular features\\\" refers to the removal of fingerprint vectors from the loss functions. 
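For concreteness, this ablation can be read as dropping one term from the multi-decoder reconstruction objective. The snippet below is a simplified sketch with hypothetical names: a plain binary cross-entropy term stands in for the weighted objective of Eq. (3), which additionally scales each neighbor by $\alpha(v \mid \mathcal{P}_x)$.

```python
import torch.nn.functional as F

def reconstruction_loss(z, targets, decoders, use_fingerprint=True):
    """Sum of per-modality reconstruction terms computed from the shared latent z.

    `targets` maps a modality name ("fingerprint", "morphology", "expression") to
    node features in [0, 1]; `decoders` maps the same names to MLP decoders.
    use_fingerprint=False corresponds to the "absence of molecular features"
    ablation: the fingerprint term is simply dropped from the loss.
    """
    loss = 0.0
    for name, y in targets.items():
        if name == "fingerprint" and not use_fingerprint:
            continue
        loss = loss + F.binary_cross_entropy_with_logits(decoders[name](z), y)
    return loss
```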
In this case, cell morphology and gene expression can still optimize the GNN encoder for meaningful representation, as observed in previous work like InfoCORE [4].\\n\\n## Q2 Relevant literature\\n\\nWe found that they were all published or released this year, including the most recent NeurIPS 2024 [6]. We are happy to discuss them and have updated the related work and appendix accordingly.\", \"lines_136_138\": \"\\\"CLOOME, MIGA [5], and MoCoP, and MolPhenix [6] contrast cellular images with molecules.\\\"\", \"line_774\": \"\\\"Approximating the mutual information of high-dimensional variables is a challenging task [7]\\\"\\n\\n## Reference:\\n\\n[1] Open Graph Benchmark: Datasets for Machine Learning on Graphs. NeurIPS. 2020\\n\\n[2] Self-Supervised Graph Transformer on Large-Scale Molecular Data. NeurIPS 2020.\\n\\n[3] Evaluating Self-Supervised Learning for Molecular Graph Embeddings. NeurIPS 2023.\\n\\n[4] Removing Biases from Molecular Representations via Information Maximization. ICLR 2024.\\n\\n[5] Cross-Modal Graph Contrastive Learning with Cellular Images. Advanced Science 2024.\\n\\n[6] How Molecules Impact Cells: Unlocking Contrastive PhenoMolecular Retrieval. NeurIPS 2024.\\n\\n[7] Approximating mutual information of high-dimensional variables using learned representations.\"}", "{\"summary\": [\"The authors present a new multi-domain method to learn the representation between drugs, genes and cells. In contrast to typical contrastive training setups the InfoAlign method removes the redundant information through a bottleneck using a well formulated mutual information method.\", \"They show a method to create walks over the cellular context graph representing the interactions between the modalities, and use this to populate the compute graph for their representation learning.\", \"This representation framework uses Morgan fingerprints as molecular node features, CellProfiler features for the cell node features, and L1000 features for the gene node features, and the connections are based on chemical perturbations and cosine similarities.\", \"The authors show results across a range of benchmarks, demonstrating good performance, achieving top ranking scores in all but 2 / 15 of the key datasets/criteria evaluated in Tables 1 and 2.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The novel construction of the problem mitigates a significant issue with current implementations using a standard contrastive learning approach, by combining the modalities into a single method there is no decoupling between the different representations, and the MI approach to filter the data seems an effective way to reduce the data required to train over, potentially increasing data quality and improving time to train.\", \"The visual presentation is good, with tables are figures clearly designed to convey meaning.\", \"The breadth of models evaluated against gives a really clear status against both several standard approaches one might take, and also against SOTA prior work. This comparison is really nice to see.\"], \"weaknesses\": [\"Model clarity - I found it hard reading this paper to extract exactly how the model was constructed to run these experiments. The method is clearly novel, and an interesting approach to the multi-domain problem, but after being strongly theoretically motivated the method / implementation details, or even a description of model size was lacking. 
If these details could be expanded on it would help place the model / method in the appropriate context.\", \"Impact of the MI bottleneck - The mutual information bottleneck is well theoretically motivated, however I didn\\u2019t feel the power of the bottleneck was evaluated in the paper? A different approach would be just using all the data and relying on data size / model size to outweigh data quality. I like the idea with MI, but would have liked to see an ablation with this feature included / not as that would help me evaluate the importance of the bottleneck vs the joined training setup?\", \"Impact of context graph - Similarly the impact of the context graph and different ways of including the context felt under-explored. Fig 4(b) I think shows that the random walk length had little impact, but how this is impacted by composition of the random walk path (are the walks just reconstructing the most likely combinations that would form a triplet anyway?) and what happens if instead of walking the relevant combinations are just grouped together. This feels like an important baseline.\", \"The presentation of the research questions felt more like a report than a paper, I would have liked to see some more motivation / explanation of each rather than assuming knowledge on the readers part.\"], \"questions\": [\"Questions / suggestions:\", \"I would ask for some more clarity on the results presented in table 1, specifically for the Broad6k results. Many columns have a +/- 0 error on a reoccurring 3.1 result. I suspect this means that there are a few tasks that are always identified with a high AUC, but given the lack of discussion of these results it\\u2019s hard to interpret. It might be worth using a different set of thresholds for the Broad6K to get more resolution?\", \"It would also help in section 6 for each of the research questions to have a couple of sentences explaining what each question is and why we are asking them, otherwise the results feel disjointed to the uninitiated reader and hard to connect.\", \"I found the explanation of the exact model architecture / parameters used to be lacking, and while pointed in the main text to the appendix I did not find enough information there on the size of/ structure of the MLPs / training regime to feel like the result could be replicated. This is of particular importance in Table 1 where comparing to other methods I can only guess at things like parameter efficiency etc.\", \"In general I feel like the weighting of the paper is very theoretical, no bad thing I really like the inclusion of the mutual information, but I wonder if more details could be moved to the appendix to free up more space for the experimental method, the model construction, and discussion of the results. In the current format of the paper I find a really interesting set of ideas, that I find very hard to understand how to weight the importance of / a method that I could follow to replicate certain components.\"], \"specific_formatting_suggestions\": [\"Fig. 3. I don\\u2019t really understand what this graph is conveying? The top bar shows the proportion of single representation tasks, but the bottom splits along the model types? I either need more detailed explanation, or maybe consider a better type of graph to convert this information?\", \"Table 3. 
(right) (This should be a separate figure - I understand it might make formatting harder but it is hard to reference) I personally don\\u2019t like these KDE type plots as they imply smooth functions from what is usually limited data, and make it almost impossible to draw quantified conclusions from, I would be much happier with a histogram if this is important information to convey. Additionally x-axis labels.\", \"Fig 4 (a) - this plot is almost unreadable with the overlapping lines / y-scale, and the meaning I think is being conveyed with the LR spans such a number of magnitudes in range that I would rather see both more granularity and perhaps these results presented in a table?\", \"Fig 4 (b) - I can guess what the plot means but without a key / description detailing elements like the error band (? 1 std dev I assume, based on what variation is unclear though), the points are connected, but given the discrete data this is misleading, and the huge discrepancy at length 8 suggests to me either the error band is under-estimating the variance, or there are properties of a random walk length 8 that are not discussed. Again this is a really interesting result as it's looking at the way that different parts of the compute graph contribute to the final result, but I'm left asking more questions with this figure than it answers.\", \"While I thin this statement on line 521 is correct, \\u201cwe observe in Figure 4b that downstream performance on ChEMBL2K is relatively robust across a wide range of walk lengths.\\u201d The inclusion of this plot raises questions that are not answered.\", \"On a similar point to Fig 4b, I would be very interested to see a description / plot of what the typical construction of the random walk graphs contain.\", \"Thank you again for a really interesting read, I like the approach with InfoAlign, and appreciate the large quantity of effort put into this work.\", \"I'd request a slight look again at the weighting of different sections in this paper, as I found it slightly unbalanced, and think as a reader I would find benefit from more detail in the method so I could reproduce if I wanted, and more detail in the discussion of the results to understand how to place the results in better context. i.e. which parts of the method are the most important.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"The paper focuses on molecular representations, but if such a representation cannot outperform the features obtained from full fine-tuning, then what is the significance of such a kind of representation? If you have collected so much multimodal data and constructed complex pre-training tasks, but they still cannot surpass the work of others that use single-modality pretraining plus full fine-tuning, then why go through all this trouble to collect this data? I only need to use single-modality pre-training plus full fine-tuning, which is more in line with Occam's Razor.\", \"Moreover, single-molecule data is easier to collect and more abundant than biological data, making it easier for people to scale up their models. 
So, if your method cannot defeat an approach that only uses molecular pre-training plus full fine-tuning, why would anyone choose to use your method, which lacks both scalability and performance?\"]}", "{\"title\": \"Author Response [2/3]\", \"comment\": \"## W3, Q9: Impact of context graph (construction of the random walk graphs)\\n\\nThanks for your insightful comments. We conducted new experiments and found that random walks provide diverse neighbors, improving the pretraining performance. We have included the new results in Appendix D.5 and added a reference to the appendix in Section 6.3.2 of the main text. \\n\\n### Random walk produces diverse neighbors\\n\\nWe cached the random walk results for 100 epochs and studied the number of unique nodes at varying walk lengths. In this table, we report the mean and standard deviation (STD) of unique nodes for all pre-training molecules at each walk length. \\n\\n| Walk Length | Unique Nodes (mean\\u00b1std) |\\n|-------------|-----------------------------|\\n| 2 | 8.39\\u00b123.51 |\\n| 3 | 13.31\\u00b141.02 |\\n| 4 | 18.41\\u00b159.83 |\\n| 5 | 23.20\\u00b177.74 |\\n| 6 | 28.12\\u00b196.31 |\\n| 8 | 37.30\\u00b1131.15 |\\n| 10 | 46.03\\u00b1164.46 |\\n| 12 | 54.35\\u00b1196.25 |\\n\\n(Note that the minimum number of unique nodes is 1 for isolated nodes)\\n\\nIf the composition of the random walk path were fixed, the number of unique nodes would be close to the walk length. However, we observed that the number of unique nodes is larger and varies, suggesting that diverse nodes are included in the random walk paths.\\n\\nWe further explored the Jaccard similarity of neighborhoods extracted for the same molecule under varying walk paths, averaging similarity scores across all pretraining molecules. The pairwise similarities for different walk lengths are shown in the table below. We observe that similarity decreases as the difference in walk lengths increases, but remains above 90%. This may explain the stable performance of InfoAlign in Figure 5(b) (original Figure 4 (b)). These results suggest that even with a walk length of 2, diverse neighbors can be obtained, likely due to the presence of high-degree nodes in the context graph.\\n\\n| $L$ | 2 | 3 | 4 | 5 | 6 | 8 | 10 | 12 |\\n|--------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| 2 | 100.0 | 92.5 | 92.0 | 91.8 | 91.7 | 91.5 | 91.4 | 91.3 |\\n| 3 | 92.5 | 100.0 | 93.2 | 93.0 | 92.8 | 92.6 | 92.5 | 92.3 |\\n| 4 | 92.0 | 93.2 | 100.0 | 93.4 | 93.3 | 93.1 | 92.9 | 92.8 |\\n| 5 | 91.8 | 93.0 | 93.4 | 100.0 | 93.4 | 93.3 | 93.2 | 93.1 |\\n| 6 | 91.7 | 92.8 | 93.3 | 93.4 | 100.0 | 93.5 | 93.4 | 93.3 |\\n| 8 | 91.5 | 92.6 | 93.1 | 93.3 | 93.5 | 100.0 | 93.6 | 93.6 |\\n| 10 | 91.4 | 92.5 | 92.9 | 93.2 | 93.4 | 93.6 | 100.0 | 93.7 |\\n| 12 | 91.3 | 92.3 | 92.8 | 93.1 | 93.3 | 93.6 | 93.7 | 100.0 |\\n\\n### Fixed neighbors underperform random walk-sampled neighbors.\\n\\nRegarding the ablation study on the importance of diverse neighbors with random walk sampling, we conducted additional experiments. During pretraining, we randomly selected 4 direct neighbors and fixed them, instead of performing a random walk with a walk length of 4. 
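For concreteness, the two neighbor-selection strategies can be sketched as follows; this is an illustrative snippet over a plain adjacency-list graph with uniform transitions (our actual walk is weighted by edge similarity), and the names are hypothetical.

```python
import random

def fixed_neighbors(adj, x, k=4, seed=0):
    """Ablation setting: sample k direct neighbors of molecule x once and reuse them every epoch."""
    rng = random.Random(seed)
    neighbors = list(adj[x])
    return neighbors if len(neighbors) <= k else rng.sample(neighbors, k)

def random_walk_neighbors(adj, x, walk_length=4, rng=random):
    """Default setting: resample a length-L walk from x each epoch, which can reach
    multi-hop nodes and therefore yields a more diverse neighborhood over training."""
    path, node = [], x
    for _ in range(walk_length):
        if not adj[node]:
            break                          # stop at isolated / dead-end nodes
        node = rng.choice(list(adj[node]))
        path.append(node)
    return path
```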
The results, presented in the table below, highlight the importance of diverse neighborhoods extracted by the random walk for improved performance.\\n\\n| | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|---------------------|----------------|----------------|----------------|----------------|\\n| Random Walk | 81.33\\u00b10.62 | 69.95\\u00b10.09 | 66.36\\u00b11.05 | 49.42\\u00b10.18 |\\n| Fixed Neighbors | 77.47\\u00b10.38 | 66.75\\u00b10.13 | 65.43\\u00b10.76 | 50.08\\u00b10.30 |\\n\\nIn summary, random walks improve performance by sampling diverse neighbors. We appreciate your comment and welcome further discussion.\\n\\n## W4 and Q2: Presentation of research questions\\n\\nThanks for your comments. We have updated the main text with an explanation of the research questions at the beginning of Section 6, as well as at the start of Sections 6.1 and 6.2. These updates are provided below for your reference.\", \"section_6\": \"\\\"We demonstrate the effectiveness of InfoAlign's representation in (1) molecular property prediction, (2) molecule-morphology matching, and (3) analyze the performance of InfoAlign. These lead to three research questions (RQs).\\\"\\n\\nSection 6.1: \\\"Better molecular representations should improve prediction performance. We train MLPs on different representations to predict molecular properties in both classification and regression tasks..\\\"\\n\\nSection 6.2: \\\"Molecular representations are aligned with cell morphology. The zero-shot matching performance of a queried molecule to cell morphology features evaluates the alignment between the two modalities.\\\"\"}", "{\"comment\": \"I thank the authors for their considered response to the comments and questions I provided.\\n\\nI believe the changes made make for a more readable paper, and significantly improve the reproducibility for which I am very grateful. \\nThe detailed response puts the numerical results in better context with suitable error bars to understand the distributions better, and the authors also addressed several key issues with figures which made understanding the results more challenging, so on reflection I am very happy to raise my score to reflect this change. \\n\\nThank you again for a very enjoyable read, and such good engagement in the review process.\"}", "{\"summary\": \"This article proposes a pre-training method where the authors utilize genes, cells, and molecular information as nodes, and construct a context graph using the interactive or similar relationships between different components as edges. The authors then extract paths on the graph using random walks to construct training data, design training objectives from the perspective of mutual information, and integrate cellular and genetic information into molecular representations. 
After pre-training, the authors fine-tune the model on downstream tasks across four datasets to validate its performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This article pioneeringly incorporates cellular modality information in molecular representation learning, which enables the learned molecular representations to achieve better results in biochemistry-related tasks.\", \"The article conducts sufficient experiments to validate the effectiveness of the model.\", \"The paper is well-motivated; adding cellular modality data to enhance the model's performance in biochemical aspects is very reasonable.\"], \"weaknesses\": [\"The writing requires improvement; the methodology is difficult to follow, and many variables are not clearly defined. See Questions for details.\", \"The description of the dataset and downstream tasks is not sufficiently detailed, with some parts being confusing. See Questions for details.\"], \"questions\": [\"**Methodology**\", \"How is the neighboring node of $x$ defined in line 250? Is it the neighbor of $x$ on the context graph or all the $v_i$ on the sampled path?\", \"As shown in Fig. 2, different decoders are used to reconstruct the features of different types of nodes. However, if there are multiple neighbors $v_1, v_2, \\\\ldots, v_n$ of a certain molecule $x$ sharing the same type, how can the decoder decode various $y_{v_i}$ from the same encoded latent $z$ of $x$ without any additional information about $v_i$ being provided?\", \"**Experiments**\", \"The ChEMBL dataset provides information on whether a certain molecule exhibits activity against a specific target. In this manuscript, a task is defined as predicting whether a molecule can interact with a given target. However, it is unclear why the molecule would be characterized by **Cell Morphology** and **Gene Expression**. Such information should be related to the task (target) itself rather than the input molecule. Could the authors explain how **Gene Expression** and **Cell Morphology** data are generated for molecules?\", \"To substantiate the claim that the proposed multimodal alignment method more effectively models molecular properties within a cellular context, the authors must include a baseline comparison using a simple concatenation approach for alignment. Specifically, the authors should employ a pretrained GNN, such as Uni-Mol[1], to extract molecular features and concatenate them with Cell Morphology and Gene Expression representations as inputs for prediction tasks on the ChEMBL2k and Broad6k datasets.\", \"During the fine-tuning for downstream tasks, is the encoder frozen or is a full-fine-tune performed with the encoder also being trained?\", \"**If the authors can address all my major concerns, I would be pleased to raise the score.**\", \"[1] Zhou G, Gao Z, Ding Q, et al. Uni-mol: A universal 3d molecular representation learning framework[J]. 2023.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Regarding data construction complexity, the contribution of this paper is not about generating new data points for cell morphology or gene expression. We have curated existing data for pretraining, which does not involve biological experiments.\\n\\nExisting biological data is the result of extensive laboratory work conducted by numerous individuals in the fields of biology and chemistry. 
In contrast, the collection of single-molecule data often does not rely on traditional wet-lab experiments and can be achieved through computational simulations alone. The generation of biological data is not a simple task; if you believe that these data are easily collected, you are merely benefiting from the work of countless preceding researchers. Your viewpoint appears to dismiss the efforts of biochemistry professionals and is fraught with arrogance and a lack of understanding. As a fellow researcher, how could you make such a statement?\"}", "{\"comment\": \"The provided references [2, 3] only demonstrate that fingerprints can indeed outperform simple GNNs such as GCN and GAT on some tasks, but they do not prove that fingerprints possess the capability to match pretrain-GNNs. Moreover, in terms of the magnitude of improvement, pretrain-GNNs can bring about a very significant enhancement [1], which is larger than the performance gap shown in [2]. Therefore, this does not constitute a valid reason for not comparing fully finetuned models.\\n\\n[1] Rong Y, Bian Y, Xu T, et al. Self-supervised graph transformer on large-scale molecular data[J]. Advances in neural information processing systems, 2020, 33: 12559-12571.\\n\\n[2] Understanding the limitations of deep models for molecular property prediction: Insights and solutions. NeurIPS 2023.\\n\\n[3] Enhancing activity prediction models in drug discovery with the ability to understand human language. ICML 2023.\"}", "{\"metareview\": \"This paper proposes a new approach to learn molecular representations through incorporating the information of cellular responses. This is done by (1) constructing a context graph for cellular response data and (2) information bottleneck training on the extracted random walk.\\n\\nOverall, I find the paper to provide meaningful and solid contribution to the field via introducing a (relatively) new data modality. The experiments are convincing (despite some room for improvement). This paper is likely to promote some future works to consider learning molecular representation using cellular responses. \\n\\nThere was a valid concern raised by the reviewers, that the main experiments are conducted without fine-tuning the model. I tend to agree with the concern and it would have been better for the authors to fully explore the performance of models after finetuning. Especially, the authors could have selected a subset of the considered baselines and compared with them after finetuning. However, I believe this concern is partially alleviated by the new experiments constructed by the authors during the rebuttal. I also believe that the old experiments still convincingly show the promise of the new idea. \\n\\nOverall, I recommend acceptance for this paper since it introduces a meaningful data modality to molecular representation learning with meaningful results. The highly encourage the authors to extend their evaluation to fine-tuning setting which better aligns with the practical scenarios.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer Sxqu raised strong concerns on the empirical evaluation of the paper. I agree with these points and the experiments are not sufficient to verify if the model achieves SOTA (the authors do not claim SOTA). However, I do not think all the methods need to achieve SOTA to be published, especially for a crowded research area like molecular representation learning. 
I also think SOTA in this field is less meaningful since public data is limited and it is hard to scale the models.\"}", "{\"title\": \"Author Response to Further Questions\", \"comment\": \"We appreciate the reviewer\\u2019s prompt response and follow-up question. We are happy to provide further clarification on the concerns. Should any issues remain, we are committed to addressing them promptly and thoroughly.\\n\\n## Decoding for Multiple Neighbors\\n\\nThe decoding process predicts node features from the context graph. The values of these features range from 0 to 1 (Lines 292-293). This is not autoregressive generation. Intuitively, it resembles multi-label prediction. And the molecule may be aligned with different nodes simultaneously, each with different weights.\\n\\n## Finetuning\\n\\nWe appreciate your comment. Let's answer them point-to-point.\\n\\n> It does not make sense to freeze the encoders as the original paper of UniMol[1] utilized full finetuning on its downstream tasks (some of the datasets even have less data than those used in the article), so do other molecular pretraining works such as [2].\", \"we_freeze_the_encoder_based_on_the_following_motivations\": \"1. **Task Setting**: As indicated in the title, our goal is \\\"Learning Molecular Representation.\\\" We focus on evaluating the quality of representations from pretraining, rather than fine-tuned encoders. In this context, the core in Section 6.1 is the predictive ability of different molecular representations. The subsequent points (2 and 3) provide a fair and focused evaluation pipeline.\\n\\n2. **Fair and Consistent Evaluation**: We freeze the encoder for **all** approaches (including InfoAlign) to ensure a fair and consistent comparison across methods.\\n\\n3. **Occam's Razor for Evaluation**: We follow the principle of parsimony, which we believe is good practice for model evaluation. Freezing the encoder helps avoid introducing too many assumptions and better isolates the impact of the molecular representations themselves.\\n\\n4. **Practicality**: Obtaining molecular representations has additional benefits. It is less resource-intensive than fully fine-tuning models for downstream applications. Notably, the UniMol model also provides an official API for accessing molecular representations, which suggests they support evaluation of their representations in a similar manner.\\n\\n_______\\n\\n> More importantly, it does not make sense that the Morgan fingerprint outperforms most of the molecular pretraining models, as shown in Table 1 (even outperforming all on ChemBL2k), as this phenomenon challenges the entire field of molecular pretraining's existence.\\n\\n### We found that Morgan fingerprints do not challenge the existence/progress of molecular pretraining. \\n\\nMorgan fingerprints do not outperform pretrained GNN representations on Broad6K and ToxCast, where the best GNN-based representations show significant improvements compared to the best methods based on the fingerprints. On Biogen3K and ChEMBL2K, the best methods based on fingerprints also do not surpass the performance of the best pretrained GNN representations.\\n\\n### Fingerprints could be good baselines to promote better molecular pretraining.\\n\\n(1) Defining universal self-supervised tasks from molecular structures alone is challenging, as discussed in our Introduction and recent publications [1]. Molecular pretraining often requires domain-specific knowledge, which is difficult to capture with manually designed tasks. 
(2) While fingerprints are a classic method, they are not a weak baseline and can perform well in certain tasks. This observation can also be found in previous work [2,3].\\n\\n_______\\n\\n> The performance of these molecular pretraining models has been severely underestimated. The authors should demonstrate the performance of the fully finetuned version and also update the results in response to \\\"W2, Q4, feature concatenation\\\".\\n\\nWe apologize for any confusion. Our main contribution and evaluation focus on molecular representations, and we chose to freeze the encoder during pretraining to maintain a focused analysis. While fully fine-tuning can unlock the full potential of molecular pretraining models, it may introduce additional assumptions that could influence the analysis. In designing the fine-tuning pipeline, we first ensured a consistent and fair evaluation. Then, following Occam's razor, we froze the encoder to better isolate the impact of the molecular representations themselves. We hope this clarifies our rationale and appreciate your understanding.\\n\\n## Reference \\n[1] Does GNN Pretraining Help Molecular Representation? NeurIPS 2022.\\n\\n[2] Understanding the limitations of deep models for molecular property prediction: Insights and solutions. NeurIPS 2023.\\n\\n[3] Enhancing activity prediction models in drug discovery with the ability to understand human language. ICML 2023.\"}", "{\"comment\": \"Dear reviewer Sxqu,\\n\\nI am a senior author on the paper. I greatly appreciate your taking the time to discuss our work. \\n\\nThe concerns that my lab member\\u2019s \\u201cviewpoint appears to dismiss the efforts of biochemistry professionals and is fraught with arrogance and a lack of understanding\\u201d are misplaced - our laboratory has actually led the experimental work for a large proportion of the publicly available image data of this type, so we are all aware of the challenges and value of lab work. I hope this is a simple mis-reading of my lab member\\u2019s response as I don\\u2019t see anything in our note to warrant this response.\", \"now_back_to_the_debate\": \"\", \"i_believe_your_major_concern_this\": \"if fully tuned models are doing better than our learned representations, then our learned representations are not terribly useful, and are even less useful if the representations are cumbersome to generate, i.e, they need data collected from wet lab experiments.\\n\\nIf that is indeed your primary concern, I think that's a valid one and I am happy to clarify our stance further, below.\\n\\n--- \\n\\n**Concern 1: If fully tuned models perform better than our learned representations, then our learned representations are not terribly useful**\\n\\n\\n1. Representation learning supports a broader range of use cases. Given the vast number of unlabeled molecules, pre-trained representations can be efficiently stored and applied to various downstream tasks, such as visualization in virtual screening analysis.\\n\\n2. In fully fine-tuned scenarios, representations from pre-trained models can still enhance performance. For example, on ChEMBL2K and Broad6K, InfoAlign\\u2019s representations (81.3\\u00b10.6 and 70.0\\u00b10.1) outperform fully-tuned UniMol (78.9\\u00b10.2 and 65.1\\u00b11.0) by 3.0% and 7.7%, respectively. In the NLP community [1], research shows that the usefulness of each paradigm depends on the specific downstream task. Therefore, both paradigms offer distinct advantages and can be combined. 
Representation learning does not conflict with full fine-tuning. \\n\\n**Concern 2: The representations are even less useful if the they are cumbersome to generate, i.e, they need data collected from wet lab experiments.**\\n\\n\\nI think this is a fair criticism -- it's definitely a non-trivial overhead if one's approach needs data collected from wet lab experiments.\\n\\nHowever,\\n\\n1. The data we used is already publicly available; we simply tapped into existing resources.\\n2. Profiling assays like Cell Painting are now being routinely run by several academic labs and pharma companies, sometimes for quite large subsets of their compound libraries. Thus, in some contexts this data is freely available to their scientists for compounds of interest. What an amazing opportunity researchers have to tap into that, using InfoAlign-like methods!\\n3. InfoAlign can produce embeddings for a molecule even if we don't have a Cell Painting/gene expression profile for the molecule; hopefully this bit was already clear.\\n\\nLet us know what you think, and thank you for remaining engaged!\\n\\n--- \\n\\n[1] To tune or not to tune? adapting pretrained representations to diverse tasks. ACL. 2019.\"}", "{\"title\": \"Experiments\", \"comment\": \"We thank the reviewer for their insightful comments and apologize for any miscommunication in our initial response. We aim to address the limitations of early-stage experiments and high-throughput data generation for large-scale libraries. As the reviewer rightly pointed out, generating data from assays like L1000 and Cell Painting is time-intensive, often requiring months to procure compounds, prepare cell lines, and execute screens.\\n\\nOne alternative, as suggested, is machine learning (ML)-based virtual screening, which often utilizes chemical structures as inputs. These models leverage the chemical similarity principle (i.e., similar compounds exhibit similar activity) to make predictions. However, datasets used in such models frequently suffer from analogue bias and other challenges, such as incomplete stereochemical information in SMILES representations. As a result, these models often struggle to generalize beyond the chemical space covered in the training data.\\n\\nThis limitation brings us back to the necessity of generating biological data to expand the applicability domain of ML models. While it is infeasible to experimentally screen the vast chemical space (~10^40 molecules), incorporating existing experimental data into representation learning provides a practical path forward. Our work focuses on integrating experimental data to enhance molecular representations, enabling the generation of descriptors enriched with both biological and chemical information. By doing so, we aim to reduce the need for extensive experimental campaigns in the future while facilitating rapid and scalable virtual screens, while having models that outperform previously published models.\"}", "{\"title\": \"Author Response [1/2]\", \"comment\": \"We sincerely appreciate the reviewer's thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. We have also revised the main text and appendix to incorporate the reviewer's valuable feedback, with all changes clearly highlighted in blue for ease of reference. Should any concerns remain, we remain fully committed to addressing them promptly and thoroughly.\\n\\n## W1: Biological context\\n\\nThank you for recognizing the significance of our research problem. 
In the revision, we have further clarified the definition of the tasks (Appendix D.1) to reduce the barrier to understanding the biological knowledge prerequisites.\\n\\nSo far, we have: (1) formulated the problem as a machine learning task in Section 2, (2) explained the method and curated the dataset for ease of use in Sections 5.1, 6, and Appendices C and D, (3) provided a clear background on the motivation in the first paragraph of the Introduction, and (4) expanded the discussion of cellular response data in the related work section, Appendices C and D, with relevant references for readers interested in further details.\\n\\nWe hope these efforts will help readers in the ML community better understand the problem and encourage further solutions. We would also be happy to discuss additional strategies to further clarify the biological prerequisites in the main text and appendix.\\n\\n## W2: Edge weight\\n\\n### Edge weights are motivated by tasks in drug discovery.\", \"sorry_for_the_confusion\": \"We have now clarified this in the revision (Lines 198-201):\\n\\\"For example, edges derived from computational criteria between molecule nodes are assigned weights based on the assumption that structurally similar molecules may exhibit similar biological effects, a concept widely used in drug discovery, such as lead optimization.\\\"\\n\\nSpecifically, as introduced in Section 5, edges based on chemical and biological criteria have uniform weights (i.e., edge weights are set to 1). Edge weights are primarily applied to edges introduced by computational criteria, where we compute similarity using features, ranging from 0 to 1. For example, the similarity between Morgan fingerprints constructs weighted edges in the context graph. This approach is motivated by assumptions in tasks like lead optimization, where structurally similar molecules may exhibit similar biological effects. The weights quantify similarity, rather than assuming identical effectiveness for structurally similar molecules.\\n\\n### Edge weights are observed to empirically improve performance robustness.\\n\\nIn pretraining, the cumulative edge weights help avoid aligning a molecule with features from distant nodes on the context graph, especially when using longer walk lengths. In the table below, we present new experiments on ChEMBL2K using an unweighted context graph (all edge weights set to 1). With walk length $L=4$ and weighted edges, InfoAlign achieves 81.3\\u00b10.6. As the walk length increases, the performance with weighted edges is more stable and performs better than with unweighted edges.\\n\\n| Walk Length | L=8 | L=10 | L=12 |\\n|--------------|---------------|---------------|---------------|\\n| weighted | 80.28\\u00b10.58 | 81.14\\u00b10.32 | 81.15\\u00b10.56 |\\n| unweighted | 79.75\\u00b10.57 | 79.57\\u00b10.51 | 79.94\\u00b10.36 |\\n\\n## W3: Remove noisy edges\\n\\nThanks for your suggestion. We conducted new experiments without using the mechanism described in Section 5 to remove noisy edges. 
As shown in the first row of the table below, performance decreases to varying degrees with more noisy edges, compared to using the mechanism with fewer noisy edges.\\n\\n| | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|-------------------------|--------------|--------------|--------------|--------------|\\n| with more noisy edges | 79.97\\u00b10.21 | 69.03\\u00b10.22 | 65.88\\u00b10.92 | 50.97\\u00b10.62 |\\n| with fewer noisy edges | 81.33\\u00b10.62 | 69.95\\u00b10.09 | 66.36\\u00b11.05 | 49.42\\u00b10.18 |\\n\\n(The setting for \\\"with more noisy edges\\\": after computing the similarity, we applied a threshold of 0.5 to select the similar edges.)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The authors propose a method called InfoAlign for predicting molecular properties by integrating three different modalities: molecular structures, gene expression, and phenomics embeddings. To learn useful representations, they construct a weighted connected graph over cell morphology profiles, related molecules, and gene expression values. They train an encoder-decoder architecture where, for a molecule of interest, they encode its representation and decode both itself and all other nodes encountered during a random walk on a pre-specified graph. This approach results in mutual information maximization between the compound of interest $x_i$ and related entities discovered through the random walk on the pre-encoded graph. The authors test their method on a variety of chemical property prediction datasets, demonstrating that they outperform various baselines, including pre-trained Graph Neural Networks (GNNs), chemical language models, uni-modal models such as cell morphology or gene expression, and some multi-modal alignment models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The authors perform a comprehensive evaluation against baseline models across a variety of datasets.\", \"They convincingly demonstrate that including additional modalities improves performance, as evidenced by thorough evaluations and ablation studies.\", \"The incorporation of a graph is an interesting way to introduce prior knowledge into the learning representations for a particular molecule.\"], \"weaknesses\": [\"The validity and robustness of the pre-specified graph are not thoroughly explored. It would be informative to assess how sensitive the method is to the quality of the graph. For example, one experiment could involve removing 50% of valid connections and replacing them with random pairs of nodes; another could involve using a completely random graph.\", \"The second gap identified by the authors is slightly misformulated: \\\"They treat molecules as the sole connectors between gene expression and cell morphology, ignoring the potential for genetic perturbations.\\\"\", \"Essentially, the authors are arguing that incorporating genetic perturbation data can further improve predictive capacity. However, there is no ablation study where this information is omitted to directly validate its impact on empirical performance.\", \"Regarding the ToxCast dataset, the authors report a performance of 0.72 ROC AUC using GROVER. Did the authors use a different partitioning of the dataset than previous works?\", \"The ablation loss is only regarding removal of the losses as far as I understand the data itself is still input into the training. 
Can the authors perform an ablation where a full data modality is not added as part of training?\"], \"questions\": \"- Are the authors surprised by the relatively minor drop in ROC AUC values when omitting individual modalities?\\n- Some additional relevant literature that could enhance the discussion includes:\\n [0] Cross-Modal Graph Contrastive Learning with Cellular Images\\n [1] How Molecules Impact Cells: Unlocking Contrastive PhenoMolecular Retrieval (a work evaluating zero-shot classification)\\n [2] Approximating Mutual Information of High-Dimensional Variables Using Learned Representations (proposes a scalable approach for approximating mutual information of high-dimensional objects)\\n- One of the conclusions from the work is emphasizing the importance of molecular features. Dot he authors have an explanation for why the absence of molecular features in the ablation results in a ToxCast AUC that overlaps the non-ablated model performance? Without seeing the results I would expect the performance in the absence of an ablated molecular feature reconstruction loss to be a lot worse.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response [2/2]\", \"comment\": \"## Q1: Data requirements\\n\\n### InfoAlign does not require cell morphology and gene expression data for downstream tasks but improves InfoAlign pretraining\\n\\nCellular response data support the pretraining of InfoAlign. For downstream tasks, where only molecules are used as input and the output is a molecular representation, no additional data types are required.\\n\\nWe have updated Appendix D.4 with new experiments that removed either cell morphology or gene expression-related nodes from the context graph. The results are also given in the table below. The downstream performance drops when these nodes are removed. Although gene expression data is more sparse (about ten times less than cell morphology), it still contributes valuable information to InfoAlign\\u2019s performance. In summary, we believe that additional data from cell morphology and gene expression add value, and InfoAlign may perform worse with less data. Fortunately, we have curated multiple sources of cellular response data to ensure diverse and comprehensive data for pre-training.\\n\\n\\n| | ChEMBL2K | Broad6K | ToxCast | Biogen3K |\\n|-----------------------------|--------------|--------------|--------------|--------------|\\n| w/o cell-related nodes | 79.57\\u00b10.58 | 68.41\\u00b10.31 | 65.11\\u00b10.82 | 51.21\\u00b10.17 |\\n| w/o gene-related nodes | 77.97\\u00b10.33 | 67.1\\u00b10.17 | 64.93\\u00b10.96 | 51.57\\u00b10.46 |\\n| InfoAlign | 81.33\\u00b10.62 | 69.95\\u00b10.09 | 66.36\\u00b11.05 | 49.42\\u00b10.18 |\\n\\n\\n## Q2: Molecular features on context graphs\\n\\nThank you for the interesting question. In this work, we primarily use Morgan fingerprints to avoid introducing unnecessary factors that could influence the development and analysis of InfoAlign's pre-training strategy and model. Based on our observations (Table 1/2), Morgan fingerprints remain a competitive and cost-effective representation of molecular structures.\\n\\nImproving InfoAlign with other pre-trained GNN representations or even their ensemble is indeed a promising direction. However, since this is outside the main focus of this work, we leave it for future exploration.\\n\\n## Q3: Computational Complexity \\n\\nThank you for the question. 
Compared to existing work, InfoAlign has cost dealing with the context graph. Pretraining with the context graph introduces minimal additional computational complexity. We use a sparse matrix to extract the random walk on the context graph. Let $N$ and $M$ denote the number of nodes and edges in the context graph, respectively, with an average node degree $k$, where $k \\\\ll N$ and $M \\\\ll N$. The time and space complexity of the random walk are $\\\\mathcal{O}(k)$ and $\\\\mathcal{O}(M)$, both much smaller than the dense version's $\\\\mathcal{O}(N^2)$.\\n\\nBy using random walks to sample local neighborhoods for pretraining molecules, we can scale the context graph efficiently. Data size is not a major concern; the practical limitation lies in the limited number of cell morphology and gene expression features due to the high cost of generating them. Pretraining is efficient on a V100, using less than 13GB of GPU memory with a large batch size of 3072.\"}" ] }
BbYu1wLwmj
Safe Meta-Reinforcement Learning via Dual-Method-Based Policy Adaptation: Near-Optimality and Anytime Safety Guarantee
[ "Siyuan Xu", "Minghui Zhu" ]
This paper studies the safe meta-reinforcement learning (safe meta-RL) problem where anytime safety is ensured during the meta-test. We develop a safe meta-RL framework that consists of two modules, safe policy adaptation and safe meta-policy training, and propose efficient algorithms for the two modules. Beyond existing safe meta-RL analyses, we prove the anytime safety guarantee of policy adaptation and provide a lower bound of the expected total reward of the adapted policies compared with the optimal policies, which shows that the adapted policies are nearly optimal. Our experiments demonstrate three key advantages over existing safe meta-RL methods: (i) superior optimality, (ii) anytime safety guarantee, and (iii) high computational efficiency.
[ "Reinforcement learning", "meta-learning" ]
Reject
https://openreview.net/pdf?id=BbYu1wLwmj
https://openreview.net/forum?id=BbYu1wLwmj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y5kzCSn4hf", "x2o6v0cEIq", "wpu1Awfrbq", "wfPwJmn2aP", "vNykC3qF34", "plSptxsban", "ebCgeNPYSZ", "cOOvCPM8GL", "c8uelxFvRq", "ZyDMHFuN0o", "XFOs4xvamV", "QaYkMKMnGq", "NEGOvPAAYB", "Mqjil3gyTX", "IQexxjhNuF", "FxBrfJrsv2", "EtBSNYsFvY", "EkrjYBfb1S", "EdzOvp3KV6", "DFVRmq3eR3", "CDnfEJYLvq", "B08DEfMmQU", "81nh73IU4I", "38JeLvI157" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732142633637, 1730740830670, 1729151826134, 1732604950319, 1730690917373, 1732561523535, 1732492580806, 1737523689350, 1732143615677, 1732821396302, 1732566885468, 1732143767294, 1732586803352, 1732142296033, 1732596788597, 1730518315063, 1732143119739, 1734628877347, 1732142401013, 1732564358920, 1732143897467, 1732144785617, 1732142555954, 1732143218114 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_9TCU" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_bErq" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_GqFW" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_GqFW" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_9TCU" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_bErq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_TzFn" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_bErq" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Reviewer_TzFn" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Area_Chair_Witb" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ], [ "ICLR.cc/2025/Conference/Submission5179/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response (4/4)\", \"comment\": \">**Question 1. Why was a dual-method-based approach chosen over other constraint-handling techniques? I guess that using any state-of-the-art safe RL baseline in meta-RL settings could also achieve good performance.**\\n\\n**Answer:** As we state in the answer to Weakness 3, the proposed method is the first algorithm that simultaneously offers two key advantages, (i) the safety guarantee for every single policy optimization step (using data collected on a single policy) and (ii) holding a closed-form solution, which enables us to use the dual method to reduce the computational complexity of meta-policy training. 
The existing methods for safe policy optimization, including the primal-dual-based methods, e.g., CRPO, RCPO, PPO-Lagrangian, and the trust-region-based methods, e.g., CPO, do not hold these two advantages simultaneously, and therefore are not suitable for the safe meta-RL problem. \\n\\nWe conduct experiments on seven scenarios including navigation tasks with collision avoidance and locomotion tasks to verify these advantages of the proposed algorithms. \\n\\n>**Question 2. Could there be advantages to comparing it with alternatives, such as shielded RL?**\\n\\n**Answer:** In this paper, we consider a safe-meta RL problem. During the meta-test, given an unknown environment with an unknown CMDP, the agent can sample few-shot data from the environment and adapt the policy. \\n\\nIn shielded RL, the shield function for the agent is pre-trained for two cases: (i) the MDP is known; (ii) a large amount of data is sampled from an unknown MDP. Therefore, the shielded RL method cannot be used in the safe meta-RL problem.\"}", "{\"summary\": \"This paper investigates the problem of ensuring safety in meta-reinforcement learning (meta-RL) by proposing a framework that guarantees anytime safety during meta-testing. The approach is based on dual-method-based policy adaptation, which includes modules for safe policy adaptation and safe meta-policy training. It provides empirical results showcasing improvements over existing safe meta-RL methods in terms of optimality, safety guarantees, and computational efficiency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The method achieves high computational efficiency, making it advantageous for scaling to complex tasks.\\n\\n2. Empirical results demonstrate that the proposed method outperforms baseline approaches in terms of both optimality and safety across a variety of tasks.\", \"weaknesses\": \"1. The related work section lacks thorough investigation, e.g., some multi-task/multi-objective safe RL methods; these can be helpful for meta-safe RL.\\n\\n2. The paper is not well-written and appears to rely heavily on language models for content generation, e.g. the abstract.\\n\\n3. The method lacks novelty; based on my understanding, it does not present new contributions, including in the theoretical aspects. It extends primal-dual settings for meta-safe RL, similar to primal meta-safe RL (meta-CRPO).\", \"questions\": \"Why was a dual-method-based approach chosen over other constraint-handling techniques? I guess that using any state-of-the-art safe RL baseline in meta-RL settings could also achieve good performance.\\n\\nCould there be advantages to comparing it with alternatives, such as shielded RL?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an algorithm for a safe meta-reinforcement learning problem. Specifically, the proposed algorithm ensures anytime safety during the meta-test, which consists of safe policy adaptation and safe meta-policy training modules. Theoretically, the authors prove the anytime safety guarantee of policy adaptation and show that the obtained policy is near-optimal. The authors' empirical experiments show that the proposed algorithm performs better than the baseline methods.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper addresses an important and interesting problem. 
The motivation behind this problem is well-presented and easy to understand.\", \"The proposed algorithm is technically sound. It is easy to follow the deviation of the algorithm.\", \"The experiment has been well designed and the results are good. This paper sufficiently covers the necessary baseline methods including state-of-the-art methods. The benchmark tasks are Gym and Safety-Gymnasium, which are quite standard safe RL literature.\"], \"weaknesses\": [\"I have a serious concern about the theoretical results in Section 5. Specifically, I guess the authors have some misunderstanding regarding sufficient safe visits or ergodicity.\", \"This paper is on safe RL, which implicitly means that there is a set of states that cannot be visited (frequently). This also means that the ergodicity assumption does not hold in safe RL tasks. If I understand correctly, the authors seem to cite Moldovan and Abbeel (2012) as a piece of evidence that CMDP is ergodic in line 376. Unfortunately, however, in Moldovan and Abbeel (2012), there are opposite statements such as\", \"> Almost all proposed exploration techniques presume ergodicity; authors present it as a harmless technical assumption but it rarely holds in interesting practical problems.\", \"> Unfortunately, many environments are not ergodic.\", \"Related to the above, I disagree with Remark 1 and Remark 2. If the authors address standard MDP, the remarks would be true. It is not true with CMDPs or safe RL. Safe RL studies should not assume ergodicity and thus actually consider \\\"reachability\\\" or \\\"returnability\\\" as in\", \"Turchetta, Matteo, Felix Berkenkamp, and Andreas Krause. \\\"Safe exploration in finite markov decision processes with gaussian processes.\\\" Advances in neural information processing systems 29 (2016).\", \"Wachi, Akifumi, and Yanan Sui. \\\"Safe reinforcement learning in constrained markov decision processes.\\\" International Conference on Machine Learning. PMLR, 2020.\", \"Even worse, the authors try to propose an algorithm with almost surely. Hence, I feel Assumption 2 is incompatible with the nature of the proposed algorithm.\", \"The authors may want to argue that Assumption 2 still holds by setting a large $B$, but I guess such a large $B$ will lead to useless bounds on both safety and optimality.\", \"The authors may also want to insist that it is ok to guarantee safety only during meta-tests. However, I do not think it is reasonable to assume sufficient coverage \\\"for any policy and any state.\\\"\", \"**Suggestions**\", \"I read through the proofs of the theorems, but a large portion is strongly built on Assumption 2. Given Section 5 is a core contribution of this paper, I think this is a serious mistake and I do not think that it can be fixed during the rebuttal period.\", \"For the next submission, I recommend the authors to change Assumption 2 in two points.\", \"I think \\\"for all policy $\\\\pi$\\\" is too strong. If I were an author, I would try to make an assumption with \\\"for a safe policy\\\".\", \"Also, \\\"for all state $s$\\\" is not reasonable. 
I would make an assumption characterized by a (safe) subset of state space.\"], \"questions\": [\"Q1: Please tell me your thoughts about my comments in Weakness.\", \"Q2: Is it possible to relax Assumption 2 while maintaining the claims in Theorems?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response and the new set of results in the revised manuscript. I have no further questions.\"}", "{\"summary\": \"The paper proposed a safe meta RL framework to learn a meta policy which is safe to a new RL task. The key contributions include (i) theoretical analysis and show that anytime safety guarantee can be achieved if the initial policy is safe, (ii) theoretically analyze the tradeoff between safety guarantee and optimality, (iii) empirically validated its outperformance against other meta RL algorithms in computational efficiency and reward / safety performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is a theoretically dense paper and its core strength lies in its theoretical derivations and insights. The paper did a good job in showing the safety guarantee (when initial policy is safe) and Theorem 1 provides key insight on the tradeoff between this safety guarantee and reward optimality.\\n\\n2. The paper did a thorough study on the shortcomings of other safe meta RL paper and identified the key area which it can improve on (i.e. computational efficiency and anytime safety guarantee. \\n\\n3. The empirical result shows that it convincingly outperforms other safe meta RL algorithms in terms of computational efficiency and reward / safety performance.\", \"weaknesses\": \"1. The experiments portion of this paper is relatively short (esp in main paper). I'd think including other experiments would further improve the paper. For example, trying out different values of $\\\\delta_{c_i}$ (allowable constraint violation) and observe the tradeoff between reward and safety.\\n\\n2. The allowable constraint violation $\\\\delta_{c_i}$ seems like a hyper-parameter and I would appreciate further guidance on how to determine the appropriate value for a safety-constrained task. Perhaps performing experiment suggested in item (1) above could help. \\n\\n3. The paper does point out the inherent shortcoming of CPO being computationally expensive and proposed a dual method for safe policy adaptation. However, the safe policy adaptation problem outlined in Eq4 & 5 seems rather similar to Lagrangian-based online safe RL algorithm, e.g. RCPO, PPO-Lagrangian. The authors might want to illustrate how is this dual method particularly novel. \\n\\n4. To achieve anytime safety, the initial policy should already be safe. The paper could further illustrate how this is achieved. In Fig1, safe meta RL seems to start with safe policy in test env while MAML and meta-CRPO don't. I'm curious how this is achieved in practice. \\n\\n5. In Fig4, meta-CPO (blue dashed line) is not present in the humanoid task. Is there any reason why this method is missing from humanoid task only?\", \"questions\": \"Please refer to the Weakness section and I'm more than happy to discuss if there's anything I misunderstood or missed out.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the authors' response. 
After reviewing the response, considering other reviewers' comments, and re-checking the paper, I recognize that the study simply extends CRPO, a primal-based method, to meta-RL by replacing the primal optimization with dual optimization (See this [study algorithm 2](https://openreview.net/pdf?id=BbYu1wLwmj#page=6.66) and [CRPO algorithm 1](https://proceedings.mlr.press/v139/xu21a/xu21a.pdf#page=4.39)). I also agree with Reviewer bErq's feedback, and the manuscript overstates the method's performance. Therefore, I will maintain my score of 3: reject.\"}", "{\"title\": \"Response\", \"comment\": \"**Ergodicity.** Since the initial review, I have fully understood that this paper focuses on expected cumulative safety constraints. However, it is important to note that ergodicity should not be assumed in the context of safe RL. While Turchetta et al. (2016) and Wachi (2020) provide accessible examples, the general principle holds: ergodicity is not a valid assumption for safe RL in this setting. Specifically, if the safety cost $c(s,a)$ exceeds the available safety budget (i.e., the safety threshold minus the cumulative safety cost), it is clear that ergodicity cannot hold.\\n\\n**Experimental Results.** The experimental results presented in the paper are not aligned with the theoretical framework. In particular, there is a mismatch between the assumptions in the theory and the experimental settings. For consistency and to strengthen the paper's argument, it is essential to adjust the experimental setup so that it aligns with the newly introduced Assumption 2.\\n\\n**Conclusion.** As I noted in my initial review, I do not believe the issues raised in this paper can be addressed within the rebuttal period. After reviewing the relevant literature on safe RL, carefully examining the theoretical assumptions and results, and evaluating the experimental design, I conclude that this paper should be evaluated in a different review cycle. Therefore, I will maintain my original rating of 3: Reject.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response (1/3)\", \"comment\": \"We are grateful and indebted for the reviewer's time and effort invested to evaluate our manuscript, and for all the suggestions and reference recommendations to make our manuscript a better and stronger contribution. We answer the weaknesses and questions as follows.\\n\\n>**Weakness 1.1. While the proposed algorithm is strong, no intuition is provided when introducing the algorithm structure, making the flow of the paper not smooth.**\\n\\n**Answer:** Thanks for the comments. In the revised manuscript, we add several explanations after the proposed algorithms are introduced. Please refer to the revised manuscript for the details and the added sentences are highlighted in red color.\\n\\nFor example, after the safe policy optimization algorithm (1), we add the intuition of the algorithm design: \\\"The safe policy adaptation $\\\\mathcal{A}^{s}$ in problem (1) is inspired by the derivation of CPO, where both problem (1) and CPO aim to approximate the original safe RL problem. Specifically, the objective and constraint functions of problem (1) serve as upper bounds of the true objective and constraint functions $J_\\\\tau(\\\\pi)$ and $J_{c_i,\\\\tau}(\\\\pi)$ of the safe RL problem.\\\"\\n\\n>**Weakness 1.2. The connection between the proposed approach and Meta-CRPO and Meta-CPO is not clear. 
Although this is briefly discussed in the appendix, I suggest connecting the proposed method with the previous approaches also when introducing the new algorithm. This makes the readers much easier to understand what the key difference in the algorithm is that makes the proposed method perform better.**\\n\\n**Answer:** Thanks for the suggestions. In the revised manuscript, we add the discussion that connects the proposed method with Meta-CRPO and Meta-CPO, in Section 3.1 and Section 4.1. Please refer to the revised manuscript for the details and the added sentences are highlighted in red color.\\n\\n>**Weakness 2. The experimental results can be improved. The influence of the hyperparameters is not discussed or tested, and how to select the hyperparameters is unclear.**\\n\\n**Answer:** Thanks for the suggestions. In the revised manuscript (Appendix D.4 and Figure 6), we test the experimental results with different hyper-parameter settings, i.e. different allowable constraint violation constant $\\\\delta_{c_i}$, on two environments, including Half-cheetah and Car-Circle-Hazard. \\n\\nMoreover, we also include guidance about how to choose the hyper-parameters, including $\\\\delta_{c_i}$, $\\\\lambda$, and $\\\\lambda_{c_i}$ in Appendix D.4 of the revised manuscript. The guidance is presented as follows.\\n\\nGuidance of selecting $\\\\delta_{c_i}$: As indicated in both theoretical results in Section 5.2 and the experimental results in Figure 6, we choose a large $\\\\delta_{c_i}$ when the constraint satisfaction is not required to be strict, and a small $\\\\delta_{c_i} \\\\rightarrow 0$ when the constraint satisfaction is prioritized.\\n\\nGuidance of selecting $\\\\lambda$ and $\\\\lambda_{c_i}$: We set $\\\\lambda=\\\\lambda_{c_i}$ and tune them such that, the KL divergence of initial policy $\\\\pi$ and the adapted policy $\\\\pi^\\\\prime$ solved from the safe policy adaptation problem (1) is close to $0.03$. If the KL divergence is too large, the objective and constraint functions of problem (1) are not good approximations of the accumulated reward/cost functions, as indicated by Lemma 1. If the KL divergence is too small, the policy adaptation step of problem (1) is too small.\"}", "{\"comment\": \"Thank you for the detailed response and clarifications! The presentation and the rigor of the paper have been improved. I have no further questions.\"}", "{\"comment\": \"Thanks for your reply.\\n\\nIn terms of Assumption 2, in the revised manuscript, we no longer assume the ergodicity and only make an assumption on safe policies and a subset of the state space. We rewrite all the theorems and their corresponding proofs under the new relaxed assumption.\\n\\nIn terms of experimental results, our experiments are aligned with the theoretical framework and the new assumption 2. Specifically, assumption 2 only serves as an indicator to select the hyperparameter $\\\\lambda$ in Theorem 1. After Assumption 2 is replaced by the new relaxed assumption, only the constant $\\\\lambda$ in Theorem 1 is changed and all the problem settings are not changed. In the experiments, the agent can visit any state in a subset of the state space, which matches the relaxed Assumption 2. Therefore, there is no mismatch between the new assumption and the experimental settings.\"}", "{\"title\": \"Response (2/3)\", \"comment\": \">**Question 1. Line 69 \\\"Both meta-CRPO and meta-CPO provide positive upper bounds of the constraint violation\\\". 
Do the authors mean that meta-CRPO and meta-CPO cannot satisfy the constraint that the sum of $c_i$ is less than $d_i$?**\\n\\n**Answer:** Yes. In meta-CRPO and meta-CPO, the constraint violation converges to zero as the number of policy optimization steps becomes sufficiently large or when the KL divergence between the initial policy and the adapted policy is sufficiently small. Consequently, the $\\\\sum_t c_i(t) - d_i$ tends to zero. However, there is no guarantee that it is always smaller than zero. \\n\\n>**Question 2. Line 129: There is a sum over $a^{\\\\prime} \\\\in \\\\mathcal{A}$ when defining the softmax policy. Does it mean that the paper only considers discrete action space?**\\n\\n**Answer:** Thanks for your question. In this manuscript, the action space $\\\\mathcal{A}$ could be either discrete or continuous.\\nThe state space $\\\\mathcal{S}$ could be either a discrete space or a bounded continuous space. In the revised manuscript, we have clarified it and modified the definitions, theorems, and proofs that are not compatible with it, including the definition of the softmax policy.\\n\\nIn the experiments, all the environments have continuous state space and action space.\\n\\n>**Question 3. Line 137: Should the $J_{c_i, \\\\tau}(\\\\pi)$ be $J_{c_i}(\\\\pi)$ ?**\\n\\n**Answer:** Yes, thanks for pointing it out.\\n\\n>**Question 4. Line 192: it is said that setting $\\\\delta_{c_i}=0$ for all $i$ is too strict, and to alleviate the issue, the paper set $\\\\delta_{c_i}=0$. However, in the hyperparameters provided in Table 2, why $\\\\delta_{c_i}=0$ is set to in all environments?**\\n\\n**Answer:** Thanks for the question. The statement should be modified to \\\"When the requirement of the constraint satisfaction is not strict, setting $\\\\delta_{c_i}=0$ for all $i$ in problem (1) may overly restrict the policy update step. To enhance the algorithm\\u2019s flexibility, we set $\\\\delta_{c_i} \\\\geq 0$ as an allowable constraint violation in problem (1). \\\"\\n\\nIn the experiments, we aim to verify the anytime safe property of the proposed method. Therefore, we set $\\\\delta_{c_i}=0$ such that $J_{c_i,\\\\tau}(\\\\pi) \\\\leq d_{i,\\\\tau}$ always holds for the adapted policies $\\\\pi$.\\n\\nTo compare the experimental results under different values of $\\\\delta_{c_i}$, in the revised manuscript (Appendix D.4 and Figure 6), we add the experiments on two environments, including Half-cheetah and Car-Circle-Hazard. \\n\\n>**Question 5. Line 210: \\\"Safety cannot be guaranteed in each step\\\". What is the definition of \\\"step\\\" here?**\\n\\n**Answer:** The policy optimization algorithm, such as (1), requires the data collection by a single policy, i.e., the initial policy $\\\\pi_\\\\phi$, and produces the adapted policy $\\\\mathcal{A}^{s}(\\\\pi_\\\\phi, \\\\Lambda, \\\\Delta, \\\\tau)$. This is one step of the policy adaptation. \\n\\nIn the manuscript, we consider the anytime property of the policy optimization algorithm, i.e., any policy used to explore the environment should be safe. This anytime property requires that each step of the policy optimization algorithm should be safe, because the output policy from each step of the policy optimization algorithm needs to be safe.\\n\\n We have included the above definition of \\\"step\\\" in the revised manuscript (Section 3.1, lines 187-189).\"}", "{\"title\": \"Final response\", \"comment\": \"First of all, please note that this is the **discussion** phase. 
I do not think it is appropriate for authors to focus on only the reviewers' mistakes while ignoring the paper's weaknesses. In the first rebuttal, it was unclear whether the authors admitted their misunderstandings on ergodicity. I believe it is better to accept mistakes and discuss constructively to improve the paper.\\n\\n**Ergodicity.** It was confirmed that the errors have been corrected in the revised version of the paper.\\n\\n**Experimental settings.** I am not talking about hyperparameter $\\\\lambda$. I have been discussing how the dataset was collected. The current implementation is based on the old assumptions, and unsafe policies collected a large amount of unsafe trajectories with sufficient coverage. This is not consistent with the current theoretical results.\\n\\n**Regarding the comments by Reviewer 9TCU.** The degree of novelty may elicit different opinions from different people, but I understand the perspective of Reviewer 9TCU. The proposed algorithm can be seen as an incremental extension of CRPO.\\n\\n**Conclusion.** Again, this paper has suffered from initial critical errors resulting from unreasonable assumptions, which previously led to unreasonable theoretical results and now result in inconsistent empirical analyses. I do not believe this paper can be ready for publication during this discussion phase. Therefore, I will maintain my original rating of 3: Reject considering that this paper should be evaluated in a different review cycle.\"}", "{\"comment\": \"We are grateful and indebted for the reviewer's time and effort invested to evaluate our manuscript, and for all the suggestions and reference recommendations to make our manuscript a better and stronger contribution. We answer the weaknesses and questions as follows.\\n\\n>**Weakness 1. The related work section lacks thorough investigation, e.g., some multi-task/multi-objective safe RL methods; these can be helpful for meta-safe RL.** \\n\\n**Answer:** Although all of meta-safe RL, multi-task safe RL, and multi-objective safe RL consider multiple tasks in safe RL environments, however, the most important distinction between meta-safe RL and multi-task/multi-objective safe RL is that the agent in meta-safe RL is required to adapt to a new and unknown environment under few-shot data collection. Therefore, the policy adaptation algorithm is the most important part of meta-safe RL. This manuscript designs a novel policy adaptation algorithm in (1) which holds several benefits for the few-shot policy adaptation that the existing methods do not hold. In contrast, the multi-task/multi-objective safe RL learns the policies for multiple tasks during the training stage, where the policy adaptation is not required. Therefore, the multi-task/multi-objective can borrow existing policy optimization methods and do not need to design a new one.\\n\\nThanks for the comments. We have included the above discussion in Appendix A of the revised manuscript.\\n\\n>**Weakness 2. The paper is not well-written and appears to rely heavily on language models for content generation, e.g. the abstract.**\\n\\n**Answer:** We certify that we do not use language models to generate any content of the manuscript. We aim to keep the abstract as concise as possible.\", \"title\": \"Response (1/4)\"}", "{\"title\": \"Thanks very much for the suggestions\", \"comment\": \"We sincerely appreciate your effort and time in reviewing our paper, as well as your invaluable suggestions. 
We admit that we had a misunderstanding about ergodicity in the first submitted version, and we fully acknowledge the importance of the issue you raised. The suggestions and insights about the modification (Suggestion 2) are instrumental in improving our work for future submissions. We fully understand and respect your decision based on this issue and are grateful for your constructive feedback. Thanks again for your thoughtful and thorough review.\"}", "{\"summary\": \"This paper considers the safe meta-reinforcement learning (meta-RL) problem, and proposes a novel meta-RL algorithm that achieves 1) superior optimality, 2) anytime safety, and 3) high computational efficiency. The proposed algorithm is shown theoretically to achieve monotonic improvement, near optimality, and anytime safety. Experimental results are provided to support the 3 claims.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The considered problem is important. The proposed algorithm has 3 key advantages including superior optimality, anytime safety, and high computational efficiency. The key advantages are justified both theoretically and empirically.\", \"weaknesses\": [\"1. My most important concern about this paper is its writing.\", \"While the proposed algorithm is strong, no intuition is provided when introducing the algorithm structure, making the flow of the paper not smooth.\", \"The connection between the proposed approach and Meta-CRPO and Meta-CPO is not clear. Although this is briefly discussed in the appendix, I suggest connecting the proposed method with the previous approaches also when introducing the new algorithm. This makes the readers much easier to understand what the key difference in the algorithm is that makes the proposed method perform better.\", \"There is some ambiguity about some terms introduced in the paper. Please see Questions for details.\", \"2. The experimental results can be improved. The influence of the hyperparameters is not discussed or tested, and how to select the hyperparameters is unclear.\"], \"questions\": \"1. Line 69 \\\"Both meta-CRPO and meta-CPO provide positive upper bounds of the constraint violation\\\". Do the authors mean $d_i > 0$ or meta-CRPO and meta-CPO cannot satisfy the constraint that the sum of $c_i$ less than $d_i$?\\n\\n1. Line 129: There is a sum over $a'\\\\in\\\\mathcal A$ when defining the softmax policy. Does it mean that the paper only considers discrete action space?\\n\\n2. Line 137: Should the $J_{c_i, \\\\tau}(\\\\pi)$ be $J_{c_i}(\\\\pi)$?\\n\\n3. Line 192: it is said that setting $\\\\delta_{c_i} = 0$ for all $i$ is too strict, and to alleviate the issue, the paper set $\\\\delta_{c_i}\\\\geq 0$. However, in the hyperparameters provided in Table 2, why $\\\\delta_c$ is set to $0$ in all environments?\\n\\n4. Line 210: \\\"Safety cannot be guaranteed in each step\\\". What is the definition of \\\"step\\\" here?\\n\\n5. Line 245: \\\"The complete statement of Proposition 3 that...\\\". Is this Proposition 1 instead? Same question for Line 246 \\\"Proposition 3 shows that...\\\"\\n\\n6. Line 324: \\\"Note that the meta-gradient in (6) does not include the computations of Hessian and inverse of Hessian w.r.t. $\\\\phi$\\\". Could the authors clarify the reason why the meta-gradient in (6) avoids the Hessian and what is the cost of not using Hessian?\\n\\n7. Does assumption 2 imply that the state space considered in the paper needs to be discrete and finite?\\n\\n8. 
Can the proposed algorithm work if the max accumulated cost constraint is set to $0$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"Thanks very much for your time and effort in reviewing our work. Thanks for your suggestions to make our manuscript better. We answer the weaknesses and questions as follows.\\n\\n>**Weakness 1. The experiments portion of this paper is relatively short (esp in the main paper). I think including other experiments would further improve the paper. For example, try out different values of (allowable constraint violation) and observe the trade-off between reward and safety.**\\n\\n**Answer:** Thanks for the suggestions. In the revised manuscript (Appendix D.4 and Figure 6), we conduct the experiments with different allowable constraint violation constant $\\\\delta_{c_i}$ on two environments, including Half-cheetah and Car-Circle-Hazard. The experiment shows the trade-off between reward and safety.\\n\\n>**Weakness 2. The allowable constraint violation $\\\\delta_{c_i} seems like a hyper-parameter and I would appreciate further guidance on how to determine the appropriate value for a safety-constrained task. Perhaps performing the experiment suggested in item (1) above could help.**\\n\\n**Answer:** Thanks for the suggestions. As stated in the answer to Weakness 1, we have added the experiments. The following is the guidance on how to choose the hyper-parameter $\\\\delta_{c_i}$.\\n\\nAs indicated in both theoretical results in Section 5.2 and the experimental results in Figure 6, we choose a large $\\\\delta_{c_i}$ when the constraint satisfaction is not required to be strict, and a small $\\\\delta_{c_i} \\\\rightarrow 0$ when the constraint satisfaction is prioritized.\\n\\n>**Weakness 3. The paper does point out the inherent shortcoming of CPO being computationally expensive and proposes a dual method for safe policy adaptation. However, the safe policy adaptation problem outlined in Eq. 4 and 5 seems rather similar to Lagrangian-based online safe RL algorithm, e.g. RCPO, PPO-Lagrangian. The authors might want to illustrate how is this dual method particularly novel.**\\n\\n**Answer:** Eq. (4) and (5) solve the safe policy adaptation problem in (1) by the **dual method**. RCPO and PPO-Lagrangian solve the safe RL algorithm by the **primal-dual** method. Although both the proposed dual method for problem (1) and the primal-dual method in RCPO and PPO-Lagrangian are Lagrangian-based safe policy optimization algorithms, they are different. RCPO and PPO-Lagrangian are not suitable for this safe meta-RL problem and are much worse than the proposed method, even worse than CPO.\\n\\nEq. (4) and (5) aim to solve the safe policy adaptation problem in (1). As mentioned in Section 3.1, the safe policy adaptation (1) holds several benefits similar to CPO, including the safety guarantee for a single policy optimization step (using data collected on a single policy) and the monotonic improvement. Moreover, we derive the closed-form solution under certain Lagrangian multipliers for the optimization problem (1). Based on the derived closed-form solution of (1) (shown in (3)), we can use the dual method shown in (4)(5) to solve the safe policy adaptation problem in (1), which significantly reduces the computational complexity during the meta-training. 
\\n\\nIn contrast, RCPO and PPO-Lagrangian do not hold any of the benefits of CPO and the proposed algorithm. First, RCPO and PPO-Lagrangian use gradient ascent steps on the Lagrangian, which do not have the safety guarantee and the monotonic improvement in each policy optimization step, and therefore cannot guarantee anytime safety in the meta-test stage. Moreover, there is no closed-form solution for the policy optimization step in RCPO and PPO-Lagrangian, so the policy optimization step cannot be solved by the dual method, which leads to high computational complexity during the meta-training.\\n\\nThanks for the question and the literature recommendation. We have included the above discussion in the revised manuscript (Appendix C).\"}", "{\"metareview\": \"Safe Meta-Reinforcement Learning via Dual-Method-Based Policy Adaptation: Near-Optimality and Anytime Safety Guarantee\", \"summary\": \"The paper focuses on addressing the challenge of anytime safety in meta-reinforcement learning (meta-RL). The proposed framework integrates two modules: safe policy adaptation and safe meta-policy training, enabling the formulation of policies that are nearly optimal while guaranteeing safety constraints during exploration. By employing dual-method algorithms, the paper achieves computational efficiency and anytime safety, surpassing existing methods like Meta-CPO and Meta-CRPO in both theoretical guarantees and experimental results. Experiments across locomotion and navigation tasks validate the framework's performance, including reduced computational complexity and stricter adherence to safety constraints.\", \"comments\": \"This paper received four expert reviews, with scores 3, 3, 6, 6, and the average score is 4.50. The reviewers acknowledge multiple positive aspects of the paper, including the theoretically grounded approach to address anytime safety in meta-RL. However, the reviewers have concerns about several weaknesses. More than one reviewer pointed out that the proposed dual-method-based approach, while effective, shows significant overlap with existing Lagrangian-based techniques in safe RL (e.g., RCPO, PPO-Lagrangian), and the novelty of the method is not clearly articulated. Reviewer bErq has given critical comments about assumptions used in developing and analyzing the proposed method. More than one reviewer commented that the experimental evaluation can be greatly improved, including performing sensitivity analysis on the hyperparameters. One reviewer has also pointed out missing references, including significant contributions in multi-task or multi-objective safe RL, which could provide valuable context.\\n\\nWhile this paper has several commendable aspects, such as its focus on anytime safety and computational efficiency, it suffers from a lack of clarity, incomplete theoretical justification, and limited novelty. Addressing these issues would significantly enhance the paper's quality and impact.\", \"additional_comments_on_reviewer_discussion\": \"Please see the \\\"Comments\\\" in the meta-review.\"}", "{\"comment\": \">**Weakness 3. The method lacks novelty; based on my understanding, it does not present new contributions, including in the theoretical aspects. It extends primal-dual settings for meta-safe RL, similar to primal meta-safe RL (meta-CRPO).**\\n\\n**Answer:** The paper does not extend the **primal-dual method** in meta-CRPO to meta-safe RL. Instead, we design a new policy adaptation algorithm in problem (1) and solve it by the **dual method**. 
Although the primal-dual method and the dual method are both Lagrangian-based safe policy optimization algorithms, they are different, and the primal-dual method in meta-CRPO is much worse than the proposed dual method in the meta-safe RL problem.\\n\\n**The differences between the proposed method and meta-CRPO.**\\nThis paper designs a new policy adaptation algorithm in problem (1) and solves it by the dual method. The proposed algorithm holds (i) a safety guarantee for a single policy optimization step and (ii) a closed-form solution. The proposed policy adaptation algorithm is the first algorithm that simultaneously offers the two key properties.\\nBased on the derived closed-form solution, we use the dual method to solve problem (1). Meta-CRPO uses CRPO, a primal-dual-based method, for policy adaptation. That method holds neither of the two properties of our proposed method. In particular, it does not have a closed-form solution and cannot be solved by the dual method. It uses the primal-dual method to solve safe RL, which cannot guarantee safety for a single policy optimization step.\\nIn this paper, the meta-policy training algorithm in (2) aims to maximize the expected accumulated reward of the policies adapted from the meta-policy. We derive a Hessian-free approach to optimize the meta-policy. \\nIn meta-CRPO, the meta-policy is learned by minimizing the distance between the meta-policy and the task-specific policy, which does not consider the optimality and the safety of the task-specific policies adapted from the meta-policy.\\nIn the following, we elaborate on the advantages of the proposed methods over existing meta-safe RL methods, including meta-CPO and meta-CRPO.\\n\\n**The advantages of the proposed method over meta-CRPO.** The proposed algorithms offer three key advantages over existing safe meta-RL methods, including meta-CPO and meta-CRPO.\\n(i) **Superior optimality.** Our safe meta-policy training algorithm in (2) maximizes the expected accumulated reward of the policies adapted from the meta-policy. In contrast, the meta-training of meta-CRPO learns the meta-policy by minimizing the distance between the meta-policy and the task-specific policy, which does not consider the optimality and the safety of the task-specific policies adapted from the meta-policy.\\n(ii) **Anytime safety guarantee** during the meta-test. The safe meta-policy training produces a safe initial meta-policy by imposing the safety constraint. The safe policy adaptation imposes a constraint on the upper bound of the total cost, and thus is guaranteed to produce a safe policy at each iteration when the initial policy is safe. By integrating these two modules, anytime safety is achieved. In contrast, meta-CRPO employs CRPO, a primal-dual-based method, for policy adaptation, which does not have any safety guarantee in a single policy adaptation step. Thus, anytime safety cannot be guaranteed in meta-CRPO.\\n(iii) **High computational efficiency** in both the meta-test and meta-training stages. In the meta-test, the derivation of the closed-form solution of Problem (1) makes it efficient to solve.\\nIn contrast, meta-CRPO and meta-CPO require solving a constrained optimization problem, which is more computationally expensive than the solution of problem (1). 
In the meta-training, the close-formed solution of the policy adaptation (1) is used to derive a Hessian-free meta-gradient and reduces the computation complexity of the proposed algorithm to approach that in the single-level optimization. \\nIn contrast, the meta-CPRO requires that the task-specific optimal policies have been learned, which is impractical when the number of training tasks is large.\\nThe meta-CPO uses the bi-level optimization to learn the meta-policy, which requires the computation of Hessian and Hessian inverse. \\nWe conduct experiments on seven scenarios including navigation tasks with collision avoidance and locomotion tasks to verify these advantages of the proposed algorithms.\", \"title\": \"Response (2/4)\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the reply.\\n\\nWe politely disagree that \\\"the study simply extends CRPO, a primal-based method, to meta-RL by replacing the primal optimization with dual optimization\\\". We propose a new policy optimization algorithm in problem (1), which is specifically designed for the safe meta-RL problem. In particular, problem (1) along with its solution in Algorithm 2 is the first algorithm that holds (i) a safety guarantee for a single policy optimization step and (ii) a closed-form solution. The policy adaptation algorithm of CRPO lacks both a safety guarantee for individual policy optimization steps and a closed-form solution. Consequently, (a) it can not be solved by dual-method, which leads to high computational complexity, and (b) it cannot guarantee the anytime safety in the meta-test.\\n\\nWe also disagree that \\\"the manuscript overstates the method's performance\\\". In terms of the algorithm design, we clearly show our motivations of why the proposed method can be better than the existing methods, i.e., three key advantages of the proposed method including (i) superior optimality, (ii) anytime safety guarantee, and (iii) high computational efficiency. In the experiment, we do fair and complete comparisons with all the existing safe meta-RL algorithms on seven environment settings. The proposed method significantly outperforms the baseline methods.\"}", "{\"title\": \"Response (3/3)\", \"comment\": \"**Question 6. Line 245: \\\"The complete statement of Proposition 3 that...\\\". Is this Proposition 1 instead? Same question for Line 246 \\\"Proposition 3 shows that...\\\" **\\n\\n**Answer:** Yes, thanks for pointing it out. We have corrected the statement in the revised manuscript (line 252). \\n\\n>**Question 7. Line 324: \\\"Note that the meta-gradient in (6) does not include the computations of Hessian and inverse of Hessian w.r.t. $\\\\phi$.\\\" Could the authors clarify the reason why the meta-gradient in (6) avoids the Hessian and what is the cost of not using Hessian?**\\n\\n**Answer:** In many existing methods, such as MAML and meta-CPO, the computation of meta-gradient includes the computations of Hessian and inverse of Hessian w.r.t. $\\\\phi$. For example, in MAML,\\nthe adapted policy $\\\\pi_\\\\theta$ is obtained by one-step policy gradient ascent $\\\\theta=\\\\phi+ \\\\alpha \\\\nabla_\\\\phi J_\\\\tau(\\\\pi_\\\\phi)$. The meta-gradient, i.e., the gradient of the meta-objective $\\\\nabla_\\\\phi J_\\\\tau(\\\\phi+ \\\\alpha \\\\nabla_\\\\phi J_\\\\tau(\\\\pi_\\\\phi))$, includes the computations of Hessian. \\n\\nIn this manuscript, we derive the closed-form solution of the policy adaptation problem $\\\\mathcal{A}^{s}$ in (1), as shown in Proposition 1. 
Specifically, the adapted policy $\\\\pi_\\\\tau$ has an analytical expression $\\\\pi_\\\\tau=g(\\\\phi)$. The meta-gradient is $\\\\nabla_\\\\phi J_\\\\tau(g(\\\\phi))$. Therefore it avoids the Hessian in the meta-gradient computation. The meta-gradient holds a comparable computational complexity as the policy gradient. \\n\\n>**Question 8. Does assumption 2 imply that the state space considered in the paper needs to be discrete and finite?**\\n\\n**Answer:** Thanks for the question. In this manuscript, the state space $\\\\mathcal{S}$ could be either a discrete space or a bounded continuous space. If the state space $\\\\mathcal{S}$ is discrete, $\\\\nu^\\\\pi_{\\\\tau}(s)$ denotes the visitation probability on $s$. If the state space $\\\\mathcal{S}$ is continuous, $\\\\nu^\\\\pi_{\\\\tau}(s)$ denotes the visitation probability density on $s$.\\n\\n>**Question 9. Can the proposed algorithm framework if the max accumulated cost constraint is set to $0$ ?**\\n\\n**Answer:** The proposed algorithm can work when the maximal accumulated cost constraint is set to $0$. However, the theoretical results in Section 5 may not be valid for the case. In Assumption 1, we assume that the feasible set of problem (2) is not empty, i.e., there exists a softmax policy that is safe for all environments. However, when the maximal accumulated cost constraint is set to $0$, if the cost in some states is larger than $0$, the visitation probability to the state has to be strictly $0$ for a safe policy. Then, there does not exist a softmax policy whose visitation probability to any state is strictly $0$. In this case, Assumption 1 is not satisfied, then the theoretical results in Section 5 may not be valid for the case.\"}", "{\"title\": \"Response\", \"comment\": \"We are grateful and indebted for the reviewer's time and effort invested to evaluate our manuscript, and for all the suggestions and reference recommendations to make our manuscript a better and stronger contribution. We answer the weaknesses and questions as follows.\\n\\n>**Weakness 1. Safe RL studies should not assume ergodicity and thus actually consider \\\"reachability\\\" or \\\"returnability\\\" as in (Turchetta, 2016) and (Wachi, 2020). The authors try to propose an algorithm with almost surely. Hence, I feel Assumption 2 is incompatible with the nature of the proposed algorithm.**\\n\\n**Answer:** Papers (Turchetta, 2016) and (Wachi, 2020) study a safe RL scenario, which aims to develop a policy such that any state $s_t$ on the trajectory is safe, i.e., the cost $c(s_t)$ is small than a constant for any timestep $t$. In these two papers, the algorithms with the almost surely property are proposed, i.e., the probability of visiting an unsafe state is smaller than $\\\\epsilon$.\\n\\nThis manuscript, including the algorithm design and the theoretical results (shown in Section 5 in Theorem 1), studies the constrained MDP where the safety is defined as that the expected accumulated costs $J\\\\_{c\\\\_i,\\\\tau}(\\\\pi)=\\\\mathbb{E}\\\\_\\\\pi [\\\\sum\\\\_{t} \\\\gamma^t c(s\\\\_t)]$ are smaller than a threshold. Therefore, it allows for a safe policy to visit some unsafe states (the states with high costs), as long as the expectation of the accumulated costs along many trajectories is smaller than a threshold. Therefore, we are not trying to propose an algorithm with the almost surely property. 
As a result, this manuscript does not require some assumptions in papers (Turchetta, 2016) and (Wachi, 2020), such as reachability and the regularity of the constraint function $c(s)$. Moreover, as the proposed method is a model-free method, i.e., the knowledge about the cost $c(s)$ on state $s$ cannot be obtained until the state $s$ is visited, we require the assumption of the state visitation. Therefore, the proposed algorithm is compatible with our assumptions.\\n\\n>**Weakness 2. The weaknesses related to the validness of Assumption 2 and Remarks 1 and 2.**\\n\\n**Answer:** We agree that the ergodicity may be a strict assumption in the safe RL problem and Assumption 2 may require a large $B$. Thanks for your comments and suggestions. \\n\\nIn the revised manuscript, we relax Assumption 2 to be compatible with the safe RL setting. Following your suggestion, the relaxed assumption only assumes the safe policy has sufficient visitation to a subset of safe space. Here is the relaxed assumption.\\n\\n**Relaxed assumption:**\\n\\n>>There exists a set of states $\\\\mathcal{S}^{v} \\\\subseteq \\\\mathcal{S}$ and a constant $\\\\eta > 0$ such that, for any task $\\\\tau \\\\in \\\\Gamma$ and any safe policy $\\\\pi^{s} \\\\in$ { $ \\\\pi \\\\in \\\\Pi: J\\\\_{c_i,\\\\tau}(\\\\pi) \\\\leq d_i + \\\\delta_{max}, \\\\forall i = 1, \\\\cdots, p$ }, \\n$\\\\nu^{\\\\pi^s}_{\\\\tau}(s) \\\\geq \\\\eta $ for all $s \\\\in \\\\mathcal{S}^{v}$.\\n\\nThe new assumption supposes that there exists a set of states $\\\\mathcal{S}^{v}$ such that the safe policy can take sufficient visitation in the set.\\n\\nUnder the new assumption, we rewrite the proofs for Section 5, including Lemma 1, Proposition 4, Corollary 1, and Theorem 1. \\nOverall, with the new Assumption 2, the theoretical results remain unchanged except for some constants. In the proofs for the theoretical results in Section 5, Assumption 2 and the constant $B$ in Assumption 2 (in the old manuscript) are only used when proving Proposition 4, and the proofs of all remaining theoretical results are built on Proposition 4. Therefore, in the revised manuscript, based on the new assumption, we build a new Proposition 4, which uses the constant $\\\\eta$ and $\\\\alpha$ (in the new assumption) to replace the constant $B$. Then, we almost keep other proofs unchanged.\\n\\nThanks for the suggestion again. Please refer to Section 5 and Appendix F.4 in the revised manuscript for the details of the theoretical results and proofs. \\n\\n>**Question 2. Is it possible to relax Assumption 2 while maintaining the claims in Theorems?**\\n\\n**Answer:** Yes. As stated in the answer to Weakness 2, we relax Assumption 2 and maintain the theoretical results unchanged except for some constants.\"}", "{\"comment\": \"**The theoretical contribution of the proposed method over meta-CRPO.** In terms of theoretical contribution, the paper is the first to derive a comprehensive theoretical analysis regarding near optimality and anytime safety guarantees for safe meta-RL.\\nThe theoretical contribution of this paper over meta-CRPO and meta-CPO is shown in Table 1. First, we establish the theoretical basis of the algorithm design that guarantees anytime safety, i.e., zero constraint violation for any policy used for exploration. Second, we derive a lower bound of the expected accumulated reward of the adapted policies compared to that of the task-specific optimal policies, which shows the near optimality of the proposed safe meta-RL framework. 
\\nFinally, we demonstrate a trade-off between the optimality bound and constraint violation when the allowable constraint violation varies, which enables the algorithm to be adjusted to prioritize either safety or optimality.\\nIn meta-CRPO, the optimality bound is provided, but anytime safety is not guaranteed.\\nMeta-CPO provides neither an optimality bound nor anytime safety.\", \"title\": \"Response (3/4)\"}" ] }
Bb1ddVX8rL
Legendre-KAN : High Accuracy KA Network Based on Legendre Polynomials
[ "Wei Chen", "Qingfeng Xia", "JiaHui Sun" ]
Recently, the Kolmogorov-Arnold Network (KAN) has been proposed, significantly outperforming MLP in terms of interpretability and symbolic representation. In practice, KANs are required to fit data to extremely high precision. For instance, in typical applications of KAN like inferring precise equations from data and serving as solvers for partial differential equations, high accuracy is an intrinsic requirement. In the current architecture of KAN, cubic B-spline basis functions were selected as the approximation tools. However, the inflexibility of fixed degree and knots in B-splines restricts the adaptability of the activation functions. Due to these inherent limitations of B-spline functions, especially their low order and homogeneity, KAN still has room for improvement in accuracy. In this paper, we propose the Legendre-KAN, which enhances the degrees of freedom of the basis functions in the KAN. Compared to the traditional Spline-KAN, Legendre-KAN utilizes parameterized Legendre basis functions and normalization layers at the edges of the KAN. Benefiting from higher-order orthogonal polynomials, Legendre-KAN significantly outperforms the Spline-KAN in terms of accuracy. Extensive experiments demonstrate that Legendre-KAN achieves higher accuracy and parameter efficiency, with accuracy reaching 10-100 times that of Spline-KAN in some cases. For functions that can be symbolized, this leads to more correct results than Spline-KAN. Our approach effectively improves the accuracy of the mathematical relationships in KANs, providing a better solution for approximating and analyzing complex nonlinear functions.
[ "KA Network; Legendre Polynomials; Symbolic Representation; Function Approximation; High Accuracy" ]
Reject
https://openreview.net/pdf?id=Bb1ddVX8rL
https://openreview.net/forum?id=Bb1ddVX8rL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wOd74mQmEu", "v0Y4aH9h9X", "m5VUVv1qQW", "fe5bSLVbC4", "WLU7ASZw9o", "QMpJGS4d9Q", "LvIj3o2l1W", "KcfJHEbHdw", "F07zG9OOqT", "C6U1MwZwqm", "1O9qSio8Lp" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "decision", "official_comment" ], "note_created": [ 1732713161038, 1732530123309, 1730670403494, 1730677707926, 1733117213848, 1732530370317, 1730334632953, 1734539657151, 1730470544148, 1737523924961, 1732552382204 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8671/Authors" ], [ "ICLR.cc/2025/Conference/Submission8671/Authors" ], [ "ICLR.cc/2025/Conference/Submission8671/Reviewer_8ZCw" ], [ "ICLR.cc/2025/Conference/Submission8671/Reviewer_5qh9" ], [ "ICLR.cc/2025/Conference/Submission8671/Reviewer_8ADV" ], [ "ICLR.cc/2025/Conference/Submission8671/Authors" ], [ "ICLR.cc/2025/Conference/Submission8671/Reviewer_8ADV" ], [ "ICLR.cc/2025/Conference/Submission8671/Area_Chair_TpZP" ], [ "ICLR.cc/2025/Conference/Submission8671/Reviewer_Y68h" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8671/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer Y68h\", \"comment\": \"Thank you for your suggestions.\\n\\n# QA\\n\\n**Q1** * A deeper theoretical comparison of Legendre polynomials and B-spline functions is necessary to strengthen the argument for the proposed method.*\\n\\n**A1** Thank you for your valuable advice. We provide detailed experimental proof as to why Legendre-KAN is better than Spline-KAN. This may help to understand.\\n\\n**Q2** *\\u2022 Test the proposed method on additional tasks beyond symbolic representation to demonstrate its effectiveness in other domains and strengthen the overall claims.*\\n\\n**A2** Thanks for your review! We are already hard at work experimenting. However, due to equipment and time constraints, we currently only conduct experiments on the MNIST data set. In the future, we will conduct additional experiments to strengthen our claims.\\n\\n**Q3/4/5** *\\u2022 There are grammatical errors and incomplete sentences in the manuscript \\u2022Enhance the descriptions in figure captions for clearer understanding. \\u2022 Enhance the clarity of the figures, especially Figure 6, and ensure that all labels and legends are accurate. For instance, the labels in Figure 4b appear to be reversed.*\\n\\n**A3** Thank you for your careful reading! We are sorry for these errors. In the latest version, we have added explanations under the figures and updated Figure 4b. Due to tool limitations, the clarity of some figures may still be poor.\"}", "{\"title\": \"Response to Reviewer 8ADV\", \"comment\": \"**Dear Reviewer 8ADV:**\\n\\nThank you for your letter and for your comments concerning our manuscript. Those comments are all valuable and very helpful for revising and improving our paper, as well as the important guiding significance to our researches! We have studied comments carefully and have made correction which we hope meet with approval.\\n\\n**Q1** *Lack of detailed and reasonable explanations in section 2.2 \\\"B-spline functions and its fitting characteristic\\\".*\\n\\n**A1** Thank you for your positive comments and valuable suggestions to improve the quality of our manuscript. We updated some contents in Section 2 and Section 3. In summary, B-spline performs poorly in the jump regions of the function. 
Due to its locality and low degree, it lacks degrees of freedom in fitting these regions, which means that the number and degree of the basis functions are lower, and they cannot achieve high-precision fitting of the signal in this region. More importantly, the low fitting precision in the jump regions will affect the overall fitting effect.\\n\\n**Q2** *Architecture of Legendre-KAN is missing some key information.*\\n\\n**A2** Thanks for the great suggestions. We describe the complete activation function of Spline-KAN, which will help the reader to compare with the improvement of Legendre-KAN.\\n\\n**Q3** *Some irregularities in the paper writing.*\\n\\n**A3** Thank you very much for reading our article carefully! We feel sorry for our poor writings. We tried our best to improve the manuscript and made some changes to the manuscript.\\n\\nWe are very lucky to have met a responsible reviewer. Thank you very much for your description of the strengths of the manuscript. If there are any other problems, we will try our best to solve them. In addition, comparative experiments with other KANs are in progress. If time permits, we will add relevant experimental results in the appendix D.\"}", "{\"summary\": \"This paper proposes a new variant of the Kolmogorov-Arnold Network (KAN) called Legendre-KAN, designed to improve the accuracy of symbolic representation and function approximation. The main advancement here is the replacement of the traditional B-spline basis functions in KANs with parameterized Legendre polynomials, which offer higher degrees of freedom and are known for their global approximation capabilities and numerical stability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The shift from B-splines to Legendre polynomials appears well-motivated, and the results convincingly show an increase in accuracy for small problems.\", \"weaknesses\": \"1. There have been several KAN alternatives proposed at this point -- Fourier KANs, Wavelet KANs, RBF KANs, etc. There are no comparisons to those alternatives.\\n2. The paper starts with mention of areas which require high accuracy and precision, however the target experiments are extremely small scale. Even the \\\"complex\\\" nonlinear functions are simple polynomials where no-one uses neural networks for approximation.\\n\\nIn the current state, the paper requires significant revision before it can be considered for publication.\", \"questions\": \"See limitations for the main issues with the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce a new network called Legendre-KAN, which combines Legendre polynomials with the Kolmogorov-Arnold (KA) theorem. The motivation behind this development is that the standard Kolmogorov-Arnold Network (KAN), due to the inherent limitations of B-splines, cannot sufficiently reduce training error for complex tasks such as solving partial differential equations. 
The authors conduct a series of experiments in the field of symbolic regression to demonstrate that their approach outperforms both KAN and MLP in terms of test loss when fitting the equations.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The Legendre-KAN achieves lower test set losses on a set of symbolic expressions compared to the standard KAN.\", \"The authors provide an extensive overview of the method.\"], \"weaknesses\": [\"The current results are limited to only a small set of equations. In the field of symbolic regression, there are well-established benchmarks, such as SRBench and SRSD, on which novel approaches are typically tested. The authors should test on these benchmarks.\", \"No analysis with noise is performed.\", \"It is unclear what scientific contribution this approach brings. If the authors present this work as a contribution to symbolic regression, they should test it against strong and well-established baselines (such as Operon, DSR, and Neural Symbolic Regressors) rather than only KAN. Alternatively, if the authors are focusing solely on the KAN comparison, they should explain why they chose the symbolic regression task and what better performance on this task implies.\"], \"questions\": [\"Do you have any performance improvements in non-symbolic regression tasks compared to KAN? For example, in the abstract, you mention solving partial differential equations, and in your related work, you mention approaches where KAN has been used in the context of Graph Neural Networks and Transformers. Did you test your approach on these tasks and settings?\", \"What do the terms \\u201ctime ratio,\\u201d \\u201cEqual params,\\u201d \\u201cbest k/grid,\\u201d and \\u201cbest k\\u201d mean in the tables?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"After reading the responses and all the reviews, I will keep my score.\"}", "{\"title\": \"Response to Reviewer 8ZCw\", \"comment\": \"Dear Reviewer 8ZCw:\\n\\nThank you for your comments concerning our manuscript. Thank you for your time and effort in reviewing the manuscript. The explanation of Legendre-KAN's advantage in high-precision fitting has been revised in Sections 2.2 and 3.1. We are sorry for the careless writing in English.\\n\\n**Q1** *There have been several KAN alternatives proposed at this point -- Fourier KANs, Wavelet KANs, RBF KANs, etc. There are no comparisons to those alternatives.*\\n\\n**A1** Thank you very much for your advice! We will add experiments comparing with other KAN in the next edition, expected in a few hours. It seems that Legendre-KAN performs better than other KAN in symbolic experiments, which may be superior to Legendre's global polynomials versus his orthogonality. The code for comparison comes from the github release. There seemed to be some problems with the parameters of wav-kan, and we adopted the torch.rand() to improve its accuracy. If time permits, we will conduct more comparative experiments.\\n\\n[1] Xu J, Chen Z, Li J, et al. FourierKAN-GCF: Fourier Kolmogorov-Arnold Network--An Effective and Efficient Feature Transformation for Graph Collaborative Filtering[J]. arXiv preprint arXiv:2406.01034, 2024.\\n\\n[2] Bozorgasl Z, Chen H. Wav-kan: Wavelet kolmogorov-arnold networks[J]. 
arXiv preprint arXiv:2405.12832, 2024.\\n\\n**Q2** *The paper starts with mention of areas which require high accuracy and precision, however the target experiments are extremely small scale. Even the \\\"complex\\\" nonlinear functions are simple polynomials where no-one uses neural networks for approximation.*\\n\\n**A2** Thank you again for your positive comments and valuable suggestions to improve the quality of our manuscript. Appendix C has some experiments with complex functions. We are conducting some other experiments on the Feynman dataset and will add the results to the appendix if time permits. We would like to emphasize that Legendre-KAN has a very amazing accuracy on the polynomial part of signals or formulas, and the polynomial part exists in a large number of scientific studies, including quantum physics, analytical and computational chemistry. More importantly, in some symbolic regression tasks, the polynomial part of the signal is difficult to represent. In KAN, accurate polynomial results cannot be obtained by matching the optimal function, but Legendre-KAN can accurately express this type of activation function. In addition, Legendre-KAN still shows high accuracy in some non-polynomial parts of the signal, such as $sin(x)$ or the Bessel function, which is closely related to the high degree of freedom and orthogonality of Legendre polynomials and our improvement of the network.\"}", "{\"summary\": \"The Kolmogorov-Arnold Network (KAN) has been proposed as a significant improvement over MLPs in terms of interpretability and symbolic representation. However, in this paper, researchers have identified issues with the cubic B-spline basis functions used in KAN, specifically their inflexibility due to fixed degrees and knots. As a result, KAN struggles to reduce training error to the precision required for scientific research, leading to mathematical expressions that differ greatly from the true function, thereby limiting its practical applications.\\n\\nTo increase the flexibility of the basis functions in KAN, this paper introduces Legendre-KAN, which employs parameterized Legendre basis functions and normalization layers at KAN's edges. Extensive experiments show that Legendre-KAN achieves 10-100 times greater accuracy than KAN in symbolic representation tasks and in fitting complex nonlinear functions that cannot be easily symbolized. This improvement enhances the accuracy of mathematical relationships within KANs, offering a more effective solution for approximating and analyzing complex nonlinear functions.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. Researchers replaced the B-spline basis functions with Legendre polynomials in KAN, introducing Legendre-KAN. They highlight several benefits of using Legendre polynomials: (1) Legendre polynomials have a global polynomial function approximation space; (2) With higher order, Legendre polynomials can capture more complex patterns and relationships within the data; (3) By applying appropriate orthogonalization, Legendre polynomials are numerical stable.\\n2. Two technique trick to improve network: (1) Add a normalization layer to normalize the input activation values of each layer to interval $[0,1]$, which unifies the form of each layer's basis function. (2) Use smaller initialization parameters in order to solve the problem of gradient explosion.\\n3. 
Experiments in both symbolic representation tasks and fitting complex nonlinear functions that cannot be easily symbolized shows that Legendre-KAN outperforms KAN with 10-100 times greater accuracy.\", \"weaknesses\": \"1. Lack of detailed and reasonable explanations in section 2.2 \\\"B-spline functions and its fitting characteristic\\\".\\n\\t(1) In line 196, figure 3 and figure 4 are used to support the claim \\\"spline functions perform well in smooth regions but may introduce significant errors in certain areas\\\". However, there are two questions. First, figure 3 and figure 4 contradict each other in multiple places: in figure 3(b), Legendre polynomials fits better than B-spline, while in figure 4(b) it is the opposite; in figure 3(c), B-spline fits well, which is inconsistent with figure 4(c). Second, the term \\\"certain areas\\\" is vague, leading to confusion about attributing \\\"significant errors\\\" to the localized fitting caused by piecewise spline functions in the following sentences.\\n\\t(2) This part identifies the main drawback that the spline function is localized and piecewise polynomials. However, it is summarized as \\\"the activation functions with lower degrees of freedom prevents Spline-KAN from producing accurate results\\\" in line 254. The conception of \\\"degrees of freedom\\\" and the relationship among \\\"local/global\\\", \\\"order of polynomials\\\" and \\\"degrees of freedom\\\" are confusing.\\n2. Architecture of Legendre-KAN is missing some key information. It is not clearly specified which module is inherited from KAN and which is modified, or which settings follow KAN and which changes. For instance, in line 352, it is mentioned that \\\"we also combined the best-performing $b(x) = SILU (x)$ with the Legendre basis to enhance the smoothness of the high-order polynomial fitting results.\\\" However, it is not clarified the use of SiLU function has already been in the KAN.\\n3. Some irregularities in the paper writing.\\n\\t(1) Lack of definition or repeated definition for the symbols used in the paper. In line 113, a theorem is quoted, but none of the symbol is defined. Meanwhile, same symbols are used in section 3.2 of totally different meanings. Also, in figure 2(a), symbol $B$ with subscript is never defined in the paper.\\n\\t(2) Figures and tables lack brief explanations typically provided below them to aid understanding.\\n\\t(3) Some writing details such as spelling and punctuation errors. For example, \\\"worse\\\" is spelled as \\\"wrose\\\" in line 257 and a sentence ends with a comma in line 274.\", \"questions\": \"See Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes Legendre-KAN, a variant of the recently proposed Kolmogorov-Arnold Network (KAN) architecture. The proposed method replaces the cubic B-spline basis functions with parameterized Legendre polynomials. This modification aims to enhance the flexibility and accuracy of KANs, particularly in tasks involving symbolic regression and function approximation. The authors highlight the potential advantages of Legendre polynomials to address limitations in B-splines such as fixed degree and localized fitting. Experimental results suggest that Legendre-KAN achieves improvements in fitting accuracy compared to standard KANs.\\n\\nThe paper has several notable weaknesses that limit its overall contribution. 
A concern is the limited scope of the experiments, which are confined to a small set of equations and do not include established benchmarks (eg. SRBench, SRSD, etc..). Additionally, no meaningful analysis/comparison against well-established symbolic regression baselines is provided. While the work aims to improve KANs, it does not address comparisons with many other KAN variants that have recently been proposed (Fourier-KAN, Wavelet-KAN, and several others) leaving its relative advantages unclear. Furthermore, the theoretical explanations have been judged insufficient by the reviewers. \\n\\nFor the reasons outlined above, the panel of reviewers has decided not to accept the paper in its current form. While the idea of leveraging Legendre polynomials in KANs shows significant potential, the paper as it stands lacks the empirical validation and theoretical depth required for acceptance at ICLR. The panel strongly encourages the authors to address these weaknesses in a future submission, specifically by testing on established benchmarks, conducting more comprehensive comparisons, and enhancing the clarity and organization of the manuscript.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers requested several clarifications, which the authors addressed to some extent. However, the reviewers also noted that the experiments were too limited in scope. While the authors acknowledged this feedback and indicated they were working on expanding the experiments, they cited constraints in time and computational resources as reasons for the incomplete results.\"}", "{\"summary\": \"This paper proposes Legendre-KAN, a variant of the Kolmogorov-Arnold Network (KAN) that integrates Legendre polynomials into the activation functions to improve performance. By utilizing Legendre basis functions, along with skills such as normalization and reduced initialization parameters, the authors aim to enhance the accuracy and parameter efficiency of KAN in symbolic representation tasks. Empirical evaluations demonstrate that Legendre-KAN achieves better fitting accuracy compared to the standard KAN.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"\\u2022\\tIncorporating Legendre basis functions into KAN is a novel approach that appears to contribute positively to the network's performance.\\n\\u2022\\tThe empirical results seem to show that Legendre-KAN results in improved fitting accuracy.\", \"weaknesses\": \"\\u2022\\tThe paper lacks a theoretical analysis to substantiate why Legendre polynomials outperform B-spline functions.\\n\\u2022\\tThere are grammatical errors and incomplete sentences in the manuscript, notably in Section 3.1, which impede comprehension.\\n\\u2022\\tThe evaluation is confined to symbolic representation tasks. While recent study[1] shows that KAN is found to be better than MLP only in symbolic formula representation, but still inferior to MLP on other tasks such as machine learning, CV, NLP and audio processing. It would be more convincing if the proposed method with KAN and MLP could be tested on tasks other than symbolic representation.\\n\\u2022\\tThe structure of the paper is somewhat disorganized, with experimental results appearing in sections typically reserved for background and methodology.\\n\\n[1] Yu, Runpeng, Weihao Yu, and Xinchao Wang. 
\\\"Kan or mlp: A fairer comparison.\\\" arXiv preprint arXiv:2407.16674 (2024).\", \"questions\": \"\\u2022\\tA deeper theoretical comparison of Legendre polynomials and B-spline functions is necessary to strengthen the argument for the proposed method.\\n\\u2022\\tTest the proposed method on additional tasks beyond symbolic representation to demonstrate its effectiveness in other domains and strengthen the overall claims.\\n\\u2022\\tImprove the clarity and grammatical correctness of the writing to better convey the proposed method's details and implications.\\n\\u2022\\tEnhance the descriptions in figure captions for clearer understanding.\\n\\u2022\\tEnhance the clarity of the figures, especially Figure 6, and ensure that all labels and legends are accurate. For instance, the labels in Figure 4b appear to be reversed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer 5qh9\", \"comment\": \"Thank you very much for your instructive suggestions and questions! Your suggestion has inspired us a lot.\", \"for_weakness1\": \"We would like to quote the reply of **KAN**(https://openreview.net/forum?id=Ozo7qJ5vZi) :\", \"regarding_comparison_to_symbolic_regression_methods\": \"KAN, as a network-based method, has strong capability (in fitting even non-symbolic functions) that makes it unfavorable for standard symbolic regression benchmarks. For example, KAN ranks second-to-last in GEOBENCH (https://openreview.net/forum?id=TqzNI4v9DT), whereas the last-ranked one EQL is also a network-based model, which turned out to be useful at least for certain problems despite its inability to do well on benchmarks. On the one hand, we would like to explore ways to restrict KANs' hypothesis space so that KANs can achieve good performance on symbolic regression benchmarks. On the other hand, we want to point out that KANs have good features that are hard to evaluate with existing benchmarks: (1) interactivity. It is very hard to \\\"debug\\\" evolutionary-based symbolic regression methods since the evolution process is long and random. However, it is relatively easier to visualize the training dynamics of KANs, which gives human users intuition on what could go wrong. (2) The ability to ``discover'' new functions. Since most symbolic regression methods require the input of the symbolic library, they cannot discover things they are not given. For example, if the ground truth formula contains a special function but is not given in the symbolic library, all SR methods will fail definitely. However, KANs can discover the need for a new function whose numerical behavior suggests maybe it is a Bessel function; see Figure 23 (d) for an example.\\n\\nFurther, we want to emphasize that Legendre-KAN can fit those parts of polynomials accurately that cannot be represented by symbols and ubiquitous in scientific research, and even give their expressions.\", \"for_weakness2\": \"As suggested by the referee, we have tried our best to verify Legendre-KAN\\u2019s ability in other tasks. It is a shame that we do not have enough time to complete all of those tasks. \\n\\n# Q/A\\n\\n**Q1** *Do you have any performance improvements in non-symbolic regression tasks compared to KAN?*\\n\\n**A1** Thank you for your sincere advice! We're doing the best we can with the experiment. 
Comparative experiments for applications such as differential equations are ongoing.\\n\\n**Q2** *What do the terms \\u201ctime ratio,\\u201d \\u201cEqual params,\\u201d \\u201cbest k/grid,\\u201d and \\u201cbest k\\u201d mean in the tables?*\\n\\n**A2** Thank you for your question. We apologize for the lack of explanation of some indicators.\\n\\nAll the $\\textbf{ratios}$ represent the ratio of the indicator of a certain network to the same indicator of Legendre-KAN.\\n\\nFor each task, we test each network with different parameter counts. The parameter count at which Legendre-KAN achieves its lowest error is denoted $k_l$. We divide all the results of the networks into two parts. The $\\textbf{first part}$ contains the results whose parameter count is less than or equal to, or slightly greater than, $k_l$. The $\\textbf{other part}$ contains the results whose parameter count is less than or equal to the maximum number of basis functions specified. The former is used to compare networks when their parameter counts are equal, while the latter is used to compare the optimal fitting accuracy of each network under a given parameter budget. For the result with the lowest error in the first part, the number of basis functions is described as $\\textbf{Equal params}$. For the result with the lowest error in the second part, the number of basis functions is described as $\\textbf{best k/grid}$ in Spline-KAN and $\\textbf{best k}$ in other networks.\"}" ] }
BapOwAzicb
HOGT: High-Order Graph Transformers
[ "Xueqi Ma", "Xingjun Ma", "Chuang Liu", "Sarah Monazam Erfani", "James Bailey" ]
Inspired by the success of transformers on natural language processing (NLP) and computer vision (CV) tasks, graph transformers (GTs) have recently been proposed to boost the performance of graph learning. However, the attention mechanisms used in existing GTs face certain limitations in capturing crucial topological information or scaling to large graphs, due to their quadratic complexity. To address these limitations, in this paper, we propose a high-order information propagation strategy within the transformer architecture to simultaneously learn the local, long-range, and higher-order relationships of the graph. We first propose a flexible sampling method to extract communities from the graph and create new community nodes, including in particular a learnable community sampling method based on reinforcement learning. We then propose a three-step message-passing strategy dubbed HOGT to capture the local and higher-order information in the communities and propagate long-range dependency information between the community nodes to finally obtain comprehensive node representations. Note that as structural information has been flexibly integrated into our designed community-based message-passing scheme, HOGT discards the positional encoding which was thought to be important for GTs.
[ "Graph representation learning", "Graph Transformer" ]
Reject
https://openreview.net/pdf?id=BapOwAzicb
https://openreview.net/forum?id=BapOwAzicb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zvTjueiihL", "yeYoB6YJIn", "yFAuqJ8xnv", "we221B3iqD", "uG1gMOd5xt", "tSPlPjxnVj", "saEXgSDopq", "nCU9pxpIBA", "mLNrg7mAER", "lD40dQdZTd", "ht4e7fJVU5", "geQuAh3exg", "fqo23DDavW", "flalZLEZnZ", "e1wXGe2TMb", "YzOH7v1HQq", "YHUqb6ABRz", "XgGoSQEJaI", "VhYj88FWfM", "OHi0EemcOA", "NuY6G4LCfa", "MxDIVJ8Z4u", "MjNqTpIEZN", "HewWoWYDQj", "G1N9M3UAaf", "DGjJRATALZ", "6NU4UBivRp", "48rY1OawJE" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734707691104, 1730218194282, 1732112351231, 1729174703777, 1730423516602, 1732608050411, 1737523833221, 1732108498541, 1732510830683, 1733193710943, 1730663389537, 1732110085207, 1732682443472, 1732103777506, 1733014554645, 1732102238159, 1733216202934, 1732947232294, 1733225392782, 1733224865188, 1732510582419, 1732510381497, 1732113064227, 1732556854678, 1733129808349, 1732113009187, 1732111797919, 1732510476648 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7346/Area_Chair_o8UW" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_nhvf" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_RHuk" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_X76c" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_X76c" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_2D8Y" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_RHuk" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Reviewer_2D8Y" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ], [ "ICLR.cc/2025/Conference/Submission7346/Authors" ] ], "structured_content_str": [ "{\"metareview\": [\"Scientific Claims and Findings:\", \"This paper introduces a high-order graph transformer (HOGT) architecture designed for node classification. The approach involves sampling communities from the graph, creating community nodes, and facilitating message passing between the graph and community nodes. 
HOGT demonstrates competitive performance across various node classification tasks.\", \"Strengths:\", \"The introduction of a learnable community sampling method using reinforcement learning.\", \"The design of the proposed HOGT is well-reasoned.\", \"Weaknesses:\", \"Although HOGT is a reasonable design, it closely resembles existing works such as hierarchical graph transformers or those utilizing graph cluster structures [1] [2] [3] [4], as mentioned in lines 144\\u2013151 of the related work section. While the authors discuss the distinctions between HOGT and these works, the core differences do not appear to be substantial.\", \"[1] Wenhao Zhu, Tianyu Wen, Guojie Song, Xiaojun Ma, and Liang Wang. Hierarchical transformer for scalable graph learning, 2023.\", \"[2] Wenhao Zhu, Guojie Song, Liang Wang, and Shaoguo Liu. Anchorgt: Efficient and flexible attention architecture for scalable graph transformers, 2024.\", \"[3] Weirui Kuang, Z WANG, Yaliang Li, Zhewei Wei,and Bolin Ding. Coarformer: Transformer for large graph via graph coarsening, 2022.\", \"[4] Yujie Xing, Xiao Wang, Yibo Li, Hai Huang, and Chuan Shi. Less is more: on the over-globalizing problem in graph transformers, 2024.\", \"The AC concurs with Reviewer nhvf that similar performance can be achieved by older and simpler models. Consequently, the advantages offered by HOGT and similar graph transformer methods may not be significant.\", \"Most Important Reasons for Decision:\", \"Based on the weaknesses mentioned above.\"], \"additional_comments_on_reviewer_discussion\": \"In their rebuttal, the authors presented additional experimental results on graph tasks such as link prediction and graph classification, along with a more detailed sensitivity analysis of hyperparameters.\\n\\nAfter the rebuttal, Reviewers 2D8Y and X76c maintained their ratings at 6, Reviewer Rhuk increased their rating from 5 to 6, and Reviewer nhvf kept their rating at 3.\"}", "{\"summary\": \"The paper presents HOGT, a graph transformer that uses community-based processing to handle graph topology and computation complexity issues.\", \"the_method_consists_of_three_parts\": \"Community sampling using reinforcement learning, Message-passing within communities and information propagation between community nodes. HOGT achieves highly competitive results\\nacross node and graph classification tasks.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The technical part of the paper is good -- the method is of careful design and implementation.\"], \"weaknesses\": \"1. The problems of node classification and graph classification are well-studied in the past 10 years.\\nYou can find the old baselines like GAT are very competitive. Due to task saturation, HOGT shows relatively small improvements compared to these simple algorithms.\\n2. The theoretical part of the paper seems like mainly from a related work. Further analysis about HOGT is needed.\\n3. The method is too complex.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your patience! 
Here is the rest part.\\n\\n**Q3.**\\nA more detailed comparison of HOGT's computational complexity, including training time and memory usage, with other state-of-the-art models is needed.\\n\\n**A3.**\\nThe following table provides a detailed comparison of the training time per epoch, inference time, and GPU memory usage for popular GNN methods including GAT and APPNP, and GT models including Graphormer, LiteGT, polynormer, and HOGT on the Cora and ogbn-proteins datasets. Since the standard practice for model training on these datasets involves a fixed number of epochs, we report the training time per epoch to effectively compare training efficiency. \\n\\nThe results show that HOGT is orders of magnitude faster than Graphormer and polynormer, which require quadratic complexity for global attention. Additionally, HOGT significantly reduces memory consumption due to its efficient implementation of global attention with $\\\\mathcal{O}(N)$ complexity. Compared to GAT, APPNP, and SGFormer, HOGT strikes a balance between performance and efficiency. This result indicates the effectiveness of HOGT in scaling to large-scale datasets by introducing community nodes and a multi-step message-passing strategy.\\n\\n**Table: Efficiency comparison of HOGT and graph Transformer competitors w.r.t. training time per epoch, inference time and GPU memory (GB) cost on a A100. The missing results are caused by out-of-memory.**\\n\\n| **Method** | **Cora** Tr (ms) | **Cora** Inf (ms) | **Cora** Mem (MB) | **ogbn-proteins** Tr (s) | **ogbn-proteins** Inf (s) | **ogbn-proteins** Mem (MB) |\\n|----------------|------------------|-------------------|-------------------|--------------------------|---------------------------|----------------------------|\\n| GAT | 3.18 | 1.68 | 166.35 | - | - | - |\\n| APPNP | 3.32 | 1.49 | 35.57 | - | - | - |\\n| Graphormer | 90.58 | 71.26 | 359.25 | - | - | - |\\n| LiteGT | 15.57 | 5.77 | 227.69 | - | - | - |\\n| polynormer | 218.23 | 5.13 | 264.06 | 1.60 | 0.127 | 6429.06 |\\n| SGFormer | 3.66 | 1.42 | 50.87 | 1.26 | 0.098 | 228.19 |\\n| HOGT | 7.40 | 2.69 | 109.22 | 1.12 | 0.087 | 1284.32 |\\n\\n**Q4.**\\nCan the authors elaborate on the theoretical analysis of the model's expressiveness and how it relates to the approximation of global attention?\\n\\n**A4.**\", \"the_theoretical_foundation_for_approximating_self_attention_in_hogt_is_based_on_the_following\": \"(1) Message-Passing Neural Networks with community nodes (MPNN+CN) can act as a self-attention layer, and (2) under the three-step message-passing framework, the combination of MPNN+CN with self-attention achieves an approximation of full self-attention in graphs.\\n\\n**For Point (1):**\\nThe approximation error of self-attention by MPNN+CN can be bounded under the following assumptions. A detailed proof can be found in [1]. Below, we summarize the key assumptions and results:\\n\\nAssumption 1.\\n\\n$\\\\forall i \\\\in [n]$, $\\\\boldsymbol{x}_i \\\\in \\\\mathcal{X}_i$, and $|\\\\boldsymbol{x}_i| < C_1$, implying that the feature space $\\\\mathcal{X}$ is compact.\\n\\nAssumption 2.\\n\\n$|\\\\boldsymbol{W}_Q| < C_2$, $|\\\\boldsymbol{W}_K| < C_2$, and $|\\\\boldsymbol{W}_V| < C_2$ for the target layer $\\\\mathbf{L}$. 
Combined with Assumption 1, this ensures that the unnormalized attention $\\\\alpha^{\\\\prime}(\\\\boldsymbol{x}_i, \\\\boldsymbol{x}_j) = \\\\boldsymbol{x}_i^T \\\\boldsymbol{W}_Q (\\\\boldsymbol{W}_K)^T \\\\boldsymbol{x}_j$ is bounded, and $\\\\sum_j e^{\\\\alpha^{\\\\prime}(\\\\boldsymbol{x}_i, \\\\boldsymbol{x}_j)}$ is also upper and lower bounded.\\n\\nUnder these assumptions, MPNN+CN with $\\\\mathcal{O}(1)$ width and $\\\\mathcal{O}(1)$ depth can approximate ${{Performer}}$ and ${\\\\text{Linear-Transformer}}$ arbitrarily well, as shown in [1]. In Proposition 4.1, we consider the Linear Transformer for simplicity.\"}", "{\"summary\": \"The paper presents a unique approach to graph learning by integrating high-order information propagation within the transformer architecture. The paper empirically shows that HOGT achieves competitive results on node and graph classification tasks, especially on heterophilic datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea of using a learnable community sampling method with reinforcement learning for graph representation is novel. It combines the advantages of community detection and adaptive sampling.By addressing the limitations of existing graph transformers in terms of capturing topological information and scalability, this work contributes to the advancement of the field and opens up new research directions for further exploration.\", \"weaknesses\": \"Analysis on the sensitivity of the HOGT model's performance to its hyperparameters such as walk length, hidden dimension, and dropout.\\n\\nFurther exploration of the sampling method's performance in graphs with irregular or sparse structures would enhance the understanding of the model's robustness.\\n\\nA more detailed comparison of HOGT's computational complexity, including training time and memory usage, with other state-of-the-art models is needed.\", \"questions\": \"Could the authors provide more insights into how HOGT scales with graph size, especially in terms of memory usage and training efficiency?\\n\\n\\nCan the authors elaborate on the theoretical analysis of the model's expressiveness and how it relates to the approximation of global attention?\\n\\nHow sensitive is HOGT to its hyperparameters, particularly the number of communities and the reinforcement learning-based sampling method?\\n\\nCould the authors discuss how HOGT captures long-term dependencies in the graph and compare this with other methods that focus on long-range interactions?\\n\\nHow sensitive is the performance of HOGT to changes in hyperparameters such as the hidden dimension and dropout rate? \\nHave the authors experimented with different optimization algorithms for hyperparameter tuning, and if so, what were the results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a high-order graph transformer (HOGT) for graph learning tasks. HOGT introduces a flexible sampling method to extract communities from the graph and a three-step message-passing strategy to capture local, long-range, and higher-order relationships of the graph. 
The paper demonstrates the effectiveness of HOGT on node classification tasks and shows its superiority over other graph models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The paper introduces a novel approach, HOGT, that combines community-based sampling and message-passing to capture comprehensive information in graph learning.\\n\\n(2) HOGT achieves competitive results on various graph datasets, demonstrating its effectiveness in node classification tasks.\\n\\n(3) The paper provides a theoretical analysis of HOGT, showing its approximation capabilities and the relationship with other graph models.\", \"weaknesses\": \"(1) Domain Limitation of Datasets. Expanding the evaluation to include diverse domains, such as those in the TEG-DB datasets [1], which feature rich node and edge text, would strengthen the findings.\\n\\n(2) Narrow Applicability. The model\\u2019s applicability is somewhat restricted to specific tasks within graph domains, such as node classification. The authors should consider its potential for other important tasks, like link prediction.\\n\\n[1] \\\"TEG-DB: A Comprehensive Dataset and Benchmark of Textual-Edge Graphs.\\\" NeurIPS 2024.\", \"questions\": \"See weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer nhvf\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your time and effort in providing valuable feedback. Below, we have addressed each of the concerns and questions raised.\\n\\n**Q1.**\\nThe problems of node classification and graph classification are well-studied in the past 10 years. You can find the old baselines like GAT are very competitive. Due to task saturation, HOGT shows relatively small improvements compared to these simple algorithms.\\n\\n**A1.**\", \"please_allow_us_to_clarify_the_following_points\": \"1. In our experiments, HOGT consistently achieves notable performance improvements across various datasets compared to traditional baselines. For example, as shown in Table 2 of the paper, HOGT outperforms GAT by absolute margins of 4.15\\\\% and 4.46\\\\% on Pubmed and ogbn-arxiv, respectively. Furthermore, HOGT demonstrates significantly better performance on heterophilic datasets, achieving substantial margins of improvement over GAT and other traditional GCN-based methods. Additionally, we conducted a t-test to evaluate the statistical significance of HOGT's improvements, finding that the gains over the baselines are highly significant (p-value $\\\\ll$ 0.05).\\n\\n2. Traditional GCN-based methods generally perform well on homophilic datasets (as shown in Table 2 of the paper), while heterophily-based methods like H2GCN and GPRGNN excel on heterophilic datasets (Table 3 in the paper). GT models, on the other hand, deliver superior results on large-scale datasets such as ogbn-arxiv, roman-empire, and amazon-ratings, which require capturing long-range dependencies.\\n\\nHOGT distinguishes itself as a unified framework capable of capturing diverse types of information\\u2014local, global, and high-order relationships (Table 1 in the paper). 
Unlike prior models that focus on specific aspects, HOGT demonstrates versatility by effectively accommodating various graph types (graphs and hypergraphs), data characteristics (homophily and heterophily), data scales (small-scale and large-scale), and diverse graph tasks. This adaptability underscores the broader applicability of the HOGT framework.\\n\\n### Table: A summary of the capabilities of various graph models in processing graph information and types\\n\\n| Model | Local Information | Global Information | Higher-Order Information | Graph | Hypergraph |\\n|-----------------|-------------------|--------------------|--------------------------|-------|------------|\\n| GNN | \\u2713 | \\u2717 | \\u2717 | \\u2713 | \\u2717 |\\n| HGNN | \\u2713 | \\u2717 | \\u2713 | \\u2713 | \\u2713 |\\n| GT | \\u2713 | \\u2713 | \\u2717 | \\u2713 | \\u2717 |\\n| HOGT (ours) | \\u2713 | \\u2713 | \\u2713 | \\u2713 | \\u2713 |\\n\\n**Q2.**\\nThe theoretical part of the paper seems like mainly from a related work. Further analysis about HOGT is needed.\\n\\n**A2.**\", \"we_analyze_the_proposed_hogt_framework_from_two_perspectives\": \"(1) HOGT's ability to approximate self-attention as implemented in general Graph Transformers (GTs), and (2) HOGT's functionality as a high-order Graph Transformer leveraging community nodes (detailed in Appendix A.4 of the paper).\\n\\n**1. Approximation of Self-Attention in HOGT:**\", \"the_approximation_of_self_attention_in_hogt_is_demonstrated_as_follows\": [\"(1) Message-Passing Neural Networks with community nodes (MPNN+CN) can act as a self-attention layer.\", \"(2) Within our three-step message-passing framework, the combination of MPNN+CN and self-attention achieves an approximation of full self-attention in graphs.\", \"While point (1) has been established in related work [1], we focuse on demonstrating (2) in the paper. Specifically:\", \"In the **Graph Node-to-Community Node** (G2C-MP) step, message-passing through a newly introduced community node (connected to all nodes within the community) approximates self-attention within the community, as supported by Proposition 4.1 in the paper.\", \"In the **Community Node-to-Community Node** (C2C-ATTN) step, a self-attention mechanism propagates information among community nodes.\", \"Finally, in the **Community Node-to-Graph Node** (C2G-MP) step, global information from community nodes is propagated back to graph nodes via another MPNN+CN, approximating \\\"full\\\" self-attention.\"]}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your time and effort in reviewing our paper. Based on your suggestions, we have updated the manuscript (highlighted in blue) with the following additions:\\n\\nSensitivity analysis of HOGT with different hyperparameters.\\n\\nRobustness analysis of HOGT with sparse graph structures.\\n\\nComparisons of efficiency between HOGT and other methods.\\n\\nFurther analysis of the number of community nodes.\\n\\nWe hope the new experimental results and explanations address your concerns.\\n\\nAs the discussion deadline approaches, we kindly inquire if you have any further suggestions for improving our manuscript. Your feedback is invaluable, and we would greatly appreciate your guidance.\\n\\nIf our responses have sufficiently addressed your concerns, we kindly hope you might reconsider the rating. 
Thank you once again for your thoughtful review and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer nhvf\", \"comment\": \"Dear Reviewer nhvf,\\n\\nThank you once again for your review. We are pleased to see your recognition of the strengths in the technical aspects of our work.\\n\\nIn the updated manuscript, we have further demonstrated the effectiveness of the proposed HOGT across a variety of tasks, including the addition of a link prediction task (Appendix A.9) and an evaluation of its robustness on sparse graph structures (Appendix A.11). During the rebuttal phase, we successfully addressed the concerns of other reviewers, which led to either maintained positive evaluations or improved scores. We genuinely hope our detailed and comprehensive response will also address your concerns and contribute to an improved evaluation of our paper.\\n\\nThank you once again for your time and consideration. We sincerely look forward to your feedback.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper introduces HOGT (High-Order Graph Transformer), a new architecture that tackles key issues in existing graph transformers, especially around capturing topology and scaling to large graphs. The authors use a three-step message-passing process: sampling communities from the graph, creating community nodes as information bridges, and enabling message flow between graph and community nodes. This design removes the need for positional encoding, embedding structure naturally through communities. HOGT shows strong performance across different types of graphs, with impressive computational efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a well-founded architecture with a three-step message-passing strategy that effectively captures multi-scale information in graphs, handling local, global, and higher-order details. Theoretically, it\\u2019s shown that HOGT can approximate global attention and unify existing models, while the community-based design removes the need for positional encoding.\\n\\n2. HOGT also demonstrates strong versatility, performing well on various graph types (homophilic, heterophilic, and hypergraphs) and adapting to different community sampling methods, which enhances scalability across graph sizes. Efficiency is greatly improved, with computational complexity reduced from O(N\\u00b2) to O(m\\u00b2 + N), validated by experiments that show strong results over state-of-the-art methods, especially on challenging datasets.\", \"weaknesses\": \"1. The strict hierarchy in the three-step message-passing mechanism could introduce bottlenecks in information flow. By requiring all long-range communication to route through community nodes, the model risks distorting or weakening critical direct relationships between nodes\\u2014especially in tasks where pairwise connections hold essential information. The assumption that this hierarchical structure is universally beneficial may be too broad, as the paper offers little discussion on cases where direct node-to-node communication might better capture necessary details.\\n\\n2. The approach to initializing community nodes also feels underdeveloped and could pose challenges. Starting with random initialization may lead to instability and slower convergence, particularly in early training stages. 
Additionally, there's no clear strategy for aligning community node dimensionality with original node features, which seems like a significant gap. Given that these community nodes are crucial bridges for information flow, their initial setup could substantially impact the quality of the representations learned.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your patience! Here is the rest part.\\n\\n**2. HOGT as a Higher-Order Graph Transformer:** \\nWe analyze HOGT\\u2019s ability to capture higher-order representations through the role of community nodes, which function analogously to hyperedges in hypergraph convolutional networks.\\n\\n- **Encoding Complex Relationships:** \\n To capture higher-order correlations in complex graphs, HGCNs introduce hyperedges connecting multiple nodes. Similarly, HOGT introduces a community node for each community, representing multiple nodes sharing common properties (e.g., semantics or structural information). Like hyperedges, community nodes connect to every node within their community, enabling higher-order information encoding.\\n\\n- **High-Order Message-Passing:** \\n\\n**(Please refer to the relevant content in Appendix A.4 in the paper for clarification.)**\\n \\n Following the message-passing paradigm, HGCNs first aggregate information along hyperedges and then propagate it to nodes. In spectral-based HGCNs, convolution is defined as: \\n\\n$\\\\mathbf{\\\\Delta}=\\\\mathbf{D}_{v}^{-1/2} \\\\mathbf{S} \\\\mathbf{W} \\\\mathbf{D}_{e}^{-1} \\\\mathbf{S}^{T} \\\\mathbf{D}_{v}^{-1/2}$,\\n\\n$\\\\boldsymbol{h}^{(k)} =\\\\sigma\\\\left(\\\\mathbf{\\\\Delta} \\\\boldsymbol{Z}^{(k-1)} \\\\mathbf{\\\\Theta}^{(k)}\\\\right)$,\\n\\n where $\\\\mathbf{D}_v$ and $\\\\mathbf{D}_e$ are diagonal matrices representing vertex and hyperedge degrees, $\\\\mathbf{S}$ is the incidence matrix indicating node-hyperedge relationships, and $\\\\mathbf{W}$ represents hyperedge connections. This can be refined into three steps: node-to-hyperedge, hyperedge-to-hyperedge, and hyperedge-to-node:\\n\\n$ \\\\boldsymbol{a}_{e^h}^{(k)} = \\\\mathbf{S}^{\\\\top} \\\\boldsymbol{z}^{(k-1)}, $\\n\\n$ \\\\boldsymbol{a}_{e^h}^{(k)} = \\\\mathbf{W} \\\\boldsymbol{a}_{e^h}^{(k)}, $\\n\\n$ \\\\boldsymbol{z}^{(k)} = \\\\mathbf{S} \\\\boldsymbol{a}_{e^h}^{(k)}.$\\n\\n Similarly, HOGT\\u2019s three-step message-passing process\\u2014Graph Node-to-Community Node, Community Node-to-Community Node, and Community Node-to-Graph Node\\u2014mirrors this structure. Moreover, in HGCNs, the relationships of hyperedges can typically be ignored, i.e., $\\\\mathbf{W}=\\\\mathbf{I}$. In HOGT, the framework can also be simplified to two steps, excluding the Community Node-to-Community Node step.\\n\\nAt a high level, graph convolutional networks can be seen as special cases of hypergraph convolutional networks. In comparison, our proposed HOGT framework can be simplified to other existing GT models, demonstrating its adaptability and generalizability.\\n\\n[1] Chen Cai, et al. On the connection between MPNN and graph transformer. ICML 2023.\\n\\n**Q3.**\\nThe method is too complex.\\n\\n**A3.**\\nWhile HOGT employs a general framework with a multi-step message-passing strategy, its overall complexity remains low and manageable.\\n\\n**1. 
Computational Complexity:**\\nThe complexity of HOGT is $\\\\mathcal{O}(m^2 + N)$, where $m$ is the number of communities and $N$ is the number of graph nodes. Since $m \\\\ll N$, the complexity of HOGT approximates to $\\\\mathcal{O}(N)$, making it nearly linear. In contrast, general Graph Transformers (GTs) have a complexity of $\\\\mathcal{O}(N^2)$. This difference makes HOGT significantly more computationally efficient than general GTs, particularly for large-scale graphs.\\nRegarding the time complexity of community sampling, this process is performed as a preprocessing step using techniques like random walk or spectral clustering and does not incur additional computational costs during model training. Furthermore, HOGT maintains excellent performance even when treating the entire graph as a single community, demonstrating its flexibility.\\n\\n**2. Efficiency in Practice:**\\nExperimental results confirm that HOGT significantly reduces training time and memory usage, addressing concerns about its complexity. Table 6 in the paper reports the training time per epoch, inference time, and GPU memory consumption on the Cora and ogbn-arxiv datasets. To ensure a fair comparison, we report the training time per epoch, as fixed training epochs are standard for these datasets.\\nWe observe that HOGT is orders of magnitude faster than popular GT models, including Graphormer, LiteGT, and Polynormer. Additionally, HOGT's memory consumption is substantially lower due to its simplified global attention mechanism, which scales with $\\\\mathcal{O}(N)$ complexity.\\n\\nThese aspects demonstrate that while HOGT introduces a general and powerful framework, its design is computationally efficient, practical, and well-suited for large and complex graph datasets.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you again for your thoughtful comments and suggestions on our paper. We hope that our responses and updates to the manuscript have adequately addressed your concerns.\\n\\nAs the discussion period approaches its conclusion, we would greatly appreciate any additional feedback or suggestions you might have for improving the work. If there are any remaining points of concern or clarification needed, we are happy to provide further elaboration.\\n\\nWe deeply value your insights and thank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer X76c\", \"comment\": \"Dear Reviewer,\\n\\nThank you for taking the time to review our paper and provide valuable feedback. Below are our responses to your concerns.\\n\\n**Q1.**\\nDomain Limitation of Datasets. Expanding the evaluation to include diverse domains, such as those in the TEG-DB datasets [1], which feature rich node and edge text, would strengthen the findings.\\n\\n**A1.**\\nThank you for your insightful suggestion. We have conducted experiments on the TEG-DB datasets [1], specifically the Goodreads-Children and Goodreads-Crime datasets. The results, summarized in the table below, demonstrate that HOGT achieves either better or comparable performance compared to other methods. 
In these experiments, we treated the data in each batch as a community and introduced a community node for each batch, effectively extending the HOGT framework to these diverse domains.\\n\\n**Table: Performance Comparison Across Methods on Goodreads-Children.**\\n\\n| Methods | AUC (BERT-Large) | F1 (BERT-Large) | AUC (BERT-Base) | F1 (BERT-Base) | AUC (w/o Edge Text) | F1 (w/o Edge Text) |\\n|--------------------|----------------|---------------|---------------|--------------|-------------------|------------------|\\n| GeneralConv | 0.9810 | 0.9179 | 0.9821 | 0.9187 | 0.9825 | 0.9189 |\\n| GraphTransformer | 0.9807 | 0.9200 | 0.9811 | 0.9160 | 0.9776 | 0.9066 |\\n| HOGT | **0.9821** | **0.9216** | **0.9837** | **0.9208** | **0.9825** | **0.9289** |\\n\\n**Table: Performance Comparison Across Methods on Goodreads-Crime.**\\n\\n| Methods | AUC (BERT-Large) | F1 (BERT-Large) | AUC (BERT-Base) | F1 (BERT-Base) | AUC (w/o Edge Text) | F1 (w/o Edge Text) |\\n|--------------------|----------------|---------------|---------------|--------------|-------------------|------------------|\\n| GeneralConv | 0.9772 | 0.9079 | 0.9774 | 0.9077 | 0.9752 | 0.9101 |\\n| GraphTransformer | 0.9738 | 0.9079 | 0.9737 | 0.9079 | 0.9716 | 0.8983 |\\n| HOGT | **0.9776** | **0.9110** | **0.9776** | **0.9110** | **0.9768** | **0.9130** |\\n\\n[1] Zhuofeng Li, et al. TEG-DB: A Comprehensive Dataset and Benchmark of Textual-Edge Graphs. NeurIPS 2024.\\n\\n**Q2.**\\nNarrow Applicability. The model\\u2019s applicability is somewhat restricted to specific tasks within graph domains, such as node classification. The authors should consider its potential for other important tasks, like link prediction.\\n\\n**A2.**\\nIn addition to the experiments on the TEG-DB datasets for link prediction, we also applied HOGT to graph classification tasks to further demonstrate its versatility and superiority, as detailed in Appendix A.8.\\n\\n**Performance on Graph Classification**\\n\\nWe evaluated HOGT on several widely-used real-world datasets from the TU database [2] for graph classification tasks.\", \"nci1\": \"This dataset consists of 4,110 molecular graphs representing two balanced subsets of chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines.\", \"proteins\": \"This dataset includes 1,113 protein graphs, where each graph corresponds to a protein molecule. Nodes represent amino acids, and edges capture interactions between them.\\n\\nAs shown in Table 12 in the Appendix A.8, HOGT achieves state-of-the-art performance across all datasets. Compared to GT models such as GraphGPS, HOGT demonstrates the ability to encode more comprehensive and nuanced information in the graph, highlighting its effectiveness and broader applicability beyond node classification.\\n\\n[2] Christopher Morris, et al. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv:2007.08663, 2020.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful consideration and for taking the time to review our rebuttal. We greatly appreciate your feedback and your decision to increase the score in support of our paper.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer 2D8Y\", \"comment\": \"Dear Reviewer:\\n\\nThank you for your constructive comments and suggestions. They have been invaluable in helping us enhance the quality and clarity of our paper. 
Please find our point-by-point responses to your concerns below.\\n\\n**Q1.**\\nThe strict hierarchy in the three-step message-passing mechanism could introduce bottlenecks in information flow. By requiring all long-range communication to route through community nodes, the model risks distorting or weakening critical direct relationships between nodes\\u2014especially in tasks where pairwise connections hold essential information. The assumption that this hierarchical structure is universally beneficial may be too broad, as the paper offers little discussion on cases where direct node-to-node communication might better capture necessary details.\\n\\n**A1.**\", \"please_allow_us_to_clarify_pairwise_connections_and_their_importance_from_the_following_perspectives\": \"**Preservation of Pairwise Connections:**\\nIn scenarios where pairwise relationships hold critical information, the proposed HOGT framework explicitly addresses this during the final step\\u2014Community Node-to-Graph Node. At this stage, the representation of each graph node is updated by aggregating information from both its associated community nodes and its directly connected neighbors. This mechanism ensures that essential local connections are preserved and effectively incorporated into the model.\\n\\n**Flexibility in Community Optimization:**\\nHOGT is a general framework that allows for the optimization of the number of communities to suit different datasets. In extreme cases, treating the entire dataset as a single community naturally emphasizes direct node-to-node communication. Furthermore, the random walk sampling approach generates communities for only a subset of graph nodes, offering flexibility in balancing local and global interactions. This adaptability ensures that the model can capture pairwise relationships when necessary while still benefiting from the hierarchical structure.\\n\\nOverall, the hierarchical structure introduced by community nodes provides a versatile framework for balancing local and global interactions in graphs. While HOGT is designed to capture comprehensive information across various types of graphs, we acknowledge that achieving an optimal balance between local and global information in complex graph structures remains a challenge. This is a current limitation and a promising direction for future research.\\n\\n**Q2.**\\nThe approach to initializing community nodes also feels underdeveloped and could pose challenges. Starting with random initialization may lead to instability and slower convergence, particularly in early training stages. Additionally, there's no clear strategy for aligning community node dimensionality with original node features, which seems like a significant gap. Given that these community nodes are crucial bridges for information flow, their initial setup could substantially impact the quality of the representations learned.\\n\\n**A2.**\\nWe appreciate the reviewer\\u2019s insightful comments regarding the initialization of community nodes and its potential impact on stability and convergence.\\n\\n**Initialization of Community Nodes:**\\nThe introduced virtual nodes (community nodes) can be initialized using either zero vectors or random initialization. 
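*(Editor's illustrative aside, not part of the authors' original rebuttal.)* A minimal sketch of the two initialization options mentioned above, plus the pooling alternative the authors describe as possible future work, might look as follows; the tensor shapes, the `membership` format, and the scaling constant are assumptions for illustration only.

```python
import torch

def init_community_nodes(num_communities, dim, mode="zero", node_feats=None, membership=None):
    # Sketch of the community-node initialization options discussed above.
    if mode == "zero":          # zero-vector initialization
        return torch.zeros(num_communities, dim)
    if mode == "random":        # small random initialization
        return torch.randn(num_communities, dim) * 0.02
    if mode == "mean":          # pooling alternative mentioned as possible future work
        # membership: list of LongTensors holding the node indices of each community (assumed format)
        return torch.stack([node_feats[idx].mean(dim=0) for idx in membership])
    raise ValueError(f"unknown mode: {mode}")
```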
In our experiments, we observed that both approaches resulted in similar final performance after 200 epochs, suggesting that the choice of initialization method has minimal impact on the final outcomes.\\n\\n**Stability and Convergence:**\\nTo address your concerns about stability, we have conducted experiments with 10 independent runs using different random seeds. The results demonstrated low variance, indicating that our framework is robust and stable across various initialization conditions.\\nWhile random or zero initialization is effective, an alternative approach\\u2014such as using max or mean pooling of the features of graph nodes within a community to initialize community node features\\u2014could potentially accelerate convergence by providing a more informed starting point for community node embeddings. However, this method introduces additional computational overhead, which we have deliberately avoided in our current implementation to maintain efficiency.\\n\\nWe will update our paper to discuss the potential advantages of alternative initialization strategies and the trade-offs involved, ensuring a comprehensive understanding of our approach.\"}", "{\"comment\": \"Dear Chairs and Reviewers,\\n\\nHope this message finds you well.\\n\\nWith the closing of the discussion period, we present a brief summary of our discussion with the reviewers as an overview for reference. First of all, we thank all the reviewers for their insightful comments and suggestions. We are encouraged by the positive feedback, as highlighted below:\\n\\nR1. \\\"The paper introduces a well-founded architecture...\\\", \\\"Theoretically, it\\u2019s shown that HOGT can approximate global attention and unify existing models,...\\\", \\\"demonstrates strong versatility, efficiency, ...\\\".\\n\\nR2. \\\"The paper introduces a novel approach...\\\", \\\"demonstrating its effectiveness in node classification tasks\\\", \\\"provides a theoretical analysis of HOGT,...\\\".\\n\\nR3. \\\"The technical part of the paper is good -- the method is of careful design and implementation.\\\"\\n\\nR4. \\u201cThe idea is novel\\u201d\\uff0c \\u201ccontributes to the advancement of the field and opens up new research directions for further exploration.\\u201c.\\n\\nWe have carefully addressed all the comments and provided detailed responses. Since we did not receive specific questions or response from Reviewer nhvf during rebuttal, we summarize the main concerns of other reviews and outline the corresponding updates in the revised manuscript:\\n\\n**The approach to initializing community nodes.** We added an analysis of community node initialization in Appendix A.7 and shown that the existing initialization approach is reasonable.\\n\\n**The performance of HOGT on link prediction.** Additional experiments on the TEG-DB dataset for link prediction are included in Appendix A.9, further highlighting the effectiveness of HOGT.\\n\\n**Hyperparameter sensitivity and robustness evaluations.** We conducted hyperparameter sensitivity and robustness evaluations, presented in Appendices A.10 and A.11, respectively. 
The results confirm the robustness of the proposed HOGT.\\n\\nBased on the discussion with reviewers, we also present a brief summary of our paper as follows.\\n\\n**Observation:** Existing graph models struggle to effectively capture the complex structural relationships in the graph for different graphs and data types, while also providing theoretical support.\\n\\n**Solution:** We propose a flexible sampling method followed by a three-step message-passing framework in GTs to capture comprehensive information achieving high expressiveness for graph representation learning.\\n\\n**Results:** The effectiveness of our framework has been demonstrated on various graph types (graph and hypergraph), data types (homophily and heterophily), data scales (same-scale and large-scale), and different graph tasks (node classification, graph classification, and link prediction).\\n\\n**Highlights:**\\n\\n- Introduced a higher-order message-passing strategy with flexible sampling methods. \\n\\n- Unified message-passing and GTs by constructing communities and introducing new community nodes.\\n\\n- Provided theoretical proof that the three-step message-passing framework with newly introduced community nodes achieves global attention akin to general transformers.\\n\\n- Demonstrated the versatility and robustness of HOGT through extensive experiments.\\n\\n\\nThanks again for your efforts in the reviewing and discussion. We appreciate all the valuable feedback that helped us to improve our submission.\\n\\nSincerely\\n\\nThe Authors\"}", "{\"title\": \"Official Comment by Reviewer RHuk\", \"comment\": \"Thank you for your rebuttal. I have no further questions. I will increase my score.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful review and for taking the time to consider our rebuttal. We are pleased to see that we have adequately addressed your concerns, with no further questions remaining.\\n\\nAs the discussion deadline approaches, and given that all comments have been thoroughly addressed, we kindly request you to reconsider your rating. Once again, thank you for your insightful review and valuable time.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks once again for supporting our paper with a positive score. While you have no questions about this paper, we noticed that your confidence level remains low. As the discussion period nears its conclusion, we kindly hope you to consider reevaluating.\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks a lot to you for your time and effort in reviewing our paper. For each question you have raised, we have thoughtfully provided our explanations and hope the explanations can alleviate your uncertainty.\\n\\nAs the discussion deadline is approaching, we would like to inquire if you have any further suggestions for improving our manuscript. We would greatly value your input and appreciate your guidance.\\n\\nIf our responses have sufficiently addressed your concerns, we kindly hope you might reconsider the rating. Thank you for your time and consideration.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your time and effort in reviewing our paper. 
Based on your feedback, we have made several updates to the manuscript (highlighted in blue), including:\\n\\nAdding a limitation analysis of HOGT in the Conclusion.\\n\\nIncluding an analysis of community node initialization.\\n\\nAs the discussion deadline approaches, we kindly inquire if you have any further suggestions for improving our manuscript. We deeply value your input and would greatly appreciate your guidance.\\n\\nThank you once again for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thanks again for your patience! Here is the rest.\\n\\n**Q6.**\\nCould the authors discuss how HOGT captures long-term dependencies in the graph and compare this with other methods that focus on long-range interactions?\\n\\n**A6.**\\nIn HOGT, long-range dependencies between nodes are effectively captured through the use of community nodes and the three-step message-passing scheme. Community nodes act as intermediaries that aggregate and propagate information from their associated nodes, enabling efficient communication between distant nodes. Specifically, any two graph nodes can interact within fewer than three hops via two community nodes, facilitating the propagation of long-range dependencies with minimal computational overhead.\\n\\nCompared to HOGT, general Graph Transformer (GT) models address long-range interactions by establishing direct connections between distant nodes through global attention. While this approach is effective for modeling global dependencies, it often incurs substantial computational costs due to the quadratic complexity of global attention. Alternative methods attempt to mitigate this by introducing anchor nodes [1, 2] or supernodes [3, 4], derived from the graph structure to connect distant nodes. However, these methods heavily depend on the initial graph structure, which may inadequately represent and balance critical information in complex or large-scale graphs. More discussion can be found in the Related Work section.\\n\\nEmpirical results on heterophilic and large-scale datasets, which inherently require modeling long-range dependencies, highlight HOGT's effectiveness. Our experiments demonstrate that HOGT consistently outperforms other GT models capable of capturing global information. This performance gain arises from HOGT's ability to efficiently capture and propagate long-range dependencies through its community-node mechanism, while maintaining a lower computational complexity compared to traditional global attention-based methods.\\n\\n[1] Wenhao Zhu, et al. Anchorgt: Efficient and flexible attention architecture for scalable graph transformers. IJCAI, 2024.\\n\\n[2] Bo Jiang, et al. Agformer: Efficient graph representation\\nwith anchor-graph transformer. arXiv preprint arXiv:2305.07521, 2023.\\n\\n[3] Weirui Kuang, et al. Coarformer: Transformer for\\nlarge graph via graph coarsening, 2022. In URL https://openreview. net/forum, 2021.\\n\\n[4] Wenhao Zhu, et al. Hierarchical transformer\\nfor scalable graph learning. IJCAI, 2023.\"}", "{\"comment\": \"Thank you for your rebuttal. I have no further questions. I will maintain my score.\"}", "{\"comment\": \"We deeply appreciate the valuable feedback from all reviewers. We are pleased to note that during the rebuttal phase, we successfully addressed the concerns of other reviewers, which led to either maintained positive evaluations or improved scores. 
We genuinely hope our detailed and thoughtful response will also resolve your concerns and help improve the score of our paper. We sincerely look forward to your feedback and thank you once again for your time and consideration!\"}", "{\"comment\": \"Thank you very much for your patience! Here is the rest.\\n\\n**For Point (2):**\", \"the_three_step_message_passing_framework_enables_hogt_to_approximate_full_self_attention_as_follows\": \"- Graph Node-to-Community Node (G2C-MP): Message-passing through a newly introduced community node, which is connected to all nodes in the community, approximates self-attention within the community. This is supported by Proposition 4.1 in the paper.\\n\\n - Community Node-to-Community Node (C2C-ATTN): A self-attention mechanism propagates information among community nodes, enabling communication between communities.\\n\\n - Community Node-to-Graph Node (C2G-MP): Global information in the community nodes is propagated back to the graph nodes through another MPNN+CN, effectively approximating \\\"full\\\" self-attention across the graph.\\n\\nWe focuse primarily on demonstrating Point (2) within the paper, highlighting the interplay between the three-step framework and global attention approximation.\\n\\n[1] Chen Cai, et al. On the connection between MPNN and graph transformer. ICML 2023.\\n\\n**Q5.**\\nHow sensitive is HOGT to its hyperparameters, particularly the number of communities and the reinforcement learning-based sampling method?\\n\\n**A5.**\\n\\nWe analyzed the sensitivity of HOGT to the number of communities using two unlearnable sampling methods (details in Appendix A.8 of the paper). The results reveal the following trends: \\n\\n - For **HOGT (random walk)** on the Cora dataset, increasing the number of communities initially improves performance. This improvement occurs because a larger number of communities extracted by random walk allows HOGT to encode more localized higher-order information. \\n - For **HOGT (spectral clustering)** on Cora, we observe a more complex trend: performance initially decreases with more communities, followed by an improvement. This pattern suggests the presence of critical substructures in the graph that spectral clustering captures effectively when the community structure aligns with these substructures. \\n - On the Wisconsin dataset, HOGT demonstrates stable performance across different numbers of communities for both methods. Since Wisconsin is a small-scale dataset, introducing a community node effectively encodes the global information without significant dependence on the number of communities. \\n\\n**Reinforcement Learning (RL)-Based Sampling:** \\n The RL-based sampling method adaptively learns the optimal number of communities, eliminating the need to predefine this hyperparameter. This approach adds flexibility to HOGT and ensures robust performance without requiring extensive manual tuning of the number of communities. \\n\\nThese observations demonstrate that while the number of communities can influence HOGT\\u2019s performance, the RL-based sampling method provides a practical solution for optimizing this parameter adaptively.\"}", "{\"title\": \"Response to Reviewer RHuk\", \"comment\": \"Dear Reviewer,\\n\\nThanks for reviewing our paper and the valuable comments. Please find our point-by-point response to your concerns below.\\n\\n**Q1.**\\nAnalysis on the sensitivity of the HOGT model's performance to its hyperparameters such as walk length, hidden dimension, and dropout. 
Have the authors experimented with different optimization algorithms for hyperparameter tuning, and if so, what were the results?\\n\\n**A1.**\\nWe conducted an analysis of HOGT's sensitivity to various hyperparameters, including walk length, hidden dimension, dropout, and optimizer. The results are summarized in the following table. From the findings, we observe that HOGT, when using the random walk sampling method, demonstrates low sensitivity to walk length and dropout. However, on the Cora dataset, the model shows higher sensitivity to the hidden dimension and optimizer, indicating its importance in influencing performance. In our experiments, AdamW is adopted for HOGT and other GT models.\\n\\n**Table: The performances of HOGT with different hyperparameters.**\\n\\n| **Hyperparameters** | **Cora** | **Citeseer** | **Pubmed** |\\n|-----------------------|----------|--------------|------------|\\n| **Hidden dimension** | | | |\\n| 128 | 85.45 | 76.63 | 88.42 |\\n| 256 | 88.13 | 76.76 | 89.20 |\\n| **Dropout** | | | |\\n| 0 | 87.73 | 76.98 | 88.40 |\\n| 0.2 | 86.92 | 76.76 | 88.40 |\\n| 0.5 | 87.05 | 76.93 | 89.02 |\\n| **Walk length** | | | |\\n| 3 | 86.59 | 76.68 | 88.39 |\\n| 5 | 87.59 | 76.73 | 88.41 |\\n| 10 | 87.45 | 76.98 | 88.45 |\\n| **Optimizer** | | | |\\n| Adam | 86.91 | 76.08 | 88.24 |\\n| AdamW | 88.11 | 76.74 | 89.20 |\\n\\n**Q2.**\\nFurther exploration of the sampling method's performance in graphs with irregular or sparse structures would enhance the understanding of the model's robustness.\\n\\n**A2.**\\nCiteseer and Pubmed can be considered sparse graphs, with node-to-edge ratios of 1.4 and 2.2, respectively. To further evaluate the robustness of the proposed HOGT in handling graphs with fewer edges, we conducted additional experiments by randomly removing 10\\\\% and 20\\\\% of the edges from Citeseer and Pubmed. The results, presented in the following table, demonstrate that HOGT, when using the random walk sampling method, maintains strong performance even under these conditions. This highlights the robustness of HOGT in processing graphs with irregular or sparse structures.\\n\\n**Table: The performance of HOGT (randomwalk) with sparse structure on Citeseer and Pubmed. The edge ratio means the reserving ratio of original edges.**\\n\\n| **Method** | **Edge Ratio** | **Citeseer** | **Pubmed** |\\n|------------|-----------------|--------------|------------|\\n| HOGT | 80% | 74.52 | 85.96 |\\n| | 90% | 75.07 | 86.71 |\\n| | 100% | 76.74 | 89.20 |\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your time and effort in reviewing our paper. Based on your feedback, we have added more experiments on other domains (TEG-DB) for the link prediction task to further demonstrate the effectiveness of HOGT in the updated manuscript (highlighted in blue).\\n\\nAs the discussion deadline approaches, we kindly inquire if you have any further suggestions for improving our manuscript. We deeply value your input and would greatly appreciate your guidance.\\n\\nThank you once again for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}" ] }
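*(Editor's illustrative aside, referring back to the sparse-structure robustness experiment reported above.)* A minimal sketch of how the 90%/80% edge-retention settings could be produced is shown below; the edge-list format, the seed handling, and the function name are assumptions rather than the authors' code.

```python
import random

def drop_edges(edges, keep_ratio=0.9, seed=0):
    # edges: iterable of (u, v) pairs; keeps roughly `keep_ratio` of them at random,
    # mirroring the 90% / 80% edge-retention settings in the table above.
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < keep_ratio]
```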
BaMkS6E2Du
Deliberate Reasoning for LLMs as Structure-aware Planning with Accurate World Model
[ "Siheng Xiong", "Ali Payani", "Yuan Yang", "Faramarz Fekri" ]
Enhancing the reasoning capabilities of large language models (LLMs) remains a key challenge, especially for tasks that require complex, multi-step decision-making. Humans excel at these tasks by leveraging deliberate planning with an internal world model to simulate the potential outcomes of various actions. Inspired by this, we propose a novel multi-step reasoning framework for LLMs, referred to as Structure-aware Planning with Accurate World Model (SWAP). Unlike previous approaches that rely solely on Chain-of-Thought (CoT) reasoning in natural language, SWAP incorporates structural information to guide the reasoning process via a world model and provides a soft verification mechanism over the steps. Moreover, SWAP overcomes the challenge of accurate world state predictions in complex reasoning tasks by introducing a Generator-Discriminator architecture, which enables more reliable world modeling. Specifically, the generator predicts the next state, and the discriminator ensures alignment with the logical consistency required by the problem context. SWAP also encourages the policy model to explore a broad range of potential actions to prevent premature convergence. By resolving the bottlenecks of generation diversity for both actions and states using diversity-based modeling (DBM) and improving discrimination accuracy through contrastive ranking (CR), SWAP significantly enhances the reasoning performance of LLMs. We evaluate SWAP across diverse reasoning-intensive benchmarks including math reasoning, logical reasoning, and coding tasks. Extensive experiments demonstrate that SWAP achieves substantial improvements over the baselines and consistently outperforms existing methods.
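*(Editor's illustrative aside.)* The abstract above describes a generator-discriminator planning loop with diversity-based action sampling and contrastive ranking; a hypothetical pseudocode sketch of one such step is given below. The function names and the scoring interface are assumptions for illustration, not the authors' implementation; the breadth of 8 follows the rollout limit the authors mention in their implementation details later in the discussion.

```python
def swap_step(state, policy, world_model, discriminator, breadth=8):
    # 1) The policy proposes a diverse set of candidate actions (diversity-based modeling).
    actions = [policy.sample(state) for _ in range(breadth)]
    # 2) The world-model generator predicts the next state for each candidate action.
    candidates = [(a, world_model.predict(state, a)) for a in actions]
    # 3) The discriminator ranks candidates by relative quality (contrastive ranking);
    #    the highest-ranked (action, next_state) pair is committed and the loop repeats.
    return max(candidates, key=lambda c: discriminator.score(state, c[0], c[1]))
```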
[ "Large Language Models", "multi-step reasoning", "planning with world model", "structured reasoning", "generator-discriminator architecture" ]
Reject
https://openreview.net/pdf?id=BaMkS6E2Du
https://openreview.net/forum?id=BaMkS6E2Du
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxSpqZwkLs", "yyElmBZjZ8", "wzY0X98gYK", "w5B142S93C", "uIsjOk8Ywh", "sWtIscc3DO", "rEJ0fFehWD", "pEa1T9cbBH", "nauYenM20r", "jRyv1uv8RA", "im8Bm8kKVB", "iREdulvx25", "gnfHLLyB7u", "gkU85cZuwf", "b4PbS4PFfQ", "ac2jwVHIw6", "aM6tXSLgfk", "XYODjRUfY4", "X0z76xHRQc", "WRhpd10oau", "QHovs63Ihu", "PDFLwhOtSw", "Mh79nWGvXi", "LCBS8jS7Ip", "JMdK3uSwwf", "J2h7Me5kQk", "Hcpl43pp63", "GNwuSMMfqI", "Fc0eikG0sg", "C7m8HvuJOh", "BcVgN29hTU", "7LLuIdU0Pg", "6OgJfqJ079" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732693688179, 1732698406258, 1734578632322, 1732406970088, 1732319551200, 1732561827720, 1732487348759, 1729859424680, 1732641699638, 1730651235305, 1732273201795, 1732694519712, 1732311701382, 1733012506010, 1732271187480, 1732552131528, 1737524207701, 1733012291199, 1732511436141, 1732690852504, 1732506294397, 1730685762682, 1732519144914, 1732434325742, 1733013393794, 1732320270758, 1732511743116, 1732272772882, 1732270373876, 1730763359565, 1732738140624, 1732311659876, 1732511641241 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_p8KY" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_F56z" ], [ "ICLR.cc/2025/Conference/Submission12676/Area_Chair_93np" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_F56z" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_zS5k" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_38qG" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_zS5k" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_38qG" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_F56z" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_zS5k" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Reviewer_p8KY" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ], [ "ICLR.cc/2025/Conference/Submission12676/Authors" ] ], 
"structured_content_str": [ "{\"comment\": \"Thank you to the authors for the detailed rebuttal. Some of my concerns have been addressed, so I am raising my score to 6.\"}", "{\"comment\": \"Thank you for your efforts in closing the gap between your reported numbers and the original works. I am glad that the rebuttal discussion has helped strengthen the considered baselines.\\n\\n> Specifically, we remove samples from the training set that belong to the test split of MATH.\\n\\nI would suggest the authors report the remaining dataset size. Also, I could not find the data sizes the authors generate (from GPT-4 and DeepSeek) for training their own method, which I suggest they add.\\n\\nAlso, I suggest the authors mention how many roll-outs/samples (i.e., RM@K) they take from both SWAP and PRM methods in Table 1.\\n\\n> From our understanding, [4] uses the vague term \\\"maj@1\\\" to describe the evaluation process for GSM8k and MATH\\n\\nmaj@1 has a widely understood meaning in the community.\\n\\nOverall, I find the experimental results that, a LoRA-tuned SWAP is outperforming the best PRM model by 13.5% on the MATH dataset, to be very strong. Such strong results require rigorous and strong evaluation, and given the rebuttal discussion, it seems that the baselines lack very basic tuning (such as temperature and maj@k), which casts double on the whole evaluation pipeline. I tried to take a look at the code, but it is very unreadable and lacks a README file.\\n\\nTherefore, I maintain my score.\"}", "{\"metareview\": \"The paper proposes a multi-step reasoning framework with methodological limitations. The Structure-aware Planning with Accurate World Model (SWAP) closely resembles existing approaches without presenting novel contributions. Reviewers highlighted persistent issues with presentation, notation clarity, and experimental validation. Performance improvements were marginal and inconsistent, with concerns about evaluation methodology and significant performance gaps. The implementation details remained vague, and authors incompletely addressed fundamental questions about the method's efficiency and generalizability. It is hard to say that the proposed approach substantially advances multi-step reasoning for large language models.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised critical concerns about the paper's methodology, including unclear notation, lack of novelty, and insufficient differentiation from existing approaches. Key issues involved ambiguous calculations, performance gaps, and vague implementation details. The authors responded by refining notations, adding pseudo-code, and conducting additional experiments with different models. They emphasized the framework's graph-based representation as a unique contribution. However, reviewers remained skeptical about the method's efficiency and fundamental innovation.\"}", "{\"comment\": \"Thank you for the newly added comparisons.\\n\\n> We also add experiments with PRMs (PRM800k [1] and Math-Shepherd [2]) with the same baseline model in the revised version (Table 1). It shows that the performance of our method is better than those of PRMs.\\n\\nWhat is the exact details of training with the PRM800K dataset? from my understanding it contains significant fraction of the MATH test set, except for only 500 samples (MATH500), how is the evaluation comparison done on the entire MATH test set?\\n\\n> We checked the evaluation details of the LLama3 model [3,4,5]. 
The reported results actually come from\\u00a0majority voting\\u00a0rather than single-time inference. Specifically, for the same question, they generate 100 solutions for GSM8k, 256 solutions for MATH, and 200 solutions for HumanEval.\\n\\nThe link in [3] (provided by the authors) explicitly mentions that for MATH and GSM8K the results are maj@1, can you point out to the reference of maj@100 and maj@256 you are referring to?\\n\\nAlso, what does the CoT-SC in Table 1 refer to? From my understanding, it should be the self-consistency/majority sampling baseline (with how many samples?).\\n\\n> we use the default evaluation setting (single-time inference, low temperature=0.2, fixed seed) in our experiments for all methods to ensure a fair comparison.\\n\\nThe default evaluation setting for maj@1 is typically greedy sampling (i.e., temperature=0), which may be one reason behind the author\\u2019s low numbers.\\n\\n> We also found that since the\\u00a0test set size\\u00a0of HumanEval is relatively small (164), the large performance gap is actually responding to a few test samples.\\n\\nThis does not justify the performance gap, or the 12% lower performance compared to the report of the original llama.\"}", "{\"title\": \"Rebuttal by Authors to Reviewer 38qG\", \"comment\": \"Thank you for your insightful comments, which are incredibly helpful in enhancing our work!\\n\\n## 1. Notation\\nThanks for your suggestion! We refine the notations throughout the framework, and add the detailed pseudo-code (Algorithm 1 and 2). Please refer to the revised version (Section 4.1, 4.2, and 4.3) for more details.\\n\\n## 2. Comparison to process reward model\\nWe add the following content in the related work section.\\n\\nAlthough recent research has increasingly explored automatic process annotations using tree search, training an effective PRM remains challenging, as from a mathematical perspective, it assigns a **numerical value** within $[0, 1]$ to each state **independently**. \\nTo overcome this problem, we propose a novel strategy for **automatic ranking annotation**, i.e., given the current context and a set of candidate options, selecting the best option based on relative quality.\", \"our_ranking_strategy_offers_significant_advantages_over_traditional_prms\": \"1) it emphasizes relative quality, making it more robust to noise; 2) it simplifies optimization and enhances generalization.\\nNotably, our high-quality automatic ranking annotation method is non-trivial as it systemically incorporates three key factors: 1) **structural information**; 2) **correctness**; and 3) **semantical equivalence**.\\n\\nWe also add experiments with PRMs (PRM800k [1] and Math-Shepherd [2]) with the same baseline model in the revised version (Table 1). It shows that the performance of our method is better than those of PRMs.\\n\\n## 3. Comparison to other diversity-seeking methods\\nCompared to related work [3,4], our approach offers the following advantages:\\n\\n1) **Diverse beam search** [3] performs beam search in groups using a diversity-augmented objective. 
However, it has several limitations: 1) Beam search at the token level is computationally intractable; 2) Searching the most suitable strategies and hyperparameters for similarity calculation can be time-consuming; 3) For reasoning tasks involving special tokens (e.g., math reasoning or first-order logic reasoning), embedding-based similarity calculations may be unreliable.\\n\\n In contrast, **SWAP** (implemented as SFT with LoRA) employs an end-to-end learning paradigm, leveraging the world knowledge in pre-trained LMs. Our generation process is sampling-based, making it more efficient than token-level beam search.\\n\\n2) **GFlowNets fine-tuning** [4] is a diversity-seeking reinforcement learning algorithm that uses amortized Bayesian inference. Although it demonstrates better performance compared to SFT with limited training data, it is unclear whether it can scale to large-scale datasets and complex reasoning tasks. As a reinforcement learning method, GFlowNets fine-tuning can be significantly more challenging and costly to train when dealing with large-scale datasets. \\n\\n In contrast, **SWAP** is more scalable and better suited for handling large-scale datasets and complex reasoning tasks efficiently.\\n\\nWe have summarized the above advantages and added to the revised version (Section 4.2). \\n\\n## 4. Confidence interval\\nWe have added confidence interval for the results in Table 1. Please refer to the revised version.\\n\\n## 5. Calculation of semantic similarity\\nWe have added the calculation process in the revised version (Section 4.2).\\n\\nThe effect of action order in multi-step reasoning is an interesting question. Currently, our approach focuses on increasing diversity step-by-step. In our experiments, we observed that the order of actions typically remains stable, since the actions are often interdependent.\\n\\nAs future work, this issue could be addressed through post-processing filtering based on similarity calculations.\\nSpecifically, if a generated trajectory is too similar to any existing trajectories in the stored pool, it will be discarded.\\nTo compare trajectories, the proposed graph representation provides a significant advantage, as graph similarity algorithms can be effectively utilized for this task. \\nAdditionally, we can explore data augmentation by reordering the actions based on the graph's dependency structure, further enhancing diversity and robustness.\"}", "{\"title\": \"Rebuttal by Authors to Reviewer F56z\", \"comment\": \"To avoid confusing the audience, we have added clarifications in Table 1 to better convey the implementation details. Specifically, we have updated method names as follows: \\\"Few-shot CoT\\\" has been changed to \\\"Few-shot CoT (4-shot)\\\", \\\"CoT-SC\\\" has been updated to \\\"SC@maj8\\\", and \\\"PRM (PRM800K)\\\" is now labeled as \\\"PRM (PRM800K*)\\\". Additionally, we have clarified in the table caption with the note: *\\\"We use the filtered PRM800K dataset [1] to evaluate performance on the full MATH test set.\\\"* Please refer to the revised version.\\n\\nAnother potential issue is the comparison between low-temperature sampling (e.g., 0.2) and greedy decoding. Given the time constraints, our experiments were primarily conducted on HumanEval. We observed that while greedy decoding produces more stable results, the performance differences between the two methods are minimal. 
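*(Editor's illustrative aside, not the authors' evaluation harness.)* The two decoding settings compared above correspond to configurations roughly like the following; the model identifier and the generation API follow standard Hugging Face conventions and are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed backbone, for illustration
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("Write a Python function that ...", return_tensors="pt")

# Greedy decoding: deterministic, the usual temperature -> 0 setting.
greedy_out = model.generate(**inputs, do_sample=False, max_new_tokens=512)

# Low-temperature sampling: the T=0.2 setting discussed above.
sampled_out = model.generate(**inputs, do_sample=True, temperature=0.2, max_new_tokens=512)
```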
In the next version of this work, we plan to conduct experiments across all benchmarks and include this analysis in the appendix.\\n\\nWe hope these modifications effectively address your concerns!\\n\\n[1] Sun, Zhiqing, et al. \\\"Easy-to-hard generalization: Scalable alignment beyond human supervision.\\\" arXiv preprint arXiv:2403.09472 (2024).\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thanks to the authors for their efforts to improve the paper. The addition of pseudocode and the revised Table 1 make the paper much clearer than the original draft. Therefore, I would like to raise my rating to borderline.\", \"i_still_have_two_remaining_questions\": [\"Can these trained scores be used as reward models, similar to Math-Shepherd?\", \"The efficiency comparison may not be entirely accurate when based solely on bounds, as there could be differences in coefficients. Could the authors provide an estimate of the real-time ratio for different methods? For example, this could be based on the average number of tokens generated per test case.\"]}", "{\"summary\": \"This paper proposes SWAP, which aims to leverage structural information to guide the reasoning process with a world model. The idea is to enable a policy model and a world model to generate diverse actions and states, leading to a higher possibility of getting the correct results. The authors use multiple benchmarks to demonstrate the effectiveness of their proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper demonstrates that SWAP outperforms various baseline methods on several benchmarks, with ablation studies confirming the effectiveness of the different components proposed by the authors.\\n\\n2. A key contribution of the paper is framing LLM reasoning as a graph, which allows the model to generate diverse actions and states. This approach helps avoid local minima and enhances verifiability in reasoning, effectively addressing the limitations of Chain-of-Thought (CoT) methods. This contribution is significant for advancing multi-step reasoning.\\n\\n3. The overall structure of the paper is well-organized; however, there are some weaknesses in presentation that should be addressed.\", \"weaknesses\": \"1. Many annotations are used before defining them makes the method hard to follow, for example, in line 235 $P_{ori}(z; T_{s_{t-1}})$, it's not clear what is \\\"learned distribution\\\", and the definition of $P_{sem}$ is vague. Annotations are not consistent as well, for example, in line 272 $M_{wm-G}$ is not consistent with $M_{wm}$ in line 175. $M_{\\\\pi_D}$ and $M_{wm-D}$ in equations 8, and 9 are not explained in the paper.\\n2. One of the contributions is process supervision, however, a comparison to other process supervision methods is missing.\\n3. The paper proposes to generate diverse actions and states to enhance performance, however, there is a lack of comparison and analysis with diversity-seeking methods using reinforcement learning designed to enhance diversity, such as [1, 2].\\n4. There lacks a confidence interval for the results, it's not clear how the performance is robust to initialization and randomness.\\n\\n[1] Vijayakumar, Ashwin K., et al. \\\"Diverse beam search: Decoding diverse solutions from neural sequence models.\\\" arXiv preprint arXiv:1610.02424 (2016).\\n\\n[2] Hu, Edward J., et al. 
\\\"Amortizing intractable inference in large language models.\\\" The Twelfth International Conference on Learning Representations.\", \"questions\": \"1. How do the semantic similarity is calculated? As mentioned, the method aims to improve the diversity of the reasoning process. However, multi-step reasoning problems can have different orders of actions, leading to semantically different but the same answers. As shown in Figure 1\\n2. As shown in Table 1, fine-tuning a Llama-3-8b shows inferior or comparable results to CoT, which is counterintuitive. Could you briefly explain the reason?\\n3. What is the training cost for SWAP, could the author provide a comparison between SFT and the proposed method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Acknowledgment of Reviewer Contributions\", \"comment\": \"We sincerely appreciate the valuable comments and insights provided by all the reviewers. We have carefully reviewed the feedback to ensure a thorough understanding of the concerns raised. In response, we have refined our presentation and conducted additional experiments to address these concerns comprehensively. We are pleased with the substantial improvements our paper has achieved as a result of this process.\\n\\nWe are particularly grateful to Reviewers F56z, zS5k, and 38qG for engaging with our rebuttal and providing additional discussion. We also look forward to further interactions with Reviewer p8KY and are eager to address any remaining concerns from all reviewers.\\n\\nWe recognize ICLR's high standards and have made every effort to refine our paper to meet the expectations of the community. We sincerely hope the reviewers could consider our revised submission and the progress made during this process.\\n\\nThank the reviewers again for their time, insights, and constructive feedback!\"}", "{\"summary\": \"The paper discusses a framework called Structure-aware Planning with Accurate World Model (SWAP) that aims to enhance the reasoning capabilities of large language models (LLMs). Key designs include:\\n\\n**Framework Overview**: SWAP integrates structural information into the reasoning process, providing a soft verification mechanism that guides LLMs through multi-step reasoning. The authors suggest this approach may improve upon traditional Chain-of-Thought (CoT) reasoning methods, which can lack effective verification mechanisms.\\n\\n**Generator-Discriminator Architecture**: The framework employs a Generator-Discriminator architecture to enhance world modeling. The generator is responsible for predicting future states, while the discriminator evaluates these predictions to improve the accuracy of the reasoning process.\\n\\n**Diversity-Based Modeling**: The paper introduces a method to encourage diversity in action generation and state prediction, which is intended to allow the model to explore a broader range of solutions and avoid premature convergence on suboptimal paths.\\n\\n**Contrastive Ranking for Discrimination Accuracy**: The authors implement a contrastive ranking approach that focuses on relative comparisons to improve the discriminator's ability to identify valid and invalid reasoning steps.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The introduction of normalization metrics (Eq. 
2 & 4) for Diversity-Based Modeling is novel and interesting.\", \"The method is shown effective on diverse tasks (math, coding and logical reasoning) with Llama3-8b as backbone.\"], \"weaknesses\": [\"The paper writing lacks clarity. Especially about the structured search algorithm as mentioned in 4.1. The generator and discriminator framework only shows how to select the action when more than one choices are provided. However, when more than one state and action are selected, what is the next state to process among multiple parallel choices? Figure 2 illustrates the process as a linear step by step process and fails to present the tree search as shown in Figure 4. It would be better if Figure 2 is replaced by detailed pseudo code.\", \"The framework of this algorithm is very similar to related work (TOT, RAP and [1]). RAP also uses world model based tree search and the main difference is MCTS style search is used to rank choices while this work uses a discriminator to reject choices. [1] proposes a similar pipeline that also uses discriminator-aided tree search. Wonder if the authors could point out the main difference with these works and provide a elaborated compare of the main pipeline.\", \"Figure 3 plots the use of LORA tuning to improve generation diversity. However, only posthoc adjustment methods are proposed in the section. Wonder if the authors could further explain this figure.\", \"Table 1 lists a lot of redundant backbone models that may not be directly comparable with SWAP + Llama3-8b. It would be better if the authors compare algorithms while fixing backbones and add more comparison with different backbone models.\", \"[1]Chen, Ziru, et al. \\\"When is tree search useful for llm planning? it depends on the discriminator.\\\" arXiv preprint arXiv:2402.10890 (2024).\"], \"questions\": \"1. What is the efficiency of the SWAP tree search algorithm compared to TOT and RAP? SWAP uses discriminator to prune many options and may have an advantage in efficiency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continuation of Rebuttal by Authors to Reviewer F56z\", \"comment\": \"## 4. Our approach on specialized models\\n\\nThis is an interesting question! Given the time constraints, we primarily conducted experiments using the DeepSeek-Math-7B-Instruct model and observed that SWAP also performs effectively with this specialized model. Expanding the evaluation to include additional specialized models is planned as part of future work.\\n\\n| | GSM8K | MATH |\\n|----------|----------|----------|\\n| **Deepseek-math-7b-Instruct** |\\n| CoT | 82.0 | 45.4 |\\n| SWAP (w/o discriminator) | 82.4 | 45.0 |\\n| SWAP | 86.1 | 47.5 |\\n\\n## Reference:\\n[1] Lightman, Hunter, et al. \\\"Let's verify step by step.\\\" arXiv preprint arXiv:2305.20050 (2023).\\n\\n[2] Wang, Peiyi, et al. \\\"Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning.\\\" arXiv preprint arXiv:2312.08935 (2023).\\n\\n[3] https://github.com/meta-llama/llama3/blob/main/eval_details.md\\n\\n[4] Dubey, Abhimanyu, et al. \\\"The llama 3 herd of models.\\\" arXiv preprint arXiv:2407.21783 (2024).\\n\\n[5] Touvron, Hugo, et al. 
\\\"Llama: Open and efficient foundation language models.\\\" arXiv preprint arXiv:2302.13971 (2023).\"}", "{\"title\": \"Thanks for reply\", \"comment\": \"Thank you for your recognition and support!\"}", "{\"title\": \"Continuation of Rebuttal by Authors to Reviewer zS5k\", \"comment\": \"## Reference:\\n[1] Chen, Ziru, et al. \\\"When is tree search useful for llm planning? it depends on the discriminator.\\\" arXiv preprint arXiv:2402.10890 (2024).\\n\\n[2] Huang, Jie, et al. \\\"Large language models cannot self-correct reasoning yet.\\\" arXiv preprint arXiv:2310.01798 (2023).\\n\\n[3] Jiang, Dongwei, et al. \\\"Self-[in] correct: Llms struggle with refining self-generated responses.\\\" arXiv preprint arXiv:2404.04298 (2024).\\n\\n[4] Yang, Yuan, et al. \\\"Can LLMs Reason in the Wild with Programs?.\\\" arXiv preprint arXiv:2406.13764 (2024).\\n\\n[5] Lightman, Hunter, et al. \\\"Let's verify step by step.\\\" arXiv preprint arXiv:2305.20050 (2023).\\n\\n[6] Wang, Peiyi, et al. \\\"Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning.\\\" arXiv preprint arXiv:2312.08935 (2023).\"}", "{\"title\": \"Continuation of Rebuttal by Authors to Reviewer F56z\", \"comment\": \"## Comparison between PRM and SWAP\\n\\nWe would like to emphasize the key differences (advantages) of SWAP over PRM:\\n\\n**1. Planning vs. Verification:** PRM is a verification method, meaning it only takes effect after the trajectory is complete. In contrast, SWAP is a planning method, where the discriminator actively guides the selection of actions mid-process, improving sampling efficiency.\\n\\n**2. Enhanced Generator:**\\nOur generator is significantly stronger, as we fine-tune it on RL trajectories. Unlike CoT, we model the reasoning process as a MDP with interleaved actions and states, resulting in a more fine-grained training and inference process.\\n\\nThe evidence is that the average token usage for CoT is 175.6 on GSM8k, whereas SWAP (without the discriminator, i.e., no planning) uses 306.9 tokens.\\nAs noted by OpenAI, there exists an inference-time scaling law: models achieve higher accuracy with more inference tokens, which explains SWAP's superior performance over CoT.\\n\\nAdditionally, we incorporate diversity-based modeling to enhance sampling diversity and further improve efficiency.\\n\\nIn contrast, the generator of PRM uses conventional CoT with IID sampling.\\n\\n\\n**3. Structural Information:** \\nA key innovation of our framework is treating multi-step reasoning as entailment graph construction. \\nThese structures explicitly represent statement dependencies, providing additional guidance for the model.\\nThey also enable structural verification (e.g., rule-based validity checks) to select better options.\\nAt a high level, this approach enforces rigorous reasoning while maintaining expressiveness.\\n\\n**4. Improved Discriminator:**\\nOur framework enhances discrimination through contrastive ranking and meta-knowledge. 
\\nTraining an effective PRM is challenging, as it assigns independent numerical values within $[0, 1]$ to each state.\\nIn contrast, our discriminator evaluates relative quality, making it more robust to noise, simplifying optimization, and improving generalization.\\nFurthermore, we enhance discrimination accuracy using meta-knowledge extracted from training samples, which identifies common pitfalls for specific problem classes.\\n\\nGiven these significant differences (advantages), it is unsurprising that SWAP achieves substantial improvements over PRM. In fact, these results align with our expectations.\\n\\nWe have restructured the implementation code for PRM and SWAP. We provide them for further verification.\", \"anonymous_link\": \"https://drive.google.com/drive/folders/1ZvevZFc-LKYfCESvSdo3BUbvc94FoCPS?usp=sharing\\n\\n## Reference:\\n\\n[1] https://github.com/meta-llama/llama3/blob/main/eval_details.md\\n\\n[2] https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md\\n\\n[3] https://github.com/openai/human-eval/tree/master\"}", "{\"title\": \"Continuation of Rebuttal by Authors to Reviewer p8KY\", \"comment\": \"## 4. Connection to RL\\nFirst, the main difference between our discriminator and process reward model such as Math-Shepherd is that we use **ranking**. The rationale here is that training an effective PRM remains challenging, as from a mathematical perspective, it assigns a **numerical value** within $[0, 1]$ to each state **independently**. To contrast, our ranking strategy offers significant advantages: 1) it emphasizes relative quality, making it more robust to noise; 2) it simplifies optimization and enhances generalization.\\nNotably, our high-quality **automatic ranking annotation** method is non-trivial as it systemically incorporates three key factors: 1) **structural information**; 2) **correctness**; and 3) **semantical equivalence**.\\n\\nHere, the policy discriminator in our framework, which selects actions based on the predicted future state, is conceptually similar to the Q-function in RL, i.e., the expected cumulative reward.\\n\\nThe world model is defined as the state transition distribution. In RL, feedback is typically obtained directly from the environment. However, in some scenarios, collecting real-world feedback can be expensive or infeasible at scale. Consequently, recent research has focused on simulating the environment using a world model. In our framework, the world model is crucial for determining state (graph) transitions resulting from specific actions, guiding the reasoning process.\\n\\n## Reference:\\n[1] Huang, Jie, et al. \\\"Large language models cannot self-correct reasoning yet.\\\" arXiv preprint arXiv:2310.01798 (2023). \\n\\n[2] Jiang, Dongwei, et al. \\\"Self-[in] correct: Llms struggle with refining self-generated responses.\\\" arXiv preprint arXiv:2404.04298 (2024).\"}", "{\"title\": \"Look forward to discussions\", \"comment\": \"Thank you for your valuable feedback on our submission!\\nWe have carefully reviewed your comments and made substantial efforts to refine the paper and add additional experiments.\\nWe look forward to engaging in further discussions and addressing any potential remaining concerns in the days ahead.\\nThanks in advance!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Rebuttal by Authors to Reviewer F56z\", \"comment\": \"Thank you for your suggestions and concerns. 
We address them in detail below:\\n\\n## Implementation details\\n\\nThe filtered PRM800k training set contains 521,007 samples. For a fair comparison with SWAP, we randomly selected 100,000 annotations, matching the scale of SWAP.\", \"for_swap\": \"\", \"the_number_of_trajectories_for_the_generator_are_as_follows\": \"GSM8k (28.3k), MATH (49.3k), ReClor (14.5k), FOLIO (7.3k), HumanEval (3.1k), and MBPP (1.3k);\", \"the_number_of_semantically_equivalent_pairs_we_obtained_are_as_follows\": \"GSM8k (8.1k), MATH (24.2k), ReClor (7.1k), FOLIO (3.8k), HumanEval (1.6k) and MBPP (0.7k);\", \"the_number_of_process_annotations_for_the_discriminator_are_as_follows\": \"GSM8k (48.0k), MATH (98.2k), ReClor (28.7k), FOLIO (14.1k), HumanEval (6.0k), and MBPP (2.5k).\\n\\nThe number of candidate solutions for PRMs is set to 8.\\nFor SWAP, the number of rollouts (breadth limit) is set as 8.\\n\\nWe have incorporated these details into the paper. Please refer to the revised version in Section 5.1 and Appendix D.\\n\\n---\\nGiven that our setting differs from the conventional PRM setup, we provide the following justifications:\\n\\n- **Training set filtering**: The original PRM800k dataset uses 90% of the MATH test split. If we do not filter their dataset, we need to create extra training data for SWAP with the same 90% test split to ensure a fair comparison.\\n\\n- **Training set sampling**: The process annotation density of the filtered PRM800k (521k) is significantly higher than that of SWAP (98k). Without further sampling, a fair comparison cannot be achieved.\\n\\n- **Practical consideration**: This experiment is an addition. Training with the original dataset is infeasible due to time constraints.\\n\\n## Reproducing official Llama3 performance\\n\\nWe wish to draw the reviewer's attention to the following points:\\n\\n1) The official Llama release does not include evaluation code. [1] provides only a brief description (one or two sentences) of the evaluation process for each benchmark.\\n\\n2) The official Llama results involve hyperparameter optimization tailored to specific benchmarks. For example, they report results [2] for Llama-3-8B-Instruct on MMLU (5-shot), GPQA (0-shot), HumanEval (0-shot), GSM-8K (8-shot, CoT), and MATH (4-shot, CoT) with different settings.\\n\\nConsidering 1 and 2, reproducing the official results is a non-trivial task and falls beyond the scope of our research. Our objective is to approximate the official results as closely as possible within our available resources.\\n\\n---\\n**Current results:**\", \"gsm8k\": \"ours 73.7 (4-shot, CoT, greedy) v.s. official 79.6 (8-shot, CoT, maj@1)\", \"math\": \"ours 28.2 (0-shot, CoT, greedy) v.s. official 30 (4-shot, CoT, maj@1)\\n\\n## Baseline fine-tuning\\n\\n**1. Temperature (T):**\\nTemperature may influence performance to some extent but does not affect the overall conclusion.\\nWe have already presented results on HumanEval, where 0-shot performance improved from 50.2 (T=0.2) to 52.4 (T=0).\\nAdditionally, we now provide further results for math reasoning:\\n\\n| Method | GSM8k | MATH500 |\\n|-------------|-------------|-------------|\\n|LLaMA3-8B-Instruct|\\n|Zero-shot CoT (T=0.2) | 70.0| 26.3 |\\n|Zero-shot CoT (T=0) | 72.6| 27.7|\\n| Four-shot CoT (T=0.2)| 72.4| 22.2|\\n| Four-shot CoT (T=0)| 73.7| 23.2 |\\n| SWAP (w/o discriminator) | 78.1|35.1 |\\n| SWAP | 82.7| 40.2 |\\n\\n\\n**2. 
In-context learning (ICL) prompt format:**\\nThe ICL prompt format can impact the performance on coding tasks.\\nWe show with HumanEval, variations in the ICL format can lead to notable changes in performance.\\n\\nHowever, we would like to emphasize the following points:\\n\\n- **Absence of an official ICL prompt for HumanEval:** [3] does not provide an official ICL prompt for HumanEval. The official Llama3 report [2] only includes 0-shot result.\\n- **Unique challenges in coding tasks:** Coding tasks differ from other benchmarks because the model often generates unnecessary code snippets, which cause parsing issues during evaluation. We invested significant effort in carefully adjusting the prompt for HumanEval to achieve higher performance.\\n- **Impact of test set size:** The HumanEval test set is relatively small (164 samples) compared to other benchmarks, which amplifies the impact of individual problems on accuracy. For example, a single problem corresponds to a 0.6% accuracy change in HumanEval, whereas this value is 0.07% in GSM8K and 0.02% in MATH.\\n- **Official ICL prompt available for other benchmarks:** For other benchmarks such as GSM8K and MATH, we follow publicly available official ICL prompts. Therefore, there are no similar issues for these benchmarks.\\n\\n**3. Setting selection standard:**\\nFor methods other than ICL, we either follow the settings provided in the corresponding paper or official implementation, or we make our best effort to achieve higher performance while ensuring a fair comparison. \\n\\nAgain, achieving the \\u2018best\\u2019 performance is non-trivial due to variations in both base models and benchmarks, which extend beyond the scope of our research.\"}", "{\"title\": \"Thanks for reply\", \"comment\": \"Thanks for your reply and recognition!\", \"as_for_the_remaining_questions\": \"### 1. Reward model\\nIn a Markov Decision Process (MDP), a reward model (or score function) is crucial for learning a policy.\\nAlthough recent research (Math-Shepherd) has increasingly explored automatic process annotations, training an effective PRM remains challenging, as from a mathematical perspective, it assigns a numerical value within $[0, 1]$ to each state independently. \\n\\nTo overcome this problem, we propose a novel strategy for automatic ranking annotation, i.e., given the current context and a set of candidate options, selecting the best option based on relative quality.\", \"our_ranking_strategy_offers_significant_advantages_over_traditional_prms\": \"1) it emphasizes relative quality, making it more robust to noise; 2) it simplifies optimization and enhances generalization.\\nNotably, our high-quality automatic ranking annotation method is non-trivial as it systemically incorporates three key factors: 1) structural information; 2) correctness; and 3) semantical equivalence.\\n\\nWe also add experiments with PRMs (PRM800k and Math-Shepherd) with the same baseline model (Table 1). It shows that the performance of our method is better than those of PRMs.\\n\\n### 2. Efficiency comparison\\nWe evaluated the average number of tokens generated using different methods on the GSM8K dataset with the Llama-3-8B-Instruct model. 
The results are summarized as follows:\\n\\n| Method | Avg token usage | Accuracy |\\n|----------|----------|----------|\\n| Zero-shot CoT | 175.6 | 70.0 |\\n| ToT | 2214.7 | 75.2 |\\n| RAP | 5241.4 | 76.0 |\\n| SWAP (w/o discriminator) | 306.9 | 78.1 |\\n| SWAP | 3612.0 | 82.7 |\\n\\nWe observed that while the theoretical time complexity of SWAP is comparable to ToT (BFS with pruning), it generates more tokens in practice due to the incorporation of a world model and the construction of an entailment graph. On the other hand, SWAP is significantly more efficient than RAP (MCTS), which involves extensive simulations to estimate the $Q$-value.\"}", "{\"title\": \"Rebuttal by Authors to Reviewer F56z\", \"comment\": \"We further investigated the performance gap between our experimental results and the official LLaMA report.\\n\\nUpon analyzing the model outputs, we observed that with the few-shot CoT method, the model sometimes repeats example code snippets provided in the prompt, leading to errors during evaluation.\\nTo address this issue, we conducted verification experiments to refine and identify the most suitable instructions for few-shot CoT. Similarly, we reviewed and optimized the instructions used for zero-shot CoT.\\n\\nConsidering the potential impact of temperature and instructions, we re-evaluated the zero-shot CoT, few-shot CoT, and SFT-CoT methods on the HumanEval and MBPP benchmarks for all base models.\\nThis evaluation utilized greedy decoding and optimized instructions. \\nThe updated results show significant improvements, with the 4-shot CoT achieving a score of 0.568 on HumanEval, which is much closer to the official LLaMA result of 0.622. \\nPlease refer to the revised version (Table 1) for the detailed results.\\n\\nWe hope these modifications effectively address your concerns!\"}", "{\"title\": \"Thanks for the response from authors\", \"comment\": \"The revision makes it more clear than the original version, and thanks to the answer from the authors, I would like to increase the rating.\"}", "{\"summary\": \"This paper proposes a framework for improving reasoning in LLMs. The framework consists of a graph representation of the multi-step reasoning of the problem, a generator to generate possible next steps, and a discriminator to rank the possible solutions generated by the generator. The paper proposes several improvements to the framework such as adding diversity to step generation, improving the discriminator via process-supervision, and adding meta-knowledge of the problem into the LLM. The paper then shows the effectiveness of the framework by showing improved results on various mathematical reasoning and coding benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written and conveys the overall framework well.\", \"Representing step-by-step reasoning as a graph is an interesting idea.\", \"The ablation study shows the performance gain of each design choice.\"], \"weaknesses\": [\"The related work section is very thin. The paper should more thoroughly compare its framework with existing reward modeling and process supervision literature. For instance, how does this method (and its performance) compare with [1]? 
Especially since the authors take inspiration from such works, they should also include some process-supervision methods in their Llama3-8B baselines.\", \"The reported benchmark numbers for Llama3-8B are significantly lower than what the official llama has reported (e.g., according to Llama's report, HumanEval zero-shot should be 62.2, and MATH 4-shot-CoT should be 30.0, and GSM8K 8-shot-CoT is 79.6). This casts doubt on the validity of the evaluation done by the authors, particularly for HumanEval, where the original Llama's reported performance is higher than what SWAP achieves.\", \"There is very little discussion on the advantage of having a graph (with node connectivities) rather than sequentially listing all the reasoning steps. Can the authors provide any ablations on this? How well is the LLM able to parse the dependencies through json node connectivity? Can the authors provide an example of how the LLM reasoning for a few problems, and what the final graph produced by the LLM would look like?\", \"[1] Wang et al, Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations, 2023\"], \"questions\": [\"Current results consider Llama3-8B generalist model. Is the proposed method able to improve math-specialized (e.g., Qwen Math) or code-specialized (e.g., Deepseek Coder) models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your suggestion\", \"comment\": \"Thank you for your suggestion!\\n\\nWe have added this section in the revised version (Appendix F) to further highlight the advantages of our method compared to related work.\"}", "{\"title\": \"Rebuttal by Authors to Reviewer F56z\", \"comment\": \"Thank you for the follow-up questions.\\n\\n## 1. Training on PRM800k\\nTo maintain consistency with the original experiments (which were conducted on the complete test set of MATH), we also evaluate PRM (trained on PRM800k) on the full test set.\\nWe observed that the original PRM800k data [1] includes 4,500 test samples from the MATH dataset in its training data. \\nTo address this issue, we follow the approach in [2] and use their code [3] to fine-tune Llama3-8b-instruct on a filtered PRM800k dataset as a PRM. \\nSpecifically, we remove samples from the training set that belong to the test split of MATH.\\n\\n## 2. Evaluation details of official Llama3\\nFrom our understanding, [4] uses the vague term \\\"maj@1\\\" to describe the evaluation process for GSM8k and MATH. \\nIf only one-time inference is conducted, the term \\\"maj\\\" would be unnecessary (as seen with other benchmarks in [4]). \\nFurthermore, we observed that Llama3 largely follows the evaluation settings of Llama1 for many benchmarks (as mentioned in [4]). \\nTo verify, we referred to the Llama1 paper [5] and found that it employs majority voting with 256 samples for MATH and 100 samples for GSM8k (Table 7 in [5]). \\nBased on our experimental results, we believe Llama3 uses the same majority voting settings.\\nAlso, we only use 4-shot CoT for all benchmarks (including GSM8k), in contrast to the 8-shot CoT utilized in the official report.\\n\\nThe CoT-SC in Table 1 refers to Chain-of-Thought reasoning with majority voting based on 8 samples.\\n\\n## 3. Effect of temperature\\nWe set the temperature to 0.2 as planning methods require diversity. We want to maintain this setting for a fair comparison. 
\\n\\nAdditionally, we conducted experiments on HumanEval using different temperatures, with the results as follows:\\n\\n| Llama3-8b-Instruct (0-shot) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | avg. |\\n|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| temperature = 0 | 52.4|53.0|51.8|51.8|52.4|51.8|51.8|52.4|53.0|53.0|52.4|\\n| temperature = 0.2 | 48.2 | 51.2|51.8|49.4|47.6|52.4|49.4|51.2|49.4|51.2|50.2|\\n\\nWe observed that a lower temperature (0) leads to more stable results. However, the average accuracy remains at 52.4, which is close to 50.2 (reported in the paper). We believe that demonstrating the superiority of the proposed method under the same fair comparison setting is sufficient.\\n\\n## 4. Performance gap on HumanEval \\nWe provide the generated results on HumanEval from Llama3-8b-instruct (0-shot) for further verification.\", \"anonymous_cloud_storage\": \"\", \"https\": \"//drive.google.com/drive/folders/1nSg15osa-dwBTYPuepyYT1NiyCXOSgie?usp=sharing\\n\\n\\n## Reference\\n \\n[1] https://github.com/openai/prm800k/tree/main\\n\\n[2] Sun, Zhiqing, et al. \\\"Easy-to-hard generalization: Scalable alignment beyond human supervision.\\\" arXiv preprint arXiv:2403.09472 (2024). \\n\\n[3] https://github.com/Edward-Sun/easy-to-hard\\n\\n[4] https://github.com/meta-llama/llama3/blob/main/eval_details.md\\n\\n[5] Touvron, Hugo, et al. \\\"Llama: Open and efficient foundation language models.\\\" arXiv preprint arXiv:2302.13971 (2023).\"}", "{\"title\": \"Follow-Up on Revisions for Reviewer zS5k\", \"comment\": \"Thank you for your valuable feedback. We have addressed your suggestions in the revised version.\\n\\nWe hope these changes address your concerns and demonstrate the improvements. Please let us know if you have any further comments or suggestions.\"}", "{\"title\": \"Continuation of Rebuttal by Authors to Reviewer 38qG\", \"comment\": \"## 6. Performance of SFT on CoTs\\nThe performance of SFT on CoTs slightly decreases for certain datasets (MATH, HumanEval, MBPP). For these datasets, we fine-tune the model using the provided reasoning processes (or code) written by humans. Language models typically benefit from training data with more steps (tokens) that detail the thinking process. However, we observed that human-written answers are often more concise, omitting some intermediate steps and details.\\n\\nThis results in a reduction in the number of reasoning tokens (during inference) after fine-tuning, which negatively impacts the model's performance. As noted by OpenAI, there exists an inference-time scaling law: models achieve higher accuracy with more inference tokens. \\n\\nAdditionally, it is important to note that these pre-trained models have likely already been fine-tuned on these datasets. Re-fine-tuning on the same datasets yields marginal benefits and can even degrade performance.\\n\\n\\n## 7. Training cost\\nAs discussed in Section 4.2 and illustrated in Figure 2 (the revised version), our approach is implemented using SFT with LoRAs, with the additional cost arising from training a semantic equivalent LoRA. Specifically, starting from the original steps, we generate semantically equivalent alternatives by prompting GPT-4o. The semantic equivalent LoRA is then fine-tuned using the collected training data. For each trajectory, we sample some steps and generate two alternatives for each step. 
Consequently, the additional training cost of our method is no more than twice that of SFT with LoRA on the trajectory, resulting in a total cost of less than three times the standard SFT with LoRA. \\n\\n\\n## Reference:\\n[1] Lightman, Hunter, et al. \\\"Let's verify step by step.\\\" arXiv preprint arXiv:2305.20050 (2023).\\n\\n[2] Wang, Peiyi, et al. \\\"Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning.\\\" arXiv preprint arXiv:2312.08935 (2023).\\n\\n[3] Vijayakumar, Ashwin K., et al. \\\"Diverse beam search: Decoding diverse solutions from neural sequence models.\\\" arXiv preprint arXiv:1610.02424 (2016).\\n\\n[4] Hu, Edward J., et al. \\\"Amortizing intractable inference in large language models.\\\" The Twelfth International Conference on Learning Representations.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thanks to the the authors for the clarifications. The efficiency/performance trade-off of SWAP appears favorable. It would be great to include this analysis in the new version.\"}", "{\"title\": \"Rebuttal by Authors to Reviewer F56z\", \"comment\": \"Thank you for your insightful comments; they are extremely valuable for enhancing our work!\\n\\n## 1. Related work\\nWe add the following context in the revised version (related work section) to compare our framework with existing reward modeling and process supervision literature.\", \"there_are_two_primary_types_of_reward_models\": \"Outcome Reward Model (ORM) and Process Reward Model (PRM).\\nThe ORM evaluates the fully generated solution by assigning a single scalar confidence score. \\nIts training relies on outcome supervision by comparing generated answers with the ground truth.\\nIn contrast, the PRM provides stepwise rewards throughout the reasoning process, assigning a scalar confidence score to each intermediate steps.\\nEmpirical evidence shows that, compared with outcome supervision, process supervision ensures the correctness of each step, providing more benefits to multi-step reasoning. \\nHowever, the training of PRM requires process supervision, which is hard to obtain, e.g., collecting process annotation from humans is inherently not scalable. \\nAlthough recent research has increasingly explored automatic process annotations using tree search, training an effective PRM remains challenging, as from a mathematical perspective, it assigns a **numerical value** within $[0, 1]$ to each state **independently**. \\nTo overcome this problem, we propose a novel strategy for **automatic ranking annotation**, i.e., given the current context and a set of candidate options, selecting the best option based on relative quality.\", \"our_ranking_strategy_offers_significant_advantages_over_traditional_prms\": \"1) it emphasizes relative quality, making it more robust to noise; 2) it simplifies optimization and enhances generalization.\\nNotably, our high-quality automatic ranking annotation method is non-trivial as it systemically incorporates three key factors: 1) **structural information**; 2) **correctness**; and 3) **semantical equivalence**.\\n\\nWe also add experiments with PRMs (PRM800k [1] and Math-Shepherd [2]) with the same baseline model in the revised version (Table 1). It shows that the performance of our method is better than those of PRMs.\\n\\n## 2. Performance gap\\nWe checked the evaluation details of the LLama3 model [3,4,5]. The reported results actually come from **majority voting** rather than single-time inference. 
Specifically, for the same question, they generate 100 solutions for GSM8k, 256 solutions for MATH, and 200 solutions for HumanEval. In addition, it is possible that they perform some **hyper-parameter optimization** (e.g., seed, temperature, topK, topP). We also found that since the **test set size** of HumanEval is relatively small (164), the large performance gap is actually responding to a few test samples. \\n\\nIt would be too expensive to replicate the same results. Thus, for practical considerations, we use the default evaluation setting (single-time inference, low temperature=0.2, fixed seed) in our experiments for all methods to ensure a fair comparison. \\n\\n## 3. Advantages of Graph Representation\\nWe add the following content in the revised version (related work section). \\n\\nFurthermore, we notice that although some reasoning processes are inherently **non-linear**, existing methods mainly follow a linear problem-solving manner. \\nLanguage models are expected to implicitly infer the non-linear structure from the linear representation of the reasoning process, which proves challenging for complex reasoning tasks.\\nTo help the model, we integrate **structural information** into the reasoning process which explicitly represents the reasoning structure within the context. \\nThese structures provide the language model with additional **guidance** and **control**, enabling extra capabilities such as symbolic learning and verification. \\n\\nWe emphasize that our approach replaces the original state with a state-graph pair, consisting of a natural language description and its corresponding graph representation. The graph structure serves as an explicit representation of the non-linear dependency between steps. The effectiveness of integrating graph representations is already demonstrated through an ablation study (Table 2 'w/o structure info').\\n\\nFor parsing, we explored various formats and finalized using a JSON dictionary, where child indices serve as keys and lists of parent indices as values. During training data collection, we provided demonstrations in this format to GPT-4o and applied ad-hoc filtering to ensure high data quality. After fine-tuning, the base model demonstrated the ability to effectively generate and interpret this format. The good performance of our method (with ablation study) also demenstrate the parsing capability of LMs.\\n\\nWe already provide multiple example outputs in the paper (Appendix F).\"}", "{\"title\": \"Rebuttal by Authors to Reviewer p8KY\", \"comment\": \"Thank you for your thoughtful comments, all of which are very helpful for improving our work!\\n\\n## 1. Baseline with fine-tuning\\n\\nTo investigate the impact of fine-tuning, we consider the following methods for comparison:\\n1) **SFT on CoT** (shown in Table 1): for datasets that provide reasoning process such as GSM8k and MATH, we directly fine-tune on it; for datasets without reasoning process such as FOLIO, ReClor, we use GPT-4o to generate the reasoning process data and fine-tune on CoTs that lead to the correct final answer. For coding datasets, we fine-tune on the completed code.\\n2) **SWAP w/o discriminator** (shown in Table 1): Since we simulate a Markov decision process (a sequence of interleaved states and actions) with structural information, which are different from CoTs, we consider the fine-tuned version of our framework with only the generator. 
Since there is no discriminator, we do not perform any planning during inference.\\n\\nOur findings include that fine-tuning on trajectories with structural information brings benefits than CoTs, but more importantly, planning during inference further improves the performance. It is supported by the performance comparison between **SWAP** and **SWAP w/o discriminator** in Table 1.\\n\\n\\n## 2. Novelty \\nThe key innovation in our framework is viewing multi-step reasoning as the process of **entailment graph construction**, supported by a **structure-aware planning** approach tailored for this purpose.\", \"using_graph_representation_offers_several_advantages\": \"(1) It explicitly captures the **non-linear structure** of the reasoning process, providing LMs with enhanced guidance; (2) It allows for greater **control** over the reasoning process. For instance, graph representation facilitates structural verification during training data collection and enhances discrimination accuracy during inference. (3) It also enables exciting future work, such as exploring **the impact of action order** in multi-step reasoning. By capturing the dependency between steps, it becomes possible to identify interchangeable steps for data augmentation. Furthermore, for **long-context reasoning** tasks (e.g., OpenAI's o1 model), a graph structure can provide improved understanding and better control over the process.\\n\\nGiven the representation, we further notice that existing agentic frameworks mainly use **prompt engineering** which totally or partially rely on **self-evaluation**. These strategies bring limited benefits in complex reasoning tasks since self-evaluation without external feedback can be unreliable [1,2]. To overcome this challenge, we propose using a **generator-discriminator** structure with fine-tuning, and further identify the bottlenecks of **generation diversity** and **discrimination accuracy**. We address these bottlenecks with **architecture-level adaptation**. All these strategies distinguish our framework from related work and contribute to **substantial improvements** (Table 1).\\n\\n## 3. Semantic similarity probability calculation\\nWe add the calulation process in the revised version (Eq. 1). \\n\\nAs for normalization, there are mainly two cases when tokens have **negative values** in the adjusted probability $P_{\\\\pi{\\\\text{G}}}$: \\n1) The token\\u2019s original probability $P_{\\\\pi{G}}^{ori}$ is high, but its semantic similarity probability $P^{{sem}}_{\\\\pi{{G}}}$ is even higher. This typically occurs for tokens that **resemble previous generations**, making them less desirable for diversity. \\n2) The token\\u2019s original probability $P^{{ori}}_{\\\\pi{{G}}}$ is not high, indicating it likely represents a step that **deviates from the intended progression of reasoning**. \\n\\nIn both cases, setting negative values to zero allows us to discard these tokens and focus on maintaining a diverse and relevant generation.\\nOther alternatives, such as Softmax, can distort the probability of irrelevant tokens by redistributing values across all tokens, even those that should ideally have very low or zero probability.\", \"consider_an_example_with_an_intermediate_step_in_a_math_problem\": \"For the first token, the original probability distribution $P_{\\\\pi{G}}^{ori}$ is: {\\\"Let\\\": 0.6, \\\"Simplify\\\": 0.2, \\\"Consider\\\": 0.2, every other token: 0}. 
Given the previous generation \\\"Let $x$ be 4.\\\", the semantically similar probability distribution $P_{\\\\pi{G}}^{sem}$ for the first token becomes: {\\\"Let\\\": 0.5, \\\"Set\\\": 0.2, \\\"Consider\\\": 0.3, every other token: 0}. Let $\\\\gamma=1$, then $P_{\\\\pi{G}}^{ori} - \\\\gamma P_{\\\\pi{G}}^{sem}$ becomes: {\\\"Let\\\": 0.1, \\\"Simplify\\\": 0.2, \\\"Consider\\\": -0.1, \\\"Set\\\": -0.2, every other token: 0}.\\nUsing the proposed method yields {\\\"Let\\\": 0.33, \\\"Simplify\\\": 0.66, every other token: 0}, focusing on high-relevance tokens. \\nHowever, applying Softmax produces {\\\"Let\\\": 0.00011, \\\"Simplify\\\": 0.00012, \\\"Consider\\\": 0.00009, \\\"Set\\\": 0.00008, every other token: 0.0001}, which spreads probability mass thinly across all tokens, diluting the relevance of the original distribution and ultimately harming model performance.\"}", "{\"summary\": \"This paper proposes SWAP, a framework to prompt GPT-4 generate solution chain with reflection to solve problems, and fine-tune other models on it to improve performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The writing is easy to follow. They show the prompts used and examples for different tasks in detail. They do ablation studies to show the effectiveness.\", \"weaknesses\": \"1. The baseline methods only include inference-time techniques without additional fine-tuning. How do these results compare to methods that incorporate fine-tuning? Specifically, I'm referring to approaches where GPT-4 generates or rewrites content based on the training set, and a base model is subsequently fine-tuned on those outputs.\\n\\n2. What is the actual novelty of this work? Numerous studies already prompt GPT-4 with various roles for tasks like analysis, reflection, and planning.\\n\\n3. How is the semantic similarity probability calculated? Also, the current normalization method seems weird. Is there a reason for mapping negative values to zero? This design choice seems suboptimal to me.\", \"questions\": \"What is the underlying connection to reinforcement learning in this approach? The process of constructing supervision data seems quite similar to the steps taken for creating training data for reward models in Math-Shepherd. Additionally, how does the discriminator here relate to the concept of a reward model?\\n\\nCould you clarify what \\\"world model\\\" specifically refers to in this context? How is it defined, and why is it considered a world model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Look forward to address remaining concerns\", \"comment\": \"We have made modifications based on your suggestions and would like to know if you have any remaining concerns. We are eager to address them promptly!\"}", "{\"title\": \"Rebuttal by Authors to Reviewer zS5k\", \"comment\": \"Thank you for your insightful and constructive comments, which are greatly appreciated and extremely helpful in enhancing our work!\\n\\n## 1. Clarity\\nThanks for your suggestion! Figure 2 is mainly used to illustrate the input and output of different modules. We replace it with the detailed **pseudo-code** in the revised version (Algorithm 1 and 2). Additionally, we modify **Figure 1** to depict the step-by-step process of graph construction. To enhance clarity, we also refine the **notations** throughout the framework. 
Please refer to the revised version (Section 4.1) for the updated details.\\n\\n## 2. Difference from related work\\nThe key innovation in our framework is viewing multi-step reasoning as the process of **entailment graph construction**, supported by a **structure-aware planning** approach tailored for this purpose.\", \"using_graph_representation_offers_several_advantages\": \"(1) It explicitly captures the **non-linear structure** of the reasoning process, providing LMs with enhanced guidance; (2) It allows for greater **control** over the reasoning process. For instance, graph representation facilitates structural verification during training data collection and enhances discrimination accuracy during inference. (3) It also enables exciting future work, such as exploring **the impact of action order** in multi-step reasoning. By capturing the dependency between steps, it becomes possible to identify interchangeable steps for data augmentation. Furthermore, for **long-context reasoning** tasks (e.g., OpenAI's o1 model), a graph structure can provide improved understanding and better control over the process.\\n\\nGiven the representation, we further notice that existing methods (ToT, RAP) mainly uses **prompt engineering** which totally or partially rely on **self-evaluation**. These strategies bring limited benefits in complex reasoning tasks since self-evaluation without external feedback can be unreliable [2,3]. To overcome this challenge, we propose using a generator-discriminator structure with fine-tuning, and further identify the bottlenecks of generation diversity and discrimination accuracy. We address these bottlenecks with **architecture-level adaptation**. \\nThis generator-discriminator structure also contributes to obtaining an **accurate** world model which is not considered in related work (RAP).\\n\\nAs for [1], it mainly focuses on **code generation** (text-to-SQL Parsing, Python-program-based math reasoning), and relies on **external feedback** (i.e., program execution results). Unlike [1], we do not constrain the model on code generation which could hurt its expressiveness on complex reasoning tasks [4]. It also does not consider architecture-level adaptation for generation diversity and discrimination accuracy.\\n\\nAll these strategies distinguish our framework from related work and contribute to **substantial improvements** (see TOT, RAP in Table 1).\\n\\n## 3. Explanation of generation diversity calculation\\n\\nWe have included the detailed calculation process in the revised version (Section 4.2). Please refer to it for further details.\\n\\n## 4. Comparison to existing methods with different base models\\nThanks for your suggestion! We have delete the methods in Table 1 that uses different base models. We now test different methods with the **same** base model (Llama 3-8b-instruct). We also add **PRM** methods (PRM800k [5] and Math-Shepherd [6]). In addition, we change the base model into **Mistral-7b-instruct** and test all the methods. The results (Table 1) show that our method consistently outperform existing methods.\\n\\n## 5. Efficiency\\n\\nThe time complexity of **SWAP** is $O(bNT)$, where $b$ is the breadth limit, $N$ is the generation number limit, and $T$ is the step limit. In contrast, the time complexity of **RAP** (using MCTS) is $O(N_{sim}NT)$, where $N_{{sim}}$ is the total simulation number limit. Typically, a large number of simulations $N_{{sim}} \\\\gg b$ are required to reliably estimate $Q(s, a)$. 
\\nFor **ToT**, the time complexity depends on the implementation strategy: 1) Breadth-First Search (BFS): without pruning: $O(N^T)$; with pruning: $O(bNT)$. 2) Depth-First Search (DFS): The complexity depends on the state evaluator. The traversal continues until the state evaluator deems the final state satisfactory, making the complexity tied to the evaluation criteria.\\n\\nIn conclusion, SWAP is more efficient than RAP and ToT (BFS without pruning version). It is similar to ToT (BFS with pruning version).\"}", "{\"title\": \"Thanks for reply\", \"comment\": \"Thank you for your acknowledgment and support!\"}" ] }
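As a rough illustration of the cost figures discussed in the efficiency reply above, a breadth-limited generate–rank–prune loop issues on the order of $b \cdot N \cdot T$ generation calls. The sketch below is hypothetical: `propose`, `rank_candidates`, `apply_step`, `is_terminal`, and `final_preference` are placeholder interfaces, not the paper's Algorithm 1/2.

```python
def breadth_limited_search(problem, generator, discriminator, b=8, N=8, T=10):
    """Hypothetical generate-rank-prune loop; issues at most b * N * T generation calls."""
    frontier = [problem.initial_state()]                     # placeholder initial reasoning state
    for _ in range(T):                                       # at most T reasoning steps
        candidates = [(s, a)
                      for s in frontier                      # |frontier| <= b states
                      for a in generator.propose(s, n=N)]    # N candidate next steps per state
        ranked = discriminator.rank_candidates(candidates)   # relative ranking, no absolute scores
        frontier = [apply_step(s, a) for s, a in ranked[:b]] # keep only the top-b successors
        if any(is_terminal(s) for s in frontier):
            break
    return max(frontier, key=discriminator.final_preference)
```

Separately, the thresholded renormalization described in the reply to Reviewer p8KY above (subtract the semantic-similarity distribution, clip negatives to zero, renormalize) can be written in a few lines; the toy values below reproduce the {"Let": 0.33, "Simplify": 0.66} result quoted in that worked example.

```python
import numpy as np

def adjusted_sampling_distribution(p_ori, p_sem, gamma=1.0):
    """Clip (p_ori - gamma * p_sem) at zero and renormalize the remaining mass."""
    p = np.maximum(np.asarray(p_ori, dtype=float) - gamma * np.asarray(p_sem, dtype=float), 0.0)
    return p / p.sum()  # assumes at least one strictly positive entry survives the clipping

# Toy values from the worked example (vocabulary order: Let, Simplify, Consider, Set):
print(adjusted_sampling_distribution([0.6, 0.2, 0.2, 0.0], [0.5, 0.0, 0.3, 0.2]))
# -> [0.333..., 0.666..., 0., 0.]
```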
BZz6Zb4bwa
A Large Deviation Theory Analysis on the Implicit Bias of SGD
[ "Andres R Masegosa", "Luis A. Ortega" ]
Stochastic Gradient Descent (SGD) plays a key role in training deep learning models, yet its ability to implicitly regularize and enhance generalization remains an open theoretical question. We apply Large Deviation Theory (LDT) to analyze why SGD selects models with strong generalization properties. We show that the generalization error jointly depends on the level of concentration of its empirical loss around its expected value and the \textit{abnormality} of the random deviations stemming from the stochastic nature of the training data observation process. Our analysis reveals that SGD gradients are inherently biased toward models exhibiting more concentrated losses and less abnormal and smaller random deviations. These theoretical insights are empirically validated using deep convolutional neural networks, confirming that mini-batch training acts as a natural regularizer by preventing convergence to models with high generalization errors.
[ "implicit bias", "implicit regularization", "optimization", "stochastic gradient descent", "large deviation theory" ]
https://openreview.net/pdf?id=BZz6Zb4bwa
https://openreview.net/forum?id=BZz6Zb4bwa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uvvhFXFMrd", "qdEaoTsg9y", "N8XXbaluyB", "EPTL1R88il" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730720864768, 1730581771225, 1731494409950, 1730168593826 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11297/Reviewer_r3m1" ], [ "ICLR.cc/2025/Conference/Submission11297/Reviewer_FLQi" ], [ "ICLR.cc/2025/Conference/Submission11297/Authors" ], [ "ICLR.cc/2025/Conference/Submission11297/Reviewer_f4No" ] ], "structured_content_str": [ "{\"summary\": \"this paper uses LDT framework to analyze the generalization error and to characterize the implicit bias of SGD. First the paper provides an LDT-centric view on the generalization gap, treating it as a random variable over dataset draw. In particular, the paper provides a decomposition of empirical loss along the lines of LDT where they split it into the expected loss and a function of a generalization gap. The paper then proceeds to characterize the gradient of the empirical loss according to this decomposition and provides an explanation for the regularizing effect of mini-batch SGD.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I find the framework of decomposing the empirical loss and its gradient into distinct components compelling, with each component addressing specific contributions\\u2014one aligned with minimizing the expected loss, while the remaining terms account for deviations. The paper\\u2019s additional partitioning of these deviations into two parts is also notable: one that remains consistent between gradient descent (GD) and stochastic gradient descent (SGD), thereby isolating the component responsible for explaining the generalization gap, identified as the gradient of abnormality in generalization error.\", \"weaknesses\": \"Firstly, I have some reservations regarding the suitability of the chosen formulation of Large Deviation Theory (LDT) for addressing the problem of generalization error. Specifically, in the concentration inequalities ((2) and onwards), the probability is considered over the dataset draw while the parameters are held fixed. Informally, it seems likely that the training loss on a given dataset, at the end of training, will typically fall at the far left of the distribution of empirical losses across dataset draws. In some cases, it may be an outlier\\u2014a \\u2018one-off point\\u2019 on the left\\u2014while the remainder of the distribution might shift significantly to the right and center around the expected loss.\\nGiven that LDT is inherently a probabilistic framework aimed at describing the behavior of the tails of distributions, it may not be entirely suited for capturing an outlier that lies apart from the distribution. However, it is precisely the difference between the empirical loss on the training dataset and the expected loss that is of primary interest here.\\nTo illustrate, consider an overparameterized neural network setting where training continues until convergence, achieving zero training loss. In these cases, the training data often becomes overfitted\\u2014though it can still be beneficial to proceed with training even after the network \\u2018begins to overfit\\u2019 (i.e., when training and validation loss diverge). Once the parameters are fixed, we can evaluate the empirical loss on alternative dataset draws. 
Assuming the data is sufficiently high-dimensional or exhibits some level of sparsity (such that memorizing the training data alone does not generalize well), evaluating on other dataset draws could approximate the validation or test loss.\\nIn scenarios where limited regularization is applied and the model is forced to overfit (for instance, by gradient flow), performance on these 'validation' sets may approach randomness. This would result in a distribution of empirical losses that centers around the expected loss, with the empirical loss on the specific training dataset at zero\\u2014thus representing the aforementioned outlier or 'one-off point' within the distribution. However, the actual distributions and the position of the training loss on a given dataset may differ significantly from this assumption.\\nIt appears that the issue may stem from the fact that LDT provides bounds for $\\\\hat L(D, \\\\theta)$ rather than $min_\\\\theta \\\\hat L(D, \\\\theta)$. While I could be mistaken, I remain uncertain that this distinction is inconsequential. Given the circumstances described, I am not fully sure that LDT offers meaningful insights into the generalization error.\\nOn a somewhat contrasting note, the paper includes experimental examples using datasets with only 50 samples. While I understand this choice may be due to computational constraints, I\\u2019m uncertain that it accurately reflects typical deep learning scenarios where models learn specific data representations. Consequently, I\\u2019m hesitant to conclude that the distribution graphs presented are truly representative of what occurs in broader deep learning contexts.\\nIt might be the case that LDT may serve as a heuristic for decomposing empirical loss in a meaningful way. As mentioned earlier, I think this is a strength of the paper. However, in the specific context of this paper, I am not fully sure that this reformulation offers meaningful insights into the nature of generalization error. For example, simplifying equation (4), the inverse of the rate function approximates a square root (as shown in equation (5)), suggesting that the 'abnormality' of generalization error is approximated by the square of the generalization loss. In this light, it is unclear to me why high abnormality isn\\u2019t simply another way of indicating high generalization loss, as stated in the paper; the added focus on abnormality doesn\\u2019t immediately seem to provide further benefits beyond analyzing generalization error directly.\\nThis concern extends to the observation in the section \\u201cSGD PREVENTS HIGHLY ABNORMAL GENERALIZATION ERRORS\\u201d (line 402), where it is suggested that smaller batch sizes lead to reduced cosine similarity between the gradient of abnormality for the full dataset and for the batch. If I understand correctly, this point is illustrated solely in Figure 3. 
With this in mind, it feels though this is a sophisticated reformulation of the generalization error, but lacks sufficient justification for the claim that mini-batch gradient computations lead to smaller generalization errors.\\nFurthermore, if LDT\\u2019s role here is largely heuristic, I believe its value would be more convincing with a broader set of experiments.\", \"questions\": [\"Would you mind clarifying how the LDT deals with the case of the training loss being an outlier in the distribution of empirical losses, or why this is not the case in this data?\", \"Would you mind providing more explanation about the regularizing effect of SGD in the light of your framework? In particular, I am curious about the effect of cosine similarity of the gradient of the abnormality as we vary the batch size\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces tools from \\\"large deviation theory\\\" to try to explain the beneficial implicit bias of SGD. They decompose the generalization gap of a machine learning model into: (a) the \\\"abnormality\\\" of the training dataset (which measures how unrepresentative the loss on the training dataset is, relative to the loss on randomly sampled datasets of the same size), and (b) the degree to which the loss on randomly sampled datasets of this size is concentrated. They argue that this decomposition is informative.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper introduces a three-way decomposition of both the full-batch gradient and the stochastic gradient, and shows empirically that a certain component in this decomposition is highly similar across the two gradients. This is a non-obvious finding. (However, I have questions about this result - see the 'questions' section.)\\n\\nMore generally, the paper offers a very original take on an important question (the implicit bias of SGD).\", \"weaknesses\": [\"Theorems 1,2,4 and proposition 5 consider fixed parameters $\\\\theta$ and study the behavior of the empirical loss under random draws of a dataset $D$. However, in machine learning, the training dataset $D$ is not sampled independent of $\\\\theta$. This is indeed the core challenge of the whole field of learning theory. Thus, I don't understand why Theorems 1,2,4 and proposition 5 are relevant.\", \"The paper does not actually attempt to provide an full explanation for why SGD leads to better generalization, relative to GD. Instead, the paper merely attempts to shed light on the _mechanism_ by which SGD leads to improved generalization. While this is certainly a valuable goal, I cannot actually follow the paper's logic (see below).\", \"The crux of the paper is to advocate for decomposition (3) as a meaningful decomposition of the generalization gap. This decomposition decomposes the generalization gap into: (a) the \\\"abnormality\\\" of the training dataset (which measures how unrepresentative the loss on the training dataset is, relative to the loss on randomly sampled datasets of the same size), and (b) the degree to which the loss on randomly sampled datasets of this size is concentrated. Lines 323 - 365 seem to be arguing that the SGD's efficacy can be empirically localized to the _first_ of these factors (the \\\"abnormality\\\"), as opposed to the second (the \\\"concentration\\\"). 
However, the experiments in Figure 2 show that relative to GD, SGD improves _both_ abnormality and concentration. This seems to contradict the argument in lines 323 - 365 that SGD's success is due to the abnormality alone. This suggests that the decomposition (3) is not a particularly enlightening way of reasoning about the generalization gap.\", \"Line 469 seems to be offering an explanation for why SGD also improves the concentration of the loss, but I find the argument convoluted and cannot follow it. To me, the occam's razor explanation for why SGD improves both concentration and abnormality is that this was not an informative decomposition of the generalization gap in the first place.\", \"Lines 471-486 show that if we have access to the test set, we can use this information to skip certain SGD steps and therefore generalize better. This seems obvious to me, and does not strike me as supporting the paper's theoretical analysis, as claimed on line 482. Further, I would point out that Figure 5 shows that this intervention also improves 'concentration', not just 'abnormality' (which it is intended to do), which further calls into question the utility of this decomposition.\", \"Overall, I'm not convinced that the paper sheds _any_ light on why SGD generalizes better than GD.\"], \"questions\": \"In the decomposition of the gradient and stochastic gradient in equations 11 and 12, how large are each of the three terms? The paper emphasizes that the second term tends to be the highly aligned between GD and SGD, which is indeed interesting, but I am wondering whether this term might be smaller in norm than the other two.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper applies the large deviation theory to study the generalization error of mini-batch SGD, in turn, provides a perspective on the implicit regularization and the generalization ability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-organized and well-written. The understanding of generalization ability from the perspective of LDT is interesting and the findings are novel. It also hints at a possible variant of SGD by introducing the \\\"skip\\\".\", \"weaknesses\": \"See **Questions**.\", \"questions\": \"In the following, there are several questions to be addressed.\\n\\n**Major**\\n1. About the large index $n$. What value of $n$ can be regarded as a large $n$ in practice? In Line 210, $n$ takes a value of 50,000 and the authors state that this is \\\"a universal cut-off for any model and any data-generating distribution\\\". What about a dataset with less samples, say 30,000 or even less? Will the LDT analysis still apply? \\n\\n2. On the other hand, to plot Figure 1 (right), $n$ takes a rather small value of 50. Equation (9) seems to be a good prediction. However, 50 is not *large*. Therefore again my question, when can we call $n$ a large number? Is there any criterion, at least qualitatively? \\n\\n3. The above questions are also intimately related to Equation (12) about the decomposition of the loss gradient with *mini-batches* which are typically not large in machine learning as stated by the authors in Line 309. 
For example, online learning uses a batch size of 1; many small-scale experiments use batch sizes of 32 or 64, which are not large compared with the total number of data in the dataset. If we cannot be confident about whether a value of $n$ is large or not, it will be difficult to apply the theories developed here. Please elaborate more on this.\\n\\n4. From Line 244 to Line 298, this section is entitled with *GD*, without mini-batch sampling. However, in the discussion of Figure 2, different batch sizes are taken. I feel confused about this section. If this section is indeed meant to be about GD, could you please explain why batch sizes are mentioned and how they relate to GD?\\n\\n**Minor**\\n1. In Figure 1 (left), three instances of the empirical loss Inception V3 model are plotted. What is the horizontal axis? I suppose that $\\\\mathbf{\\\\theta}$ is a vector rather than a scalar. What are those vertical dashed lines? Another question is that from the figure, it seems that the three instances have different expectation values of the loss, in contract to the statement in Line 064 that \\\"each model's empirical loss $\\\\hat{L}_n(\\\\mathbf{\\\\theta})$ has mean equal to $L(\\\\mathbf{\\\\theta})$. While I understand the statement, how may I understand this figure?\\n\\n2. In Theorem 1, why does the rate function appear with an absolute value in the r.h.s while it has already been signed to be positive? Also, in Figure 1 (center), the rate function may take negative values. Why?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
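The rate-function language that the reviews above keep returning to refers to the generic Cramér/Chernoff-type bound for an empirical mean of $n$ i.i.d. loss terms. In textbook form it reads as below; this borrows the paper's notation $\hat{L}_n(\theta)$ and $L(\theta)$ for readability, with $\ell(\theta)$ denoting the per-sample loss, and is the standard statement rather than the paper's exact Theorem 1.

```latex
% Generic Chernoff/Cramer-type bound (textbook form, not the paper's exact statement):
\[
  \Pr\!\big( \hat{L}_n(\theta) - L(\theta) \ge \epsilon \big)
  \;\le\; e^{-n\, I_\theta(\epsilon)},
  \qquad
  I_\theta(\epsilon) \;=\; \sup_{\lambda > 0}
  \Big( \lambda \epsilon - \log \mathbb{E}\big[ e^{\lambda (\ell(\theta) - L(\theta))} \big] \Big).
\]
% For sub-Gaussian losses with variance proxy sigma^2, I_theta(eps) is roughly eps^2 / (2 sigma^2),
% so fixing the exponent n * I_theta(eps) = r and solving for the deviation gives
% eps ~ sqrt(2 sigma^2 r / n): the inverse rate function behaves like a square root,
% which is the approximation several comments above allude to.
```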
BZwXMqu4zG
T2V-Turbo-v2: Enhancing Video Model Post-Training through Data, Reward, and Conditional Guidance Design
[ "Jiachen Li", "Qian Long", "Jian Zheng", "Xiaofeng Gao", "Robinson Piramuthu", "Wenhu Chen", "William Yang Wang" ]
In this paper, we focus on enhancing a diffusion-based text-to-video (T2V) model during the post-training phase by distilling a highly capable consistency model from a pretrained T2V model. Our proposed method, T2V-Turbo-v2, introduces a significant advancement by integrating various supervision signals, including high-quality training data, reward model feedback, and conditional guidance, into the consistency distillation process. Through comprehensive ablation studies, we highlight the crucial importance of tailoring datasets to specific learning objectives and the effectiveness of learning from diverse reward models for enhancing both the visual quality and text-video alignment. Additionally, we highlight the vast design space of conditional guidance strategies, which centers on designing an effective energy function to augment the teacher ODE solver. We demonstrate the potential of this approach by extracting motion guidance from the training datasets and incorporating it into the ODE solver, showcasing its effectiveness in improving the motion quality of the generated videos with the improved motion-related metrics from VBench and T2V-CompBench. Empirically, our T2V-Turbo-v2 establishes a new state-of-the-art result on VBench, **with a Total score of 85.13**, surpassing proprietary systems such as Gen-3 and Kling.
[ "text-to-video generation", "diffusion model", "consistency model" ]
Accept (Poster)
https://openreview.net/pdf?id=BZwXMqu4zG
https://openreview.net/forum?id=BZwXMqu4zG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xd5BHiifBt", "xOg2yBTPfH", "sa2SKLGalh", "qVufHzw4hf", "qKGh3ieNvj", "p8HX5LMr6E", "oxP4pJrTr9", "oHrX6pGblw", "my36diVx4I", "mOzPCx5jP4", "m0Aq0CZEka", "lwV3PbA0nz", "gFEK03VVMI", "g6qRkMe3hd", "a4QY8qk5cK", "YpBeGj6EOX", "Y2XdXm2Qat", "X3IbYFCL3i", "VOiyjz4cP0", "UDGfYv8c3b", "U6WSINJwZQ", "RzIikMWXsz", "RmMlZp4k8G", "QspwI3FP6i", "Nty4pZ9xCy", "Nm7dhkwIg6", "MQDkPLR7KH", "L2cIF0QBt3", "KFNPMMFxl0", "GikBeTfCLp", "FjteUcqu9B", "EIAJd960MK", "Cqm74RxN05", "Bfmd9NODNi", "A9YGHQizIR", "8yKDx7vaNM", "8s6MgofOz4", "6ZCA5wSGkL", "5lJzStTg04", "2jzi2CJE27", "1yTyEKslbM", "1cDGm2GiOr", "0SwLtsaJcW" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732582708266, 1732175420204, 1732176305074, 1733031462571, 1730733471721, 1732653651995, 1732175075438, 1732475566076, 1732731631503, 1732612987786, 1732954333210, 1732174868409, 1732476278218, 1732475366138, 1732913807036, 1732523512735, 1732175137251, 1732934501272, 1732589375012, 1732178012780, 1732175841568, 1732898311358, 1732939508688, 1730117079262, 1732476147903, 1730120636161, 1732596566228, 1732175752100, 1732834584006, 1732731577663, 1732596445287, 1734320831622, 1730782541025, 1732175714313, 1732475794550, 1733006507326, 1730624438816, 1733295121993, 1737523569336, 1732175239473, 1732596306616, 1732834518789, 1732523670576 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_dKgW" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_N6o1" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_N6o1" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_y5Jq" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_XVsE" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_XVsE" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_XVsE" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" 
], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_dKgW" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Area_Chair_FUJY" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_PqeK" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_PqeK" ], [ "ICLR.cc/2025/Conference/Submission3314/Reviewer_y5Jq" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ], [ "ICLR.cc/2025/Conference/Submission3314/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you to the authors for their response, which addressed most of my concerns. I have increased my score to 6.\"}", "{\"title\": \"(2/2) Response to Reviewer dKgW\", \"comment\": \"> I\\u2019m curious why this method, although based on distilling VideoCrafter2, outperforms VideoCrafter2 across multiple tasks, such as in the three metrics shown in Table 1.\\n\\nFirst, the performance differences can be attributed to the quality of the training datasets. As shown in Table 2, when trained exclusively on the lower-quality WV dataset, VCM (81.31/55.51/76.15) distilled from VideoCrafter2 (82.20/73.42/80.44) does not outperform VideoCrafter2. In contrast, training on higher-quality datasets, such as OV and VG, results in significant improvements, particularly in terms of visual quality. The key reason for this improvement lies in the nature of consistency distillation, which benefits greatly from high-quality training data. Higher-quality data produces better distillation targets, as outlined in Equation 4, enabling $\\\\boldsymbol{f}_\\\\theta$ to better regress toward latents that correspond to videos of higher quality.\\n\\nSecond, our method incorporates additional feedback from a mixture of reward models trained to reflect human preferences, alongside learning from the motion priors extracted from the training videos. These additional supervision signals significantly enhance performance, as demonstrated in Tables 3 and 4.\"}", "{\"title\": \"Response to Reviewer PqeK\", \"comment\": \"We thank the reviewer for their feedback on our work. Please find our detailed response below.\\n\\n> The method combines existing techniques such as consistency distillation and motion guidance, so its novelty is somewhat limited.\\n\\nWe respectfully disagree with the reviewer's argument on our novelty. In terms of guidance strategy, the primary goal of our work is not to design a new energy function for motion guidance. Instead, we aim to empirically demonstrate that augmenting the teacher ODE solver with the energy function's gradients of a conditional guidance strategy can **distill a more capable student video generator while significantly reducing inference costs**. 
In this context, we showcase the potential of our approach by integrating MotionClone\\u2019s energy function into the teacher ODE solver.\\n\\nAdditionally, adapting MotionClone\\u2019s motion guidance techniques for training is significantly non-trivial for several reasons:\\n\\n- **Dependency on Reference Videos**: MotionClone relies on access to reference videos with high-quality motion. However, identifying suitable reference videos for general text prompts is challenging, which limits its effectiveness and applicability for generic video generation tasks.\\n- **High Computational Cost**: Computing the gradient of the energy function during inference incurs substantial memory and latency overhead. For instance, generating a single video with MotionClone can take approximately seven minutes and 30 GB GPU memory.\\n\\nTo address these challenges, we leverage the critical insight that each training video inherently serves as an ideal reference video for its corresponding training prompt. Additionally, we design a separate preprocessing phase to precompute the motion guidance before the actual training phase. As a result, this preprocessing phase eliminates the need for computationally expensive gradient calculations during training.\\n\\nAs demonstrated in Tables 1 and 4, augmenting the teacher ODE solver with motion guidance leads to significant performance gains and improved motion quality across different evaluation metrics.\\n\\n> VideoCrafter uses a 2D+1D decoupled spatial-temporal approach, whereas most recent advanced methods employ full 3D attention. How would motion guidance be applied when using 3D attention?\\n\\nWe would like to address the reviewer\\u2019s understanding of our contributions. The applicability and generalizability of MotionClone\\u2019s technique are beyond the scope of our paper. The primary goal of our work is not to design a new energy function for motion guidance.\\n\\nInstead, we aim to empirically demonstrate that **augmenting the teacher ODE solver with the energy function gradients of a conditional guidance strategy can distill a more capable student video generator while significantly reducing inference costs**. To illustrate this, we integrate MotionClone\\u2019s energy function into the teacher ODE solver, highlighting the potential of this approach.\\n\\nNonetheless, MotionClone\\u2019s motion guidance can also be applied to advanced methods, such as Open-Sora, Open-Sora-Plan, and Latte [1]. As shown in Figure 3 of [2], these models employ a similar DiT-based architecture featuring two types of Transformer blocks: spatial and temporal. Since the temporal Transformer blocks process information across temporal dimensions, MotionClone\\u2019s success can be replicated within these frameworks with minimal adaptation. For methods utilizing full 3D attention, e.g., CogVideoX, we can always reshape the attention matrix to obtain temporal attention across different video frames. And thus, full 3D attention should not be a barrier to leverage MotionClone\\u2019s techniques.\\n\\n[1] Ma et al., Latte: Latent Diffusion Transformer for Video Generation.\\n\\n[2] Zhao et al., Real-Time Video Generation with Pyramid Attention Broadcast.\\n\\n> What is the peak training cost, such as peak GPU memory, compared to training a single model?\\n\\nThe peak training cost of our method is **identical** to that of training a single model. 
As highlighted in our paper, the motion prior is extracted during a separate preprocessing phase and does **not** contribute to the peak GPU memory usage during training.\\n\\n> does it perform on less powerful video diffusion models like Zeroscope\\u2014can it still achieve results comparable to VideoCrafter?\\n\\nT2V-Turbo\\u2019s success has already been demonstrated with two different teacher models: VideoCrafter2 and ModelScope. Regarding the generalizability of MotionClone, the original MotionClone paper conducted its experiments using AnimateDiff [3] as the base model, and our work successfully extended this technique to VideoCrafter2. Therefore, there is no inherent barrier to applying this approach to less powerful models like Zeroscope, and we expect our T2V-Turbo-v2 to achieve similarly successful results when employing Zeroscope as the teacher model.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer PqeK,\\n\\nThank you so much for raising your rating on our work!\\n\\nAgain, we would like to emphasize that **the primary goal of our research is not to design a superior energy function to improve motion guidance**. Instead, we aim to highlight the vast design space of conditional strategies and demonstrate their potential to enable a more capable student video generator without adding inference overhead.\\n\\nIn this paper, we empirically show that leveraging MotionClone's energy function enhances the motion quality of the generated videos. Thus, we believe our paper successfully achieves its proof-of-concept mission, **paving the way for future research to distill knowledge from a more diverse set of energy functions.**\"}", "{\"summary\": \"This paper introduces T2V-Turbo-v2, an improved text-to-video model that enhances video generation by distilling a consistency model from a pretrained T2V model during the post-training phase. The method integrates multiple supervision signals\\u2014including high-quality training data, reward model feedback, and conditional guidance\\u2014into the consistency distillation process. Experiments show a new state-of-the-art result on VBench.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized and easy to follow.\\n2. The method employing motion guidance is logically sound, and the experimental results showing improved semantic scores effectively validate its effectiveness.\", \"weaknesses\": \"1. The early work Energy-guided stochastic differential equations [1] first presents a framework that utilizes an energy function to guide the generation process for diffusion models. Please cite this paper.\\n2.\\tIn Figure 2, does the DDIM inversion require k forward passes for each training step? If so, does this introduce excessive computational cost?\\n3.\\tPlease provide .mp4 files for visual comparisons, as VBench cannot fully substitute for a user study. Including video files will allow reviewers and readers to better assess the performance and quality of the proposed method.\\n\\n[1] The method employing motion guidance is logically sound,\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the further questions\", \"comment\": \"Dear Reviewer N6o1,\\n\\nWe sincerely appreciate that you increased the rating for our work!
We have cited the EGSDE paper in our revised manuscript.\\n\\nAdditionally, please find our response to your questions below:\\n\\n> Do you think the existing open-domain academic datasets would suffice for effectively distilling larger-scale diffusion models, such as CogVideoX?\\n\\nThank you for the great question! As demonstrated in our paper, consistency distillation can be performed efficiently. In Table 2, we show that VCM distilled from VideoCrafter2 achieves a superior Quality Score compared to its teacher model by training on OV or VG datasets for just 8,000 gradient steps. Our training was conducted on a machine with 8 GPUs and a total batch size of 24 (3 * 8), equating to approximately 200,000 text-video pairs. Both OV and VG datasets are densely captioned, open-sourced, and readily accessible, making them viable options for distilling larger-scale diffusion models like CogVideoX. Notably, recent work [1] has successfully trained video generation models capable of generating long videos using entirely open-sourced data, including OV and WV datasets.\\n\\n[1] Jin et al. Pyramidal Flow Matching for Efficient Video Generative Modeling. arxiv: 2410.05954.\\n\\n> Did you observe any noticeable decline in visual quality based on human evaluation?\\n\\nVCM\\u2019s visual quality can surpass that of its teacher model when the number of sampling steps is increased. However, videos generated by the student VCM model may exhibit decreased text-video alignment. This observation motivated our incorporation of reward models to address and mitigate this performance drop.\\n\\n> Could you elaborate on the differences and unique challenges between video distillation and image distillation? For instance, can existing image distillation techniques be directly applied to video models?\\n\\nThank you for the insightful question. Fundamentally, existing techniques for distilling image diffusion models can be directly applied to video diffusion models, as video tensors are essentially stacks of image tensors. However, video distillation introduces unique challenges, particularly in modeling cross-frame dependencies. For instance, it is crucial to evaluate whether image-based distillation methods might degrade motion quality when applied to video models. Our method provides a solution to address the potentail quality loss by incorporating reward objectives and conditional guidance as additional supervision signals.\"}", "{\"title\": \"(2/3) Response to Reviewer XVsE\", \"comment\": \"> Could you explore why VCM achieves the best performance using only OV, while the proposed method does not attain similar results?\\n\\nThe performance difference is discussed in Lines 365\\u2013375. We hypothesize that the modest performance gains of T2V-Turbo-v2 on OV stem from the excessively long captions in the OV dataset, which are not well-suited to the short context length of the RMs we used. Specifically, the maximum context length of HPSv2.1 and CLIP is 77 tokens, while InternV2\\u2019s is only 40 tokens. 
Consequently, these RMs can only operate optimally on datasets with shorter captions, which explains why T2V-Turbo-v2 performs better on WV compared to OV or VG.\\n\\nIn contrast, VCM optimizes only the consistency distillation loss, making it better equipped to leverage the high-quality video data in OV, resulting in its superior performance on this dataset.\\n\\n> It appears that different methods may lead to varying conclusions regarding dataset choices.\\n> \\n\\nWe highlight this point through out our paper. Curating training datasets for different learning objective is crucial to achieve optimal results.\\n\\n> Additionally, how would the results differ if T2V-Turbo (not v2) was used?\\n\\nWe believe that similar results will be obtained if we use T2V-Turbo that optimizes the LoRA weights, as the results in Table 2 are obtained without augmenting the teacher ODE solver with motion guidance (w/o MG).\\n\\n> Considering OV and VG are both high-quality video datasets, it would be insightful to analyze why OV+WV exhibits poorer performance compared to VG+WV, which performs quite well.\\n\\nWe conjecture that the poorer performance of VCM and our T2V-Turbo-v2 on the OV+WV dataset stems from the significant domain gap between OV and WV data. In our preliminary study, we observed that most OV videos are centered around human activities, whereas both WV and VG datasets encompass a broader diversity of video content. Although WV videos are lower in quality and often include watermarks, their content diversity aligns more closely with VG than OV. This domain gap likely hampers the performance of VCM and T2V-Turbo-v2 on OV+WV, whereas the VG+WV combination benefits from greater alignment between the datasets.\\n\\n> In the section on data preprocessing, the approach involves using DDIM Inversion on all videos to obtain the necessary motion guidance for training, which is effective in reducing training time. Nevertheless, this approach does not significantly simplify the overall complexity. It would be better to explore improvements to the motion guidance strategy itself to enhance training efficiency.\\n\\nWe acknowledge that our preprocessing step requires additional computational resources, amounting to approximately 400 GPU hours. However, this is a manageable cost that can be completed in about two days on a server equipped with 8 NVIDIA A100 GPUs (40 GB each). Importantly, this additional preprocessing effort yields significant benefits in terms of motion quality and inference acceleration.\\n\\nIn comparison, the original MotionClone approach requires approximately 6 minutes to generate a single video, with 3 minutes spent on DDIM inversion from a reference video. Moreover, the peak GPU memory consumption can reach up to 35 GB due to the gradient calculations involved. In contrast, our model eliminates the need for identifying appropriate reference videos, conducting DDIM inversion, or calculating gradients during inference without adding any inference overhead compared to inference from a VCM. For example, our T2V-Turbo-v2 only takes 5 seconds to generate a video in 16 steps using BFloat16.\\n\\nThe efficiency and performance gains of our approach are clearly reflected in its results. Our model achieves the #1 ranking on VBench, surpassing numerous proprietary video generation systems, including Gen-3, Kling, and MiniMax. 
This demonstrates that the preprocessing trade-off is well-justified, offering both superior motion quality and significant reductions in inference time.\\n\\n> It would be valuable to include theoretical or experimental results to analyze why the EMA model in consistency distillation is unnecessary.\\n\\nOur decision to remove the EMA model is based on empirical observations, which show that its removal does not lead to training instability. For readers seeking a theoretical perspective, we refer to Section 3.2 of [1], which provides an in-depth analysis explaining why the EMA model can be omitted without compromising training stability.\\n\\n[1] Song et al. Improved techniques for training consistency models. ICLR 2024\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer y5Jq,\\n\\nWe greatly appreciate your insightful feedback, which has significantly contributed to the clarity and enhancement of our work. We have carefully addressed your comments in our response, clarified potential misunderstandings, and included the results for OV + VG and OV + VG + WV in the same settings as Table 2 to explain why we do not use the OV datasets to train our main models.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We hope our responses address your concerns thoroughly and provide additional support for advocating the acceptance of our paper, potentially leading to an improved rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Let's discuss!\", \"comment\": \"Dear Reviewer XVsE,\\n\\nThank you once again for your time and efforts in providing valuable feedback on our work. Your insights have been instrumental in helping us refine and improve our submission. \\n\\nWe would like to kindly invite you to follow up on the discussion regarding our work. If you have any additional comments or concerns, please don\\u2019t hesitate to let us know, and we will do our utmost to address them promptly.\\n\\nThe Authors\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed response and for addressing my concerns. The additional ablation studies on the data are particularly interesting and valuable. Based on the clarifications and supplementary experiments, I have increased my score.\\n\\nI do, however, have some additional questions and points of curiosity:\\n\\n1. If one were to distill larger-scale diffusion models, such as CogVideoX, but lacked access to corresponding large-scale training datasets, do you think the existing open-domain academic datasets would suffice for effective distillation? From your experiments, did you observe any noticeable decline in visual quality based on human evaluation?\\n\\n2. Could you elaborate on the differences and unique challenges between video distillation and image distillation? For instance, can existing image distillation techniques be directly applied to video models? From my understanding, many successful video diffusion models fundamentally extend image diffusion by treating videos as higher-dimensional imagess, without changing the underlying modeling approach.\\n\\nLastly, I\\u2019d like to apologize for missing the related work reference in my initial review, which is titled EGSDE: Unpaired Image-to-Image Translation via Energy-Guided Stochastic Differential Equations.\\n\\nThank you again for your efforts.\"}", "{\"title\": \"Thank you again!\", \"comment\": \"Dear Reviewer PqeK,\\n\\nWe hope you are having a wonderful weekend! 
Once again, we sincerely appreciate your valuable feedback on our work.\\n\\nWe kindly invite you to follow up on the discussion, and we would be happy to address any additional concerns or questions you might have.\\n\\nThe Authors\"}", "{\"title\": \"(1/3) Response to Reviewer XVsE\", \"comment\": \"We thank the reviewer for the constructive feedback. Please find our detailed response below.\\n\\n> The integration of MotionClone into T2V-Turbo is an interesting direction. However, while the contribution highlights the potential for diverse forms of energy functions, this study primarily utilizes the motion representation from the MotionClone work without substantial modifications to the energy function's format.\\n\\nWe would like to clarify some misunderstandings in the reviewer\\u2019s assessment of our work:\\n\\n1. **Objective of the Study**: The primary goal of our work is not to design a new energy function for motion guidance. Instead, we aim to empirically demonstrate that augmenting the teacher ODE solver with the energy function's gradients of a conditional guidance strategy can **distill a more capable student video generator while significantly reducing inference costs**. In this context, we showcase the potential of our approach by integrating MotionClone\\u2019s energy function into the teacher ODE solver. In other words, **our method is not confined to MotionClone\\u2019s motion guidance**.\\n2. **Challenges in Scaling MotionClone for Training**: Adapting MotionClone\\u2019s motion guidance techniques for training is significantly non-trivial for several reasons:\\n - **Dependency on Reference Videos**: MotionClone relies on access to reference videos with high-quality motion. However, identifying suitable reference videos for general text prompts is challenging, which limits its effectiveness and applicability for generic video generation tasks.\\n - **High Computational Cost**: Computing the gradient of the energy function during inference incurs substantial memory and latency overhead. For instance, generating a single video with MotionClone can take approximately seven minutes and 30 GB GPU memory.\\n \\n To address these challenges, we leverage the critical insight that each training video inherently serves as an ideal reference video for its corresponding training prompt. Additionally, we design a separate preprocessing phase to precompute the motion guidance before the actual training phase. As a result, this preprocessing phase eliminates the need for computationally expensive gradient calculations during training.\\n\\n> The other enhancements are relatively minor, such as the reward model used, which only adds a CLIP compared to T2V-Turbo.\\n> \\n\\nWe would like to clarify the contributions of our work and address the concerns regarding the significance of our enhancements.\\n\\n1. **Coupled Effects of Training Data and Reward Models**:\\n \\n The choices of training data and reward models are arguably the most critical components in the post-training phase of a generative AI model. In our work, we conduct a rigorous and thorough empirical investigation into how these factors impact the performance of T2V models. A key finding of our study is that their effects are **not orthogonal**\\u2014the interaction between training datasets and RMs plays a pivotal role in shaping the final performance. 
Specifically, our Section 4.2, 4.3, and Appendix C.2 empirically demonstrate that **curating training datasets for different learning objectives is crucial** for achieving optimal results. To the best of our knowledge, this is the first work to systematically study how the selection of training data and RMs affects the post-training performance of a T2V model. Therefore, we firmly believe our findings provide invaluable insights for advancing post-training research in video generation models.\\n \\n2. **Different Conclusion Regarding the Use of Reward Models**:\\n \\n Our findings regarding the effects of different RMs diverge significantly from the conclusions of the T2V-Turbo paper. T2V-Turbo suggests that feedback from a single image-text RM (HPSv2.1) is sufficient to achieve substantial performance gains. In contrast, our work reveals that relying solely on HPSv2.1 results in only minimal enhancements to video quality. Instead, we show that incorporating feedback from a more diverse set of RMs is essential to achieve meaningful performance improvements.\\n \\n The reasons behind these differing conclusions are centered on the datasets used for different learning objectives, which are summarized below:\\n \\n - Our T2V-Turbo-v2 leverages a dataset that combines videos from VG and WV. Specifically, we minimize the consistency distillation loss on the entire dataset but optimize reward objectives only on the short-captioned WV data.\\n - T2V-Turbo was trained exclusively on WV data with short video captions.\\n \\n These experimental differences underscore a critical insight: the impact of reward feedback is highly dependent on the dataset composition and design. Our results highlight the importance of curating datasets and carefully selecting RM sets to achieve optimal performance in video generation tasks.\"}", "{\"title\": \"Looking forward to your response\", \"comment\": \"Dear Reviewer PqeK,\\n\\nWe greatly appreciate your time and feedback on our work. We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Looking forward to your response!\", \"comment\": \"Dear Reviewer XVsE,\\n\\nWe greatly appreciate your insightful feedback, which has significantly contributed to the clarity and enhancement of our work. We have carefully addressed your comments in our response, clarified potential misunderstandings, and included the new human evaluation results you requested.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether these changes warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer XVsE\", \"comment\": \"Dear Reviewer XVsE,\\n\\nWe appreciate your detailed feedback. Please find our detailed comments below.\\n\\n> The methodology appears to be a combination of MotionClone and consistency distillation. Relying solely on MotionClone as a contribution seems insufficient, especially since the paper uses MotionClone's guidance strategy without modifications or enhancements.\\n\\nWe would like to address a major misunderstanding by the reviewer.
**We do not rely on MotionClone as a contribution.** Instead, our work demonstrates how conditional guidance strategies\\u2014specifically, the gradients of energy functions\\u2014can be utilized as additional supervision signals during the post-training phase of video generation models.\\n\\nIn this paper, we adapt MotionClone's guidance strategy to the training phase by addressing two key challenges: (1) its reliance on manually selected reference videos and (2) its substantial computational cost. Moreover, **our method does not introduce inference overhead, while MotionClone requires approximately 7 minutes to generate a video.**\\n\\nAgain, we emphasize that our method is not limited to MotionClone\\u2019s motion guidance but highlights the broader potential of conditional guidance strategies to enhance model performance.\\n\\n> MotionClone itself does not use carefully curated high-quality videos but rather employs commonly used videos like those from the DAVIS dataset. Hence, the idea of using training videos as their own reference videos is natural and not particularly captivating.\\n\\nThe original DAVIS [1] dataset contains only 50 videos, with MotionClone utilizing just 40 of them. This severely limits the scalability of MotionClone for training, as it is impractical to rely on such a small dataset. Additionally, **MotionClone requires manual effort to select different reference videos for different text prompts**, further complicating its application at scale.\\n\\nIn contrast, our approach is designed to handle datasets with millions of videos and eliminates the need for any manual effort in selecting reference videos for text prompts. By leveraging each training video as its own reference, our method achieves scalability and automation, addressing the challenges inherent in MotionClone\\u2019s original setup.\\n\\n[1] Pont-Tuset et al., The 2017 DAVIS Challenge on Video Object Segmentation. \\n\\n> Does this imply that motion guidance is only required during training and not during inference?\\n\\nWe never use MotionClone's guidance during inference. Our generation process is as fast as the baseline VCM.\\n\\n> Thus, it remains unclear whether the conclusions are applicable to other models.\\n\\nThe original MotionClone is built on top of AnimateDiff [2], and prior work, such as AnimateLCM [3], has demonstrated that an LCM can be learned from AnimateDiff. Consequently, there is no inherent barrier to applying our approach to other models, such as AnimateDiff.\\n\\nWe argue that **our experiments already highlight the strong generalizability of our method**. Unlike the original MotionClone paper, which uses AnimateDiff as its base model, we chose VideoCrafter2 as our foundation, successfully demonstrating the applicability of our approach to different video base models.\\n\\n[2] Guo et al., AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning. ICLR 2024.\\n\\n[3] Wang et al., AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data. SIGGRAPH ASIA 2024\"}", "{\"comment\": \"We thank the authors time to add the quantitative results for my questions, and the explanation of other concerns, like the computation resource, the hyper-parameter choice, etc. 
I will keep my initial rating.\"}", "{\"title\": \"(3/3) Response to Reviewer XVsE\", \"comment\": \"> It would be better to conduct a user study to further verify the performance from a human perspective.\\n\\nWe thank the reviewer for the constructive suggestion. In response, we conducted a human evaluation to compare the 16-step video generation of T2V-Turbo-v2 w/o MG and T2V-Turbo-v2 w/ MG to verify the effectiveness of motion guidance.\\n\\nWe hire annotators from Amazon Mechanical Turk to answer two questions: Q1) Which video demonstrates better motion quality? Q2) Which video do you prefer given the prompt? Appendix E provides further experimental details.\\n\\nThe human evaluation results in Figure 9 of Appendix F show that videos generated by T2V-Turbo-v2 w/ MG are consistently favored over those from T2V-Turbo in terms of motion quality and general preference. These findings corroborate our automatic evaluation in Table 4, verifying that incorporating motion guidance enhances model performance and improves the motion quality of the generated videos.\\n\\n> The pseudo-code in Algorithm 2 for training includes theta-, but the method states that EMA is not needed. Please update either the text or the algorithm to ensure consistency.\\n\\nWe thank the reviewer for pointing it out. We have updated Algorithm 2 to remove $\\\\theta^-$.\"}", "{\"comment\": \"Thank the authors for their timely response, which has partially addressed my concerns.\\n\\nThe authors have highlighted that the paper aims to validate the enhancement of video generation models using the energy function's gradients of a conditional guidance strategy, and that the method is not limited to MotionClone\\u2019s motion guidance. However, I believe that simply applying MotionClone and integrating it in a straightforward manner is relatively insufficient to enhance the novelty of the method and validate the research objective. Thus, I remain unconvinced regarding the methodological contributions.\\n\\nI realize I may not have expressed this clearly before, so I would like to further clarify my point about reliance on MotionClone as a contribution: since the authors mentioned that the approach is not limited to MotionClone's motion guidance, it would be better either to experiment with multiple conditional guidance strategies or to further improve MotionClone. This would better highlight the innovation of the method and the persuasiveness of the research objective.\\n\\nAdditionally, the authors noted that the proposed method does not require motion guidance during inference. I believe this is valuable and meaningful for the field of video generation.\\n\\nOverall, after reconsidering the strengths and weaknesses of this paper in light of my concerns, I have decided to raise my score to 6 and leave the final decision to the area chair.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer dKgW,\\n\\nThank you so much for recognizing our updates!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"General response: new experimental results!\", \"comment\": \"We appreciate the reviewers for their time and constructive feedback on our work. We have responded to individual reviews below and would like to clarify some common misunderstandings about our work.\\n\\n1. We firmly believe that our rigorous and thorough ablation studies on the training data and reward models provide valuable insights for future research on post-training strategies for video generation models.
Our findings reveal the surprising **coupled effects of training data and RMs**, highlighting the importance of **curating datasets tailored to specific learning objectives to achieve optimal results**. To the best of our knowledge, this is the first work to systematically examine how the selection of training data and RMs impacts the post-training performance of a T2V model.\\n2. **Our method is not confined to MotionClone\\u2019s motion guidance.** Instead, we leverage MotionClone\\u2019s energy functions to highlight the vast design space of conditional guidance strategies, which enables the distillation of a more capable student video generator while significantly reducing inference costs.\\n3. **Adapting MotionClone\\u2019s motion guidance techniques for training is significantly non-trivial** due to 1) its dependency on reference videos and 2) high computational costs. We address these challenges by leveraging the critical insight that each training video is an ideal reference video for its corresponding training prompt. Additionally, we design a separate preprocessing phase to precompute the gradients of energy functions, enabling efficient training.\\n4. **Trade-offs of the Preprocessing Phase**. While our preprocessing phase introduces additional computational overhead (~400 GPU hours), it yields significant improvements in motion quality and accelerates the inference process.\\n\\nAdditionally, we include two new experiment results:\\n\\n1. In Appendix E, we conduct experiments on OV + VG and OV + VG + WV to **corroborate the results in Table 2** and clarify why we excluded OV data when training our main models.\\n2. In Appendix F, we conduct a **human evaluation** to compare the 16-step generation of T2V-Turbo-v2 w/o MG and T2V-Turbo-v2 w/ MG, confirming that incorporating motion guidance significantly enhances model performance and improves the motion quality of generated videos.\"}", "{\"title\": \"Response to Reviewer N6o1\", \"comment\": \"> The early work Energy-guided stochastic differential equations[1] first presents a framework that utilizes an energy function to guide the generation process for a diffusion model. Please cite this paper.\\n\\nCan you please include the citation for the paper you mentioned? We are happy to cite the paper in our manuscript.\\n\\n> In Figure 2, does the DDIM inversion require k forward passes for each training step? If so, does this introduce excessive computational cost?\\n\\nAs our paper already mentions, the inverse DDIM used to obtain motion prior can be done in a separate preprocessing phase and thus will **NOT** introduce any additional computational cost for training.\\n\\n> Please provide .mp4 files for visual comparisons\\n\\nThank you for the suggestions. We have included the original video files in the supplemental material. On the other hand, you can click to play the videos in our manuscript if you open it using Adobe Acrobat Reader.\"}", "{\"comment\": \"I sincerely thank the authors for their responses and the additional experiments they provided. I reviewed the comments from other reviewers, and the authors have addressed most of my concerns.\\n\\nHowever, despite the further clarification on contributions, I still believe that the contributions are relatively limited.\\n\\n1. As reviewers PqeK and dKgW pointed out, the methodology appears to be a combination of MotionClone and consistency distillation. 
Although the authors mentioned that they aim to validate the enhancement of video generation models using the energy function's gradients of a conditional guidance strategy, relying solely on MotionClone as a contribution seems insufficient, especially since the paper uses MotionClone's guidance strategy without modifications or enhancements.\\n\\n2. The authors discussed challenges in scaling MotionClone for training. \\n - Regarding the dependency on reference videos, MotionClone itself does not use carefully curated high-quality videos but rather employs commonly used videos like those from the DAVIS dataset. Hence, the idea of using training videos as their own reference videos is natural and not particularly captivating. \\n - Regarding the computational cost, the authors mention that the proposed method does not increase inference cost. Does this imply that motion guidance is only required during training and not during inference?\\n\\n3. While the selection and analysis of training data and reward models are interesting, the authors conducted experiments and analyses only on VideoCrafter2 without attempting further validation on more models. Thus, it remains unclear whether the conclusions are applicable to other models.\"}", "{\"title\": \"Response to Reviewer XVsE\", \"comment\": \"Dear Reviewer XVsE,\\n\\nThank you so much for raising your score! Please find our detailed response below.\\n\\n> I believe that simply applying MotionClone and integrating it in a straightforward manner is relatively insufficient to enhance the novelty of the method and validate the research objective.\\n\\nFirst, we would like to reiterate that the primary goal of our research is not to design a superior energy function to improve motion guidance. Instead, we aim to highlight the vast design space of conditional strategies and demonstrate their potential to enable a more capable student video generator without adding inference overhead. In this paper, we leverage MotionClone's energy function to augment the teacher ODE solver, demonstrating its effectiveness in enhancing the motion quality of the generated videos.\\n\\nSecond, integrating MotionClone into the training process is far from straightforward. As highlighted in the paper and our earlier response, scaling MotionClone for training involves significant engineering efforts to address its considerable computational cost. **We believe our paper successfully achieves its proof-of-concept mission, paving the way for future research to distill knowledge from a more diverse set of energy functions.**\\n\\n\\n> it would be better either to experiment with multiple conditional guidance strategies.\\n\\nWe sincerely thank the reviewer for their valuable suggestion. We agree that experimenting with multiple conditional guidance strategies would further strengthen our argument. However, due to the limited timeframe of the rebuttal period, we are unable to conduct these additional experiments and plan to explore them in future work.\"}", "{\"summary\": \"In the paper, the authors present T2V-Turbo-v2, a method to enhance diffusion-based text-to-video models by distilling a consistency model from a pre-trained T2V model. This approach integrates high-quality training data, feedback from multiple reward models, and motion guidance into the distillation process. Through ablation studies, it emphasizes the importance of high-quality datasets and diverse reward models to improve visual quality and text-video alignment. 
The method also verifies the effectiveness of incorporating motion guidance to enhance video motion quality. T2V-Turbo-v2 achieves a state-of-the-art total score of 85.13 on VBench, outperforming advanced text-to-video models like Gen-3 and Kling.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The experiments are comprehensive and thorough, with detailed analysis.\\n2. The analysis of minimizing CD loss using entire datasets while restricting reward optimization to short-captioned datasets is interesting and meaningful, which may encourage future work.\\n3. This paper establishes a new SOTA total score on VBench, leveraging open-source models to outperform some advanced text-to-video models.\\n4. The paper is well-written and easy to understand.\", \"weaknesses\": \"1. The integration of MotionClone into T2V-Turbo is an interesting direction. However, while the contribution highlights the potential for diverse forms of energy functions, this study primarily utilizes the motion representation from the MotionClone work without substantial modifications to the energy function's format. The other enhancements are relatively minor, such as the reward model used, which only adds a CLIP compared to T2V-Turbo, and the removal of the EMA model. It might be beneficial to explore further variations to strengthen the contribution.\\n2. It would be beneficial to include a discussion on the performance differences between VCM and the proposed method across various datasets. For instance:\\n - Could you explore why VCM achieves the best performance using only OV, while the proposed method does not attain similar results? It appears that different methods may lead to varying conclusions regarding dataset choices. Additionally, how would the results differ if T2V-Turbo (not v2) was used?\\n - Considering OV and VG are both high-quality video datasets, it would be insightful to analyze why OV+WV exhibits poorer performance compared to VG+WV, which performs quite well.\\n3. In the section on data preprocessing, the approach involves using DDIM Inversion on all videos to obtain the necessary motion guidance for training, which is effective in reducing training time. Nevertheless, this approach does not significantly simplify the overall complexity. It would be better to explore improvements to the motion guidance strategy itself to enhance training efficiency.\\n4. It would be valuable to include theoretical or experimental results to analyze why the EMA model in consistency distillation is unnecessary.\\n5. It would be better to conduct a user study to further verify the performance from a human perspective.\", \"questions\": \"1. The pseudo-code in Algorithm 2 for training includes theta-, but the method states that EMA is not needed. Please update either the text or the algorithm to ensure consistency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up the discussion\", \"comment\": \"Dear Reviewer N6o1,\\n\\nWe greatly appreciate your time and feedback on our work. We have carefully addressed your comments and clarified potential misunderstandings. Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. 
We would greatly appreciate it if you could consider whether our responses warrant a reevaluation of your rating.\\n\\nMoreover, please take time to include the citation for the paper you mentioned. We are happy to cite the paper in our manuscript.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper present motionclone-based consistency distillation, using motion guidance to improve temporal and spatial coherence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This method demonstrates strong performance, achieving state-of-the-art (SOTA) results on VBench. Visualized video outputs appear smooth and high-quality, reflecting its effective design.\\n\\n2. The paper is clearly written, allowing reviewers to easily understand the authors' intent.\\n\\n3. This method is simple and effective, making it generally more practical.\", \"weaknesses\": \"This method is overly engineering-focused and lacks novelty, as the motion guidance and consistency distillation techniques involved are already established, making it appear less innovative.\\n\\nAdditionally, while it conducts extensive ablation experiments on motion guidance and reward models, this does not constitute a significant contribution of the paper. I am unclear about the paper's contributions; is it providing more interesting insights? It would be helpful if the authors could briefly summarize this in their response.\\n\\nRegarding the contribution summary of the paper (L117), it seems to emphasize the advantages of existing work and the potential of extracting motion priors. And, I do not see any strong insights that stand out; motionclone has already demonstrated this fairly clearly. If the authors mean that motion priors are particularly useful during T2V training, they should provide more experiments. For example, training SVD and VideoCrafter2 shows that the insights presented in T2V-Turbo are quite limited.\", \"questions\": \"1. I\\u2019m curious why this method, although based on distilling VideoCrafter2, outperforms VideoCrafter2 across multiple tasks, such as in the three metrics shown in Table 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up the discussion\", \"comment\": \"Dear Reviewer PqeK,\\n\\nThank you again for serving as a reviewer! Your feedback has been valuable in helping us clarify and improve our work. We have worked diligently to address your comments and included experimental results.\\n\\nWe would like to invite you to continue the discussion with us. We hope that our responses can successfully address your concerns so that you might consider a reevaluation of your rating.\\n\\nThanks and best regards,\\n\\nThe Authors\"}", "{\"title\": \"(2/2) Response to y5Jq\", \"comment\": \"> For the motion guidance, how the values of \\u03bb, \\u03c4 were chosen\\n\\nWe choose \\u03bb = 0.5 based on the settings of MotionClone, which applies motion guidance for the first half of the inference calculation. 
Our value of \\u03c4 = 500 is also based on MotionClone's choice and we found that slightly lowering its original value from \\u03c4 = 2000 to \\u03c4 = 500 leads to better performance and better training stability.\\n\\n> Can the author give more details on the dataset processing, like the needed computation resource?\\n\\nWe spend 400 GPU hours for the data preprocessing phase, which can be completed with approximately 2 days on a server with 8 NVIDIA A100 GPU (each GPU is of 40 GB memory).\\n\\n> If replace the base pre-trained video generation model with other models, can the T2V-Turbo-V2 method still achieve good results?\\n\\nThe original T2V-Turbo paper has demonstrated its performance when using both VideoCrafter2 and ModelScope as the teacher models. And we show that the technique of MotionClone can also be applied to ModelScope. And thus, we expect our T2V-Turbo-v2 can also achieve good results when using ModelScope as the base model.\"}", "{\"title\": \"Happy Thanksgiving!\", \"comment\": \"Dear Reviewer XVsE,\\n\\nOn this Thanksgiving, we would like to take the opportunity to express our heartfelt gratitude for your time and effort in providing valuable feedback on our work. Your insights have been truly invaluable in helping us refine and improve our submission.\\n\\nWe kindly invite you to follow up on the discussion regarding our work. Should you have any additional comments or concerns, please don\\u2019t hesitate to let us know\\u2014we are committed to addressing them to the best of our ability.\\n\\nThe Authors\"}", "{\"title\": \"Let's discuss!\", \"comment\": \"Dear Reviewer PqeK,\\n\\nThank you once again for your time and efforts in providing valuable feedback on our work. Your insights have been instrumental in helping us refine and improve our submission.\\n\\nWe would like to kindly invite you to follow up on the discussion regarding our work. If you have any additional comments or concerns, please don\\u2019t hesitate to let us know, and we will do our utmost to address them promptly.\\n\\nThe Authors\"}", "{\"title\": \"Looking forward to your response!\", \"comment\": \"Dear Reviewer N6o1,\\n\\nThank you again for serving as a reviewer! Your feedback has been valuable in helping us clarify and improve our work. We have worked diligently to address your comments and included experimental results.\\n\\nWe would like to invite you to continue the discussion with us. We hope that our responses can successfully address your concerns so that you might consider a reevaluation of your rating.\\n\\nThanks and best regards,\\n\\nThe Authors\"}", "{\"metareview\": \"This paper presents T2V-Turbo-v2, a method that enhances a diffusion-based text-to-video model by distilling a consistency model using high-quality training data, reward model feedback, and conditional guidance. The overall writing is good and easy to follow.\\n\\nSeveral reviewers raise questions about the limited novelty and more experiments, with mixed reviews. 
During the rebuttal and refined version, all the issues are solved, leading to borderline acceptance for all reviewers.\\n\\nThe area chair checks the rebuttal stage, questions, and responses, suggesting the acceptance of this work as a poster.\", \"additional_comments_on_reviewer_discussion\": \"All the issues are well solved during the long discussion stage.\"}", "{\"summary\": \"This paper presents T2V-Turbo-v2, a method that enhances a diffusion-based text-to-video model by distilling a consistency model using high-quality training data, reward model feedback, and conditional guidance. The approach achieves state-of-the-art performance on VBench, demonstrating improved text-video alignment and motion quality through tailored datasets, diverse reward models, and optimized guidance strategies\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The results appear promising and solid.\\n\\nThe experiments are thorough.\\n\\nThe writing is easy to follow.\", \"weaknesses\": \"The method combines existing techniques such as consistency distillation and motion guidance, so its novelty is somewhat limited.\\n\\nVideoCrafter uses a 2D+1D decoupled spatial-temporal approach, whereas most recent advanced methods employ full 3D attention. How would motion guidance be applied when using 3D attention?\\n\\nWhat is the peak training cost, such as peak GPU memory, compared to training a single model? How does it perform on less powerful video diffusion models like Zeroscope\\u2014can it still achieve results comparable to VideoCrafter?\", \"questions\": \"Please see the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"(1/2) Response to y5Jq\", \"comment\": \"We thank the reviewer for their positive feedback on our work. Please find our detailed responses below.\\n\\n> The model appears optimized for single-caption, short-context prompts, its ability to generate longer or more complex video context may be limited.\\n\\nWe would like to clarify the reviewer's misunderstanding. The limitation in generating longer or more complex video contexts stems from the text encoder used by the teacher diffusion model, not from our method. In this paper, we adopt VideoCrafter2 as our teacher model, which employs the CLIP text encoder with a context length of 77. This constraint is inherent to VideoCrafter2.\\n\\nHowever, this limitation can be easily overcome by using a more advanced teacher model, such as CogVideoX-5B or Mochi-1, both of which utilize the T5-XXL text encoder, supporting longer context lengths. Therefore, the limitation noted by the reviewer reflects a restriction of the teacher model (VideoCrafter2) rather than our proposed approach.\\n\\n> When generating the dataset for the motion guidance, it may require considerable computation resources\\n\\nWe acknowledge that generating the dataset for motion guidance requires additional computational resources. Our preprocessing takes about 400 GPU hours, which is indeed manageable as it can be completed in approximately 2 days on a server with 8 NVIDIA A100 GPU (each GPU is of 40 GB memory). Importantly, this additional preprocessing effort yields significant benefits in terms of motion quality and inference acceleration.\\n\\nIn comparison, the original MotionClone approach requires approximately 6 minutes to generate a single video, with 3 minutes spent on DDIM inversion from a reference video. 
Moreover, the peak GPU memory consumption can reach up to 35 GB due to the gradient calculations involved. In contrast, our model eliminates the need for identifying appropriate reference videos, conducting DDIM inversion, or calculating gradients during inference without adding any inference overhead compared to inference from a VCM. For example, our T2V-Turbo-v2 only takes 5 seconds to generate a video in 16 steps using BFloat16.\\n\\nThe efficiency and performance gains of our approach are clearly reflected in its results. Our model achieves the #1 ranking on VBench, surpassing numerous proprietary video generation systems, including Gen-3, Kling, and MiniMax. This demonstrates that the preprocessing trade-off is well-justified, offering both superior motion quality and significant reductions in inference time.\\n\\n> What are the evaluation datasets when comparing the T2V-Turbo-V2 with the SOTA methods (Table 1)?\\n\\nThe results in Table 1 are obtained by comparing our T2V-Turbo-V2 with the SOTA methods using the VBench datasets, which contain 946 unique prompts. We carefully follow VBench\\u2019s evaluation protocols by generating 5 videos for each prompt, as discussed in Lines 301 - 310.\\n\\n> I want to know the metrics of using OV + VG and OV + VG + WV (in Table 2 setting)\\n> \\n\\nWe thank the reviewer for the question. Below, we include the results for OV + VG and OV + VG + WV in Table 2 setting.\\n\\n| | **VCM (OV + VG)** | **VCM (OV + VG + WV)** | **T2V-Turbo-v2 w/o MG (OV + VG)** | **T2V-Turbo-v2 w/o MG (OV + VG + WV)** |\\n| --- | --- | --- | --- | --- |\\n| **Quality Score** | 83.43 | 82.95 | 83.86 | 82.35 |\\n| **Semantic Score** | 57.52 | 54.98 | 69.25 | 76.80 |\\n| **Total Score** | 78.25 | 77.36 | 80.94 | 81.24 |\\n\\nAs shown in the table, VCM achieves a high Quality Score on the OV + VG dataset, similar to training on pure OV data, but adding the lower-quality WV data slightly decreases this score. Conversely, incorporating reward feedback in T2V-Turbo-v2 significantly improves the Semantic Score for the OV + VG + WV dataset, while the gains for OV + VG remain comparatively moderate. These findings align with our discussion in the main text: RMs with short context lengths operate optimally on datasets with shorter captions, highlighting the importance of aligning dataset characteristics with RM capabilities. Furthermore, the results justify our decision to exclude OV data when training the main T2V-Turbo-v2 models, as incorporating OV into VG + WV datasets negatively impacts model performance.\\n\\n> why the author does not use the OV dataset?\\n\\nWe make our decision based on the performance when training on OV + WV. Specifically, VCM's Total Score when training on OV + WV (73.30) is worse than when training on OV (78.52) or WV (76.15). Similarly, our method's Total Score when training on OV + WV (81.00) barely improves the performance of training OV (80.97) and is worse than when training on WV (81.34) data. This phenomenon suggests a big domain gap between the OV and WV datasets. Thus, we do not use the OV dataset to train the main version of our method.\"}", "{\"title\": \"Looking forward to your response!\", \"comment\": \"Dear Reviewer dkgW,\\n\\nWe greatly appreciate your insightful feedback, which has significantly contributed to the clarity and enhancement of our work. We have carefully addressed your comments in our response, clarified potential misunderstandings, and explained the technical contributions of our paper. 
Additionally, we also included new experimental results to corroborate our findings.\\n\\nWe kindly invite you to revisit our paper in light of these updates and clarifications. We would greatly appreciate it if you could consider whether these changes warrant a reevaluation of your rating.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you to the authors for their responses. After thoroughly reviewing the rebuttals and considering the concerns raised by other reviewers, I still hold the opinion that this method heavily depends on MotionClone and is challenging to apply to 3D full transformers (as simply modifying the shape of attention maps does not effectively decouple spatial and motion aspects). However, I am willing to increase my score to 6 and leave the final decision to the area chair.\"}", "{\"summary\": \"This paper introduces T2V-Turbo-V2, aiming at improving the video quality and alignment with prompts by focusing the post-training phase. It distills a pre-trained T2V model using various supervision signals, including, 1. the reward models feedback from pre-trained vision language models (CLIP and InternV2) for both image and video levels. 2. the self-consistency loss used in many other distillation models, and injecting the classifier-free guidance and energy function into the self-consistency loss.\\nThis paper also optimize the data preprocessing and the reward feedback, allowing the T2V-Turbo-V2 to achieve the state-of-the-art results on the Bench and outperform previous models like VideoCrafter2, Gen-3, and Kling.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The model achieves high scores across multiple metrics and outperforms proprietary systems, demonstrating the effectiveness of proposed modules.\\n2. T2V-Turbo-V2 introduce an effective post-training process to enhance video quality and prompt alignment, and this process if architecture agnostic, potentially can be used on other pre-trained video generation models.\\n3. This paper provide detailed ablation studies on various factors like the dataset selection, reward model configurations, the effectiveness of motion guidance, and so on.\", \"weaknesses\": \"1. The model appears optimized for single-caption, short-context prompts, its ability to generate longer or more complex video context may be limited.\\n2. When generating the dataset for the motion guidance, it may require considerable computation resources.\", \"questions\": \"1. What are the evaluation datasets when comparing the T2V-Turbo-V2 with the SOTA methods (Table 1)?\\n2. Since both the OV and VG dataset contain the high visual quality data, and the Quality Score is high when only using OV, it seems that OV is a good dataset to improve the visual quality, I want to know the metrics of using OV + VG and OV + VG + WV (in Table 2 setting) and why the author does not use the OV dataset?\\n3. For the motion guidance, how the values of \\u03bb, \\u03c4 were chosen?\\n4. Can the author give more details on the dataset processing, like the needed computation resource?\\n5. 
If replace the base pre-trained video generation model with other models, can the T2V-Turbo-V2 method still achieve good results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"MotionClone can indeed be applied to transformers with full 3D attention!\", \"comment\": \"Dear Reviewer PqeK,\\n\\nThank you again for your insightful comments. We would like to comment further on MotionClone's applicability to transformers with full 3D attention.\\n\\nAs per MotionClone's authors' [comments](https://openreview.net/forum?id=aY3L65HgHJ&noteId=rxABkhmCT5), **MotionClone can indeed be applied to the latest DiT-based T2V model, e.g., CogVideoX**, in which MotionClone demonstrates effectiveness in training-free motion customization.\\n\\nWe hope it can address your concern about MotionClone's applicability!\\n\\nThanks and Best regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"(1/2) Response to Reviewer dKgW\", \"comment\": \"> This method is overly engineering-focused and lacks novelty, as the motion guidance and consistency distillation techniques involved are already established, making it appear less innovative.\\n\\nWe respectfully disagree with the reviewer's argument on our novelty. In terms of guidance strategy, the primary goal of our work is not to design a new energy function for motion guidance. Instead, we aim to empirically demonstrate that augmenting the teacher ODE solver with the energy function's gradients of a conditional guidance strategy can distill a more capable student video generator while significantly reducing inference costs. In this context, we showcase the potential of our approach by integrating MotionClone\\u2019s energy function into the teacher ODE solver.\\n\\nAdditionally, adapting MotionClone\\u2019s motion guidance techniques for training is significantly non-trivial for several reasons:\\n\\n- **Dependency on Reference Videos**: MotionClone relies on access to reference videos with high-quality motion. However, identifying suitable reference videos for general text prompts is challenging, which limits its effectiveness and applicability for generic video generation tasks.\\n- **High Computational Cost**: Computing the gradient of the energy function during inference incurs substantial memory and latency overhead. For instance, generating a single video with MotionClone can take approximately seven minutes and 30 GB GPU memory.\\n\\nTo address these challenges, we leverage the critical insight that each training video inherently serves as an ideal reference video for its corresponding training prompt. Additionally, we design a separate preprocessing phase to precompute the motion guidance before the actual training phase. As a result, this preprocessing phase eliminates the need for computationally expensive gradient calculations during training.\\n\\nAs demonstrated in Tables 1 and 4, augmenting the teacher ODE solver with motion guidance leads to significant performance gains and improved motion quality across different evaluation metrics.\\n\\n> Additionally, while it conducts extensive ablation experiments on motion guidance and reward models, this does not constitute a significant contribution of the paper. I am unclear about the paper's contributions; is it providing more interesting insights? 
It would be helpful if the authors could briefly summarize this in their response.\\n\\nWe firmly believe that our findings provide invaluable insights for advancing post-training research in video generation models. The choices of training data and reward models are arguably the most critical components in the post-training phase of a generative AI model. In our work, we conduct a rigorous and thorough empirical investigation into how these factors impact the performance of T2V models. A key finding of our study is that their effects are **not orthogonal**\\u2014the interaction between training datasets and RMs plays a pivotal role in shaping the final performance. Specifically, our Section 4.2, 4.3, and Appendix C.2 empirically demonstrate that **curating training datasets for different learning objectives is crucial for achieving optimal results**. To the best of our knowledge, this is the first work to systematically study how the selection of training data and RMs affects the post-training performance of a T2V model.\\n\\n> Regarding the contribution summary of the paper (L117), it seems to emphasize the advantages of existing work and the potential of extracting motion priors. And, I do not see any strong insights that stand out; motionclone has already demonstrated this fairly clearly. If the authors mean that motion priors are particularly useful during T2V training, they should provide more experiments. For example, training SVD and VideoCrafter2 shows that the insights presented in T2V-Turbo are quite limited.\\n\\nIn our paper, we perform rigorous and thorough ablation studies to investigate the impacts of two critical components\\u2014training data and reward models\\u2014and demonstrate how different design choices affect model performance during the post-training phase. Our findings reveal the surprising **coupled effects of training data and RMs**, offering **strong insights** that can guide future research on post-training strategies for video generation models.\\n\\nAdditionally, MotionClone only demonstrated its effectiveness during inference. Our work scales its application to the training phase by addressing two significant challenges: (1) its reliance on reference videos, and (2) the high computational cost of gradient calculations. Notably, our method is not confined to MotionClone\\u2019s motion guidance. It supports the integration of other energy functions into the teacher ODE solver, highlighting the vast design space for conditional guidance strategies. In this work, we use MotionClone\\u2019s energy function as a concrete example to showcase the potential of this approach, but its utility extends far beyond this specific application.\"}", "{\"title\": \"Follow up the discussion\", \"comment\": \"Dear Reviewer XVsE,\\n\\nThank you again for serving as a reviewer! Your feedback has been valuable in helping us clarify, improve, and refine our work. We have worked diligently to address your comments and included new human evaluation results.\\n\\nWe would like to invite you to continue the discussion with us. We hope that our responses can successfully address your concerns so that you might consider a reevaluation of your rating.\\n\\nThanks and best regards,\\n\\nThe Authors\"}", "{\"title\": \"Happy Thanksgiving!\", \"comment\": \"Dear Reviewer PqeK,\\n\\nOn this Thanksgiving, we would like to take the opportunity to express our heartfelt gratitude for your time and effort in providing valuable feedback on our work. 
Your insights have been truly invaluable in helping us refine and improve our submission.\\n\\nWe kindly invite you to follow up on the discussion regarding our work. Should you have any additional comments or concerns, please don\\u2019t hesitate to let us know\\u2014we are committed to addressing them to the best of our ability.\\n\\nThe Authors\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer y5Jq,\\n\\nThank you for your responses! If our responses have addressed your concerns, could you kindly increase the confidence score to better advocate the acceptance of our work?\\n\\nThanks and best regards,\\n\\nThe Authors\"}" ] }
BZrSCv2SBq
ADAM Optimization with Adaptive Batch Selection
[ "Gyu Yeol Kim", "Min-hwan Oh" ]
Adam is a widely used optimizer in neural network training due to its adaptive learning rate. However, because different data samples influence model updates to varying degrees, treating them equally can lead to inefficient convergence. To address this, a prior work proposed adapting the sampling distribution using a bandit framework to select samples adaptively. While promising, both the original Adam and its bandit-based variant suffer from flawed theoretical guarantees. In this paper, we introduce Adam with Combinatorial Bandit Sampling (AdamCB), which integrates combinatorial bandit techniques into Adam to resolve these issues. AdamCB is able to fully utilize feedback from multiple actions at once, enhancing both theoretical guarantees and practical performance. Our rigorous regret analysis shows that AdamCB achieves faster convergence than both the original Adam and its variants. Numerical experiments demonstrate that AdamCB consistently outperforms existing Adam-based methods, making it the first to offer both provable guarantees and practical efficiency for Adam with adaptive batch selection.
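To make the adaptive-batch-selection idea in the abstract concrete, below is a small illustrative sketch (not the paper's actual AdamCB algorithm): it biases mini-batch sampling toward samples with high current loss, importance-weights the resulting gradient so it remains an unbiased estimate of the full-data gradient, and feeds it into a standard Adam step. The toy least-squares problem, the 50/50 mix with the uniform distribution, the with-replacement sampling, and all hyperparameters are assumptions made purely for illustration.

```python
# Illustrative sketch only -- NOT the paper's AdamCB algorithm. The paper's method
# draws K distinct samples via a combinatorial bandit scheme; here that is
# simplified to loss-proportional sampling with replacement plus importance weights.
import numpy as np

rng = np.random.default_rng(0)
n, d, K, T = 256, 10, 32, 200            # samples, dimension, batch size, steps
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
theta = np.zeros(d)

m, v = np.zeros(d), np.zeros(d)          # Adam state
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

def per_sample_loss(theta):
    return 0.5 * (A @ theta - b) ** 2    # squared error of each training sample

def per_sample_grad(theta, idx):
    return (A[idx] @ theta - b[idx])[:, None] * A[idx]

for t in range(1, T + 1):
    # 1) Turn per-sample losses into a sampling distribution, mixed with uniform
    #    so every sample keeps a nonzero chance of being picked.
    losses = per_sample_loss(theta)
    p = 0.5 * losses / losses.sum() + 0.5 / n
    # 2) Draw a mini-batch of K indices according to p (with replacement here).
    idx = rng.choice(n, size=K, replace=True, p=p)
    # 3) Importance weights 1/(n*p) keep the batch gradient an unbiased estimate
    #    of the full-data mean gradient despite the non-uniform sampling.
    w = 1.0 / (n * p[idx])
    g = (w[:, None] * per_sample_grad(theta, idx)).mean(axis=0)
    # 4) Standard Adam update using the importance-weighted gradient.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("final mean loss:", per_sample_loss(theta).mean())
```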
[ "ADAM", "Combinatorial Bandit", "Importance Sampling", "Mini-Batch", "Optimization", "Regret Minimization" ]
Accept (Poster)
https://openreview.net/pdf?id=BZrSCv2SBq
https://openreview.net/forum?id=BZrSCv2SBq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pi8uruxHHM", "ovBlSXdz5i", "mI32lXg1u9", "m8pK5HSpoA", "lRUouTkIG9", "hloSgx6On9", "hboindyR5A", "c8smyOX2MM", "YeboGKlmxy", "YJZW5gNg15", "VxjiTIQfkF", "R2N0AST88b", "HhYwPzGeEP", "HaDEYhDrh8", "5pYTk7pmPE", "4lkniaTJi0" ], "note_type": [ "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732271348558, 1737524017205, 1732613695143, 1734951491143, 1730389876160, 1732278087793, 1730643832026, 1732608202579, 1732278304634, 1732551216974, 1732982147852, 1730609425762, 1732270822736, 1732278455369, 1732270680977, 1732672796598 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Area_Chair_Vheo" ], [ "ICLR.cc/2025/Conference/Submission9967/Reviewer_fG67" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Reviewer_797D" ], [ "ICLR.cc/2025/Conference/Submission9967/Reviewer_7DWV" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Reviewer_fG67" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Reviewer_7DWV" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Authors" ], [ "ICLR.cc/2025/Conference/Submission9967/Reviewer_797D" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for taking the time to review our paper and for your thoughtful and valuable feedback. We appreciate your positive recognition of our work and the constructive comments you have provided. Below, we address each of your comments and questions in detail:\\n\\n---\\n\\n**[W] Applicability of the Method in Real-World Applications**\\n\\nWe would like to clarify that our paper primarily focuses on **optimization with adaptive batch selection using a bandit method**, and our contributions are centered on this specific optimization framework. While the reviewer raises important questions regarding the type of task, model dependence (e.g., regularization techniques like weight decay and dropout), and data-related concerns (e.g., noise and augmentation), we emphasize that the proposed method is **task-agnostic and model-independent**, as long as the loss function can be computed.\\n\\nBelow, we specifically address each aspect while reiterating the broader applicability of our method:\\n\\n1. **Type of Task:** The method is not restricted to any particular task or domain. It dynamically adapts to the task at hand by leveraging the computed loss and gradient norms, making it applicable to supervised learning, semi-supervised learning, and even self-supervised learning setups.\\n2. **Model Dependence (Regularization Techniques):** Our approach does not interfere with standard or task-specific regularization techniques such as weight decay, dropout, or total variation loss. These regularizations function independently of how batches are constructed. Our method focuses solely on selecting the most informative samples for efficient optimization, complementing these regularization strategies.\\n3. 
**Data Samples (Noise and Augmentation):** Regarding data augmentation, the approach is fully compatible and can incorporate augmented samples effectively without biasing the training process, provided the augmentation strategy is balanced.\n\nIn summary, while the reviewer raises interesting questions, the scope of our paper is the general **optimization method** rather than the intricacies of specific tasks, models, or data manipulations. We hope this clarification resolves any questions and underscores the broad applicability of our combinatorial bandit-based (adaptive batch selection) optimization method.\n\n\n---\n\n### **Answers to Questions**\n\n**[Q1] Semi-Supervised learning**\n\nOnce a specific loss function is defined, the feedback is derived from the gradient of that loss function. Therefore, even in semi-supervised learning settings, **as long as a loss can be computed,** our proposed method can be applied. For unlabeled data, **pseudo-labeling** techniques could be used to assign temporary labels, **allowing the computation of a loss**. This enables our adaptive sampling approach to function effectively, regardless of whether the setting is fully supervised or semi-supervised.\n\n---\n\n**[Q2] Extra regularizations**\n\nOur method does not conflict with sample-agnostic regularization techniques such as weight decay or dropout, as these **operate independently of batch construction**. For regularizations related to the output, such as total variation loss, the adaptive sampling method would indirectly align by prioritizing samples that lead to higher gradient norms, which often correlate with areas requiring regularization (e.g., regions with high variance). The flexibility of our approach ensures compatibility with a wide range of regularization techniques.\n\n---\n**[Q3] Adversarial Samples/Noises**\n\nOur method does not inherently prioritize noisy samples unless they consistently produce high gradient norms over multiple iterations. In practice, gradient norms for noisy samples often decrease once the model learns to ignore the noise, reducing their selection probability. However, for datasets with substantial noise, adding noise-robust loss functions or noise-detection mechanisms can complement our method to prevent overfitting to such samples.\n\n---\n**[Q4] Data augmentation**\n\nData augmentation can indeed affect the adaptive sampling process, as augmented samples often exhibit strong similarity to original samples. Our method could be adapted to treat augmented samples as lower-priority if they exhibit lower gradient relevance, thereby ensuring that the model focuses on diverse samples. Alternatively, adaptive sampling could prioritize augmented samples when they introduce new informative patterns.\n\n---\n**[Q5] Comparison with Other Sampling Methods (e.g., DoReMi)**\n\nOur method leverages a combinatorial bandit framework focused on adaptive batch selection, which is theoretically grounded in gradient-based relevance. In contrast, reinforcement learning-based methods like DoReMi optimize data mixtures with LLMs as the specific use case. There is a clear difference between the two methods. 
We are confident that its ability to prioritize informative samples enhances efficiency in many large-scale applications.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for their continued discussion and for assigning a positive score.\\n\\nHowever, there appears to be a misunderstanding in the reviewer's comment regarding the performance gap between AdamCB and Adam, as our key results indicate the opposite of what the reviewer's comment suggests. We sincerely hope the reviewer recognizes our genuine intention to ensure that the evaluations are based on what our results truly represent. To this end, we are happy to provide further clarification on this point.\\n\\nAs explained in our previous response (see the section \\\"In high dimension\\\"), **the performance gap between AdamCB and Adam does NOT diminish as $d$ increases**. On the contrary, **the performance gap increases with $d$** with $O(\\\\sqrt{d})$ -- the gap is given by: \\n\\n$$\\n\\\\mathcal{O}\\\\left(\\\\frac{\\\\sqrt{d}}{n^{1/2}}\\\\sqrt{T}\\\\right) - \\\\mathcal{O}\\\\left(\\\\frac{\\\\sqrt{d}}{n^{3/4}}\\\\left(\\\\frac{T}{K}\\\\ln{\\\\frac{n}{K}}\\\\right)^{1/4}\\\\right).\\n$$ \\n\\nThis reflects the fact that AdamCB retains its comparative advantage even in high-dimensional settings (or for any $d$, both in low- and high-dimension). We had clearly stated this above and the reviewer may refer to our response for details (e.g., the distinction between regret compared to the optimality and comparative advantage of AdamCB over Adam).\\n\\nTo further illustrate, consider the example you provided. The relative difference in regret between Adam and AdamCB (**the larger this value, the greater the advantage of AdamCB**) is shown below:\\n\\n- **89 million parameters, training examples $n = 50,000$, iterations $T = 1000$, batch size $K = 128$** \\n$$\\nC \\\\cdot \\\\left( \\\\sqrt{\\\\frac{89,000,000}{50,000}}\\\\sqrt{1000} - \\\\frac{\\\\sqrt{89,000,000}}{50,000^{3/4}}\\\\left(\\\\frac{1000}{128} \\\\ln{\\\\frac{50,000}{128}}\\\\right)^{1/4} \\\\right) \\\\approx \\\\mathbf{1328} \\\\cdot C\\n$$\\n\\n- **89 million parameters, training examples $n = 50,000$, iterations $T = 10,000$, batch size $K = 128$** \\n$$\\nC \\\\cdot \\\\left( \\\\sqrt{\\\\frac{89,000,000}{50,000}}\\\\sqrt{10,000} - \\\\frac{\\\\sqrt{89,000,000}}{50,000^{3/4}}\\\\left(\\\\frac{10,000}{128} \\\\ln{\\\\frac{50,000}{128}}\\\\right)^{1/4} \\\\right) \\\\approx \\\\mathbf{4208} \\\\cdot C\\n$$\\n\\n- **198 million parameters, training examples $n = 50,000$, iterations $T = 10,000$, batch size $K = 128$** \\n$$\\nC \\\\cdot \\\\left( \\\\sqrt{\\\\frac{198,000,000}{50,000}}\\\\sqrt{10,000} - \\\\frac{\\\\sqrt{198,000,000}}{50,000^{3/4}}\\\\left(\\\\frac{10,000}{128} \\\\ln{\\\\frac{50,000}{128}}\\\\right)^{1/4} \\\\right) \\\\approx \\\\mathbf{6276} \\\\cdot C\\n$$ \\nwhere $C > 1$ is some constant. (By the way, this gap is in cumulative sense as the regret was defined in cumulation.) These calculations demonstrate that **AdamCB maintains a significant comparative advantage over Adam** in all cases. \\n\\nFurthermore, **this advantage is observed not only in theory (as shown in the regret bound) but also in experiments**. 
Please see **Figure 8** and **Figure 9** in Appendix G for experimental results using ConvNext models, where AdamCB consistently maintains its comparative advantage over Adam.\\n\\nWe sincerely hope this clears up any misunderstanding, and we would be happy to provide further clarifications if needed. In light of this clarification (and given our earnest effort to address your feedback comprehensively), we kindly and respectfully ask the reviewer to reconsider the rating.\"}", "{\"metareview\": \"The paper introduces AdamCB, an extension of the Adam optimizer using combinatorial bandit-based adaptive batch selection. It rigorously analyzes convergence guarantees for Adam, AdamBS, and AdamCB, correcting errors in prior proofs. The reviewers highlighted the paper's rigorous theoretical contributions, addressing fundamental issues in Adam's convergence and clear writing. Key criticisms included limited experiments on large-scale models and real-world scenarios, and that there is a lot of overlap in some of the formal analysis with Tran et al, which is only mentioned in the appendix. To me, the biggest criticism is that the paper\\u2019s experimental evaluation is minimal: no discussion of (decoupled) weight decay (or extension to a \\u201eAdamWCB\\u201c), only convolutional networks, no language models, no diffusion, and only using a single fixed hyperparameter setting for all methods (where different methods sometimes require different default hyperparameters; especially AdamBS often diverges, and I assume it would require a lower learning rate). As such, this is a strong theory paper, but its practical relevance will have to be determined in the future.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer 797D's concerns around prior knowledge and uniform sampling were resolved, leading to a score increase.\\nSome of the other two reviewers' concerns were addressed, but they both remained skeptical about real-world applicability / large-scale applications.\"}", "{\"summary\": \"This paper proposes an extension of the Adam optimizer by integrating adaptive batch selection. It also identifies a flaw in the proof presented in previous papers. Based on extensive theoretical analysis and some experiments, the proposed method demonstrates better convergence and improved performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper clearly points out the flaw in the proof of previous papers.\\n2. This paper provides a new convergence rate of Adam/AdamBS in the new perspective.\\n3. The results on some simple datasets are better than baselines.\", \"weaknesses\": \"I appreciate the theoretical contribution, and my major concern is the applicability of this method in real-world applications. Since the optimizer is one of the most fundamental components of the entire machine learning process, I wonder if this method can be directly integrated into existing practices. Additionally, I question whether the performance remains robust given the extra effort required to assign sampling bias towards certain samples during batch construction. Please see the question part for more details.\", \"questions\": \"1. **Semi-supervised learning**: Semi-supervised learning is a popular machine learning task where the testing dataset is given without labels. In this context, I wonder if the adaptive sampling method can still assign weights to both training and unlabeled testing samples.\\n\\n2. 
**Extra regularizations**: Many regularization techniques are sample-agnostic, such as weight decay and dropout. Sometimes, regularization is only related to the output instead of the input, for example, using a total-variation loss to encourage smoothness in image generation. How does the adaptive sampling method work in these cases?\\n\\n3. **Adversarial samples/noises**: The paper mentions that \\\"samples with a low gradient norm are assigned a low weight, whereas samples with larger gradient norms are more likely to be chosen in future iterations.\\\" What if there is noise in the dataset, which is common in real-world datasets? Is it beneficial for the model to learn more from noisy samples that are difficult to fit?\\n\\n4. **Data augmentation**. Does this method compatible with data augmentation? The augmented data will share strong similarity among samples. What should be considered when using data augmentation? Will the training be biased in an unexpected manner?\\n\\n5. **Comparison with other methods for data sampling**: Given the popularity of large language models (LLMs), it is important to fine-tune/pretrain LLMs with a wise combination of data. Is there any potential to demonstrate this method in such a setting? How does this method compare with currently adopted reinforcement learning-based methods for sampling, such as \\\"DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining, NeurIPS 2023\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for taking the time to review our paper and for your thoughtful and valuable feedback. We appreciate your positive recognition of our work and the constructive comments you have provided. Below, we address each of your comments and questions in detail:\\n\\n---\\n**[W1] Experiments and practical relevance**\\n\\nWe would like to emphasize that the primary focus of our work is to propose a\\u00a0**provably efficient and practical optimization algorithm**\\u00a0that addresses longstanding inefficiencies\\u2014and even incorrectness\\u2014in Adam-based methods. AdamCB is a general optimization algorithm with convergence guarantees, making it applicable not only to large models but also to a wide range of models and optimization tasks. We sincerely hope that this fundamental focus is appropriately considered, as our algorithm is not solely designed for large models such as LLMs.\\n\\nThat said, in response to the reviewer's feedback, we have performed and included additional experiments with larger-scale models such as ResNet-18 (11.4 million parameters), ConvNeXt-Base (89 million parameters), and ConvNeXt-Large (198 million parameters), in addition to the previously included experiments with MLPs, CNNs, and the VGG network (Figure 5). These new results are presented in the supplementary material (in Appendix G) of the updated manuscript, with some snapshots shown in the tables below (all results are test errors on CIFAR-10). 
These experiments clearly demonstrate that AdamCB consistently outperforms baselines such as Adam and AdamBS, even for models with larger parameter counts and higher complexity.\\n\\n\\n**ResNet-18** (11.4 million parameters)\\n\\n| Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Adam | 2.913$\\\\pm$ 0.277 | 2.646$\\\\pm$ 0.151 | 2.657$\\\\pm$ 0.126 | 2.718$\\\\pm$ 0.146 | 2.701$\\\\pm$ 0.221 | 2.633$\\\\pm$ 0.175 | 2.649$\\\\pm$ 0.207 | 2.820$\\\\pm$ 0.465 | 2.514$\\\\pm$ 0.159 | 2.449$\\\\pm$ 0.302 |\\n| AdamBS | 3.733$\\\\pm$ 1.000 | 4.184$\\\\pm$ 1.316 | 4.262$\\\\pm$ 1.023 | 4.261$\\\\pm$0.855 | 4.528$\\\\pm$1.134 | 4.619$\\\\pm$1.246 | 4.172$\\\\pm$0.825 | 4.643$\\\\pm$1.239 | 4.111$\\\\pm$ 1.026 | 4.763$\\\\pm$1.316 |\\n| AdamCB | 4.688$\\\\pm$2.003 | 2.607$\\\\pm$0.225 | 2.587$\\\\pm$0.251 | 2.214$\\\\pm$0.153 | 2.143$\\\\pm$ 0.042 | 2.255$\\\\pm$0.156 | 2.185$\\\\pm$ 0.175 | 2.135$\\\\pm$0.107 | 1.975$\\\\pm$ 0.057 | 1.978$\\\\pm$0.289 |\\n\\n\\n**ConvNext-base** (89 million parameters)\\n\\n| Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Adam | 0.761 $\\\\pm$ 0.040 | 0.691$\\\\pm$ 0.043 | 0.673$\\\\pm$ 0.017 | 0.612$\\\\pm$0.022 | 0.618$\\\\pm$0.037 | 0.573$\\\\pm$0.044 | 0.570$\\\\pm$ 0.057 | 0.608$\\\\pm$0.016 | 0.601$\\\\pm$ 0.012 | 0.544$\\\\pm$0.027 |\\n| AdamBS | 0.665$\\\\pm$0.038 | 0.520$\\\\pm$ 0.010 | 0.493$\\\\pm$ 0.029 | 0.798$\\\\pm$ 0.406 | 1.108$\\\\pm$ 0.829 | 2.082$\\\\pm$ 0.752 | 2.163$\\\\pm$0.802 | 2.050$\\\\pm$0.722 | 2.229$\\\\pm$0.856 | 2.160$\\\\pm$ 0.834 |\\n| AdamCB | 0.624$\\\\pm$0.005 | 0.458$\\\\pm$0.005 | 0.434$\\\\pm$0.028 | 0.400$\\\\pm$0.025 | 0.374$\\\\pm$0.013 | 0.345$\\\\pm$0.019 | 0.323$\\\\pm$0.010 | 0.328$\\\\pm$0.017 | 0.325$\\\\pm$0.019 | 0.312$\\\\pm$0.026 |\\n\\n\\n**ConvNext-large** (198 million parameters)\\n\\n| Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Adam | 0.722$\\\\pm$0.038 | 0.582$\\\\pm$ 0.041 | 0.532$\\\\pm$0.049 | 0.544$\\\\pm$0.018 | 0.525$\\\\pm$0.011 | 0.519$\\\\pm$0.045 | 0.553$\\\\pm$0.037 | 0.490$\\\\pm$0.021 | 0.471$\\\\pm$0.039 | 0.451$\\\\pm$0.020 |\\n| AdamBS | 0.589$\\\\pm$0.030 | 0.423$\\\\pm$0.010 | 0.386$\\\\pm$0.020 | 0.386$\\\\pm$0.029 | 0.597$\\\\pm$0.374 | 0.729$\\\\pm$0.561 | 0.734$\\\\pm$0.573 | 2.218$\\\\pm$1.873 | 2.317$\\\\pm$2.017 | 3.227$\\\\pm$1.475 |\\n| AdamCB | 0.538$\\\\pm$0.023 | 0.403$\\\\pm$0.013 | 0.386$\\\\pm$0.010 | 0.364$\\\\pm$0.009 | 0.338$\\\\pm$0.018 | 0.304$\\\\pm$0.013 | 0.274$\\\\pm$0.009 | 0.281$\\\\pm$0.015 | 0.264$\\\\pm$0.006 | 0.281$\\\\pm$0.031 |\\n\\nThe inclusion of new experiments on these larger models complements the MLP, CNN, VGG network results already presented in the main paper, bridging the gap between simpler architectures and larger models. This expanded evaluation further substantiates the effectiveness and scalability of AdamCB across a diverse range of architectures. For more details, please refer to Appendix G in the updated manuscript.\\n\\nWith these new results, alongside the findings already presented both in theory and experiments, we strongly believe that the efficacy of our method is well supported. Furthermore, our core contribution\\u2014the provable efficiency of AdamCB\\u2014is further strengthened by this comprehensive evaluation. 
We are also more than willing to conduct additional evaluations, time permitting, to further strengthen our results. We sincerely and respectfully request the reviewer to recognize the potential impact of our work in light of our main contributions and the additional evidence provided.\"}", "{\"summary\": \"This paper introduces the AdamCB algorithm, aiming to address inefficiencies in the Adam optimizer by improving data sampling. AdamCB integrates combinatorial bandit techniques, enabling adaptive batch selection to focus on informative samples. The authors claim enhanced theoretical guarantees, providing a rigorously analyzed regret bound that surpasses those of standard Adam and its bandit-based variant, AdamBS, which are identified here as having flawed guarantees. Experimental results across diverse datasets demonstrate the practical benefits of AdamCB, indicating faster convergence and greater efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper rigorously addresses and corrects the theoretical flaws in convergence guarantees for both Adam and AdamBS, presenting refined proofs that offer independent value to the community.\\n2. The proposed AdamCB algorithm is shown to be both theoretically robust and empirically effective, with rigorous analysis and extensive experimental validation.\\n3. The paper presents the method and supporting claims clearly, facilitating reader comprehension and enhancing accessibility of the technical content.\\n4. Through a fair and thorough comparison, this paper evaluates AdamCB alongside Adam and AdamBS, incorporating corrected theoretical guarantees to provide a balanced assessment.\", \"weaknesses\": \"1. The paper's motivation could benefit from further clarification and depth. In the abstract and introduction, the authors state that uniform sampling leads to inefficiencies in Adam, but they should specify the type of inefficiency (e.g., memory, computational, time, or convergence efficiency; presumably the latter). Additionally, the authors should provide evidence or discussion showing that alternative sampling methods indeed improve Adam's efficiency, strengthening the case for the proposed approach.\\n2. Algorithm 3 requires prior knowledge of $L$ for the weight update rule, which may be limiting in practical applications. It would be valuable to discuss potential ways to relax this requirement or clarify how the authors manage this constraint in practice.\", \"questions\": \"1. In the weight adjustment section, it is unclear why AdamCB requires the sum of probabilities to equal $K$ instead of 1. This choice appears to necessitate an additional operation in the sampling strategy, specifically the introduction of a threshold $\\\\tau$. Clarification on the rationale behind this requirement would be beneficial.\\n2. The paper could further explain why increasing the sample size $n$ leads to faster convergence in AdamCB. A theoretical or intuitive justification for this relationship would strengthen the understanding of the algorithm\\u2019s convergence behavior.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Post Author Response\", \"comment\": \"I appreciate the responses to my questions but I will maintain my score of 6.\\n\\nFor large networks, it's clear the $d\\\\sqrt{T}$ term dominates. 
Take for example the ConvNext-base model with 89 million parameters and cifar 10 dataset with 50k training examples. For Adam, the ratio of the second term to the first term is 4.7e-7, and even reducing this to 0 may have a negligible impact.\"}", "{\"comment\": \"**[W2] In high dimension**\\n\\nWe appreciate the reviewer\\u2019s comment and are happy to clarify any potential misunderstandings. The convergence guarantee shown in our paper is a regret comparison between the true optimal solution and the algorithm's performance, where the first term, $\\\\mathcal{O}(d \\\\sqrt{T})$, represents the leading term. But, that is about the difference between the performance of an optimization algorithm and the optimality.\\n\\nHowever, if the reviewer\\u2019s point pertains to the \\\"marginal improvement of adaptive selection\\\" (as it appears from the comment), this requires examining the difference between adaptive and non-adaptive (uniform) sampling strategies.\\n\\nAs illustrated in Table 1, AdamCB achieves a convergence rate of\\n$\\\\mathcal{O}\\\\left(d\\\\sqrt{T} + \\\\frac{\\\\sqrt{d}}{n^{3/4}}\\\\left(\\\\frac{T}{K}\\\\ln{\\\\frac{n}{K}}\\\\right)^{1/4}\\\\right)$,\\nwhile corrected Adam (with uniform sampling) achieves a convergence rate of\\n$\\\\mathcal{O}\\\\left(d\\\\sqrt{T} + \\\\frac{\\\\sqrt{d}}{n^{1/2}}\\\\sqrt{T}\\\\right)$.\\nAs shown in the regret analysis, the constant factor in the leading term $\\\\mathcal{O}(d \\\\sqrt{T})$ is the same for both AdamCB and corrected Adam. Therefore, the regret gap arises from the difference in the second terms:\\n$\\\\mathcal{O}\\\\left(\\\\frac{\\\\sqrt{d}}{n^{1/2}}\\\\sqrt{T}\\\\right) - \\\\mathcal{O}\\\\left(\\\\frac{\\\\sqrt{d}}{n^{3/4}}\\\\left(\\\\frac{T}{K}\\\\ln{\\\\frac{n}{K}}\\\\right)^{1/4}\\\\right)$.\\nThis gap becomes larger as $d$ or $T$ increases, highlighting the greater comparative improvement provided by AdamCB in high-dimensional settings. This behavior is both theoretically demonstrated and empirically validated in our experiments with neural network models, as presented in the experiments of Appendix G.\\n\\nWe sincerely thank the reviewer for raising this question, as it provides an opportunity to clarify this point in detail. However, we respectfully disagree with the notion (if the implication was intended) that there is a diminishing marginal improvement of adaptive selection in high dimensions. The theoretical results in our paper clearly demonstrate that adaptive selection retains its advantages in both low- and high-dimensional settings.\\nAdditionally, we have reviewed the discussion in the AdamBS paper mentioned by the reviewer. We believe that the discussion there does not necessarily align with the theoretical results in our work as well as the comparison mentioned above, and as shown in our analysis, the results in AdamBS themselves are invalid. We would be more than happy to include this discussion in the revised manuscript to further highlight our contributions.\\n\\nIn conclusion, adaptive selection consistently provides a distinct advantage in both low- and high-dimensional settings, as demonstrated by our theoretical analysis and empirical results.\\n\\n---\\n**[W3, Q2] LLM pretraining**\\n\\nWe appreciate the reviewer\\u2019s point regarding the applicability of our method to settings such as LLM pretraining, where the dataset may only be swept a few times. However, our method remains valuable in such scenarios. 
Each gradient update during LLM pretraining is computationally expensive, and any unnecessary gradient computation becomes significantly more wasteful compared to smaller models.\\nEven in cases where the dataset is seen only a limited number of times, the adaptive batch sampling introduced by our method can still provide meaningful efficiency improvements by prioritizing the most informative gradients. Exploring the full potential of this adaptive strategy in LLM pretraining contexts is indeed an intriguing direction for future work. That said, we respectfully believe this aspect should not be considered a weakness of our work as a general optimization method.\\n\\n---\\n**[W4] On Tran et al. 2019**\\n\\nWe would like to clarify that while some techniques from Tran et al. 2019 are utilized to address the technical error in the original Adam analysis, we have explicitly acknowledged and properly credited Tran et al. 2019 in our work.\\n\\nIt is important to emphasize that the primary goal of our research is not to fix errors in the analysis of Adam (or AMSGrad). Rather, the main theoretical contribution of our work lies in **designing and proving the provable efficiency of adaptive batch sampling** (leveraging a combinatorial bandit approach) for Adam-based optimization. This contribution is independent of Tran et al. 2019.\\n\\nOur theoretical improvement in convergence efficiency, achieved through the rigorous integration of a combinatorial bandit framework into Adam optimization, is a core novelty of our work. This innovation is not present in Tran et al. 2019 and represents a significant advancement in the field.\\n\\nTherefore, the novelty of our work is not detracted in any sense, as it addresses a distinct problem and provides contributions that go beyond the scope of Tran et al. 2019. We are happy to clarify this.\"}", "{\"comment\": \"Thanks for the detailed response! Thus, I maintain my score.\"}", "{\"comment\": \"Thank you for your support and for recognizing the contributions of our work. Your feedback has been invaluable in helping us improve our paper.\"}", "{\"summary\": \"This paper proposes a variant of the Adam optimizer with a combinatorial bandit approach (AdamCB) for adaptively selecting batches to train on. Using a combinatorial bandit to select a batch of examples addresses the limitation of a prior approach called Adam with Bandit Sampling (AdamBS) which failed to improve performance with larger batch size due to myopic approach of selecting a single sample at a time with replacement. Another core contribution of the paper is new theoretical analysis of Adam, AdamBS, and the proposed AdamCB that relaxes prior assumptions as well as fixes errors in the prior proofs. These convergence guarantees show AdamCB to have faster convergence than Adam and AdamBS and faster convergence with increasing batch size. 
Experimental studies show AdamCB to outperform Adam and AdamBS on small scale MLP and CNN tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The primary strengths of this paper are as follows:\", \"The writing is clear and easy to understand.\", \"The assumptions required for the convergence analysis is more general than previous work and the theory also covers Adam and AdamBS.\", \"The paper identifies and addresses incorrect assumptions made in the analysis of AdamBS.\"], \"weaknesses\": [\"The experiments are fairly limited in scale and not very reflective of practical settings that we are in these days with large models and large datasets.\", \"The benefits of adaptive selection will likely be limited in settings with large model dimensionality since then the $d\\\\sqrt{T}$ term will dominate the $\\\\sqrt{d}/n^{3/4}T^{3/4}$ term controlled by adaptive selection. The potentially marginal improvement of adaptive selection is not discussed as far as I can tell in the main paper (it is discussed in the AdamBS paper).\", \"It is unclear how useful bandit selection will be in settings where we see the entire dataset just a few times as with LLM pretraining.\", \"It seems like the theoretical analysis has a lot of overlap with Tran et al. 2019 but this is mainly mentioned in the Appendix and not stated in the main text. It also detracts from the novelty of the theoretical analysis.\"], \"questions\": \"The convergence rate of Adam, Adam with Bandit Selection, and the proposed Adam with Combinatorial Bandits are all dominated by the $d\\\\sqrt{T}$ term. In practice, what is the expected speedup of using combinatorial bandit in training settings with >100 million parameters?\\n\\nIn the LLM setting, not only are models over 100 billion parameters in some cases, we also rarely loop through the full dataset if at all during pretraining. What benefits if any do you expect adaptive batch selection to provide in this setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **Answers to Questions**\\n---\\n\\n**[Q1] Clarification on Setting $\\\\sum_{i=1}^{n} p_{i,t}=K$.**\\n\\n- **Why $\\\\sum_{i=1}^{n} p_{i,t}=K$ instead of $\\\\sum_{i=1}^{n} p_{i,t}=1$?**\\n\\nWe thank the reviewer for providing us with the opportunity to clarify this point. AdamCB uses a **combinatorial bandit framework** to sample multiple arms (samples) without replacement in each mini-batch. Unlike single-arm selection bandit algorithms like AdamBS, where$\\\\sum_{i=1}^{n}p_{i,t}=1$ because only one arm is selected at a time, AdamCB must select $K$ simultaneously for a mini-batch. Therefore, it is natural to scale the sum of the probabilities to $K$, reflecting the expected number of samples selected in each round. \\n\\nIf the sum of probabilities were constrained to 1, the algorithm would need to perform additional rescaling or sampling adjustments to ensure $K$ samples are drawn, which would unnecessarily complicate the sampling process. Instead, directly setting $\\\\sum_{i=1}^{n}p_{i,t}=K$ aligns the probability distribution with the batch-level selection requirements. 
By setting $\\\\sum_{i=1}^{n} p_{i,t}=K$, AdamCB **simplifies the sampling process** and ensures compatibility with mini-batch training.\\n\\n- **Rationale for the Threshold $\\\\tau$**\\n\\nAllowing the sum of probabilities to equal $K$ can lead to individual probabilities $p_{i,t}$ exceeding 1, especially when certain samples are assigned significantly higher weights due to their importance or gradient magnitude. To ensure valid probabilities and prevent any sample from being overrepresented, AdamCB introduces a threshold $\\\\tau$. If a sample's probability $p_{i,t}$ exceeds $\\\\tau$:\\n\\n1. Its index is added to a null set $S_{null,t}$, effectively removing it from active consideration for selection.\\n2. The probabilities of the remaining samples are adjusted to redistribute the excess weight while ensuring the sum of probabilities remains $K$.\\n\\nThis adjustment ensures that no single sample dominates the mini-batch while maintaining the proportional relationship between the weights $w_{i,t}$ and the probabilities $p_{i,t}$. We again thank the reviewer for the chance to clarify this key aspect.\\n\\n---\\n**[Q2] Relationship Between Sample Size $n$ and Convergence in AdamCB**\\n\\nWe thank the reviewer for their thoughtful feedback, which provides us with an opportunity to enhance the understanding of our algorithm\\u2019s convergence behavior by further explaining the relationship between the sample size $n$ and the faster convergence of AdamCB. \\n\\n**For theory**, the second term of the regret bound of AdamCB (Theorem 1) is given as, $(\\\\sqrt{d} / n^{3/4}) \\\\left( (T/K) \\\\ln{(n/K)}\\\\right)^{1/4}$. From this term, it is evident that the regret decreases as $n$ increases. This implies that larger $n$ leads to smaller regret, improving the algorithm\\u2019s convergence rate.\\n\\n**For intuitions,** increasing the sample size $n$ expands the pool of data samples from which mini-batches are drawn. This broader pool allows AdamCB (as well as other algorithms albeit slower convergence rate) to access a wider range of informative samples during each update step. As a result, the selected mini-batches are more representative of the overall data distribution, **reducing variance in the gradient estimates** and leading to more accurate updates. This relationship underscores the advantage of AdamCB in leveraging large datasets to achieve faster convergence, making it particularly effective in modern machine learning tasks with abundant data.\"}", "{\"comment\": \"### **Answers to Questions**\\n---\\n**[Q1]** As detailed in our response to the [W2] comment, we again emphasize that the comparative advantage of AdamCB arises from the second-order terms, Importantly, as demonstrated in our theoretical analysis and experiments, this comparative advantage **does not diminish**, even in high-dimensional settings with a large number of parameters (e.g., > 100 million).\\n\\nIn practice, the speedup achieved by AdamCB extends beyond theoretical guarantees. In our newly included experiments with large-scale models\\u201489 million parameters (ConvNext-base) and 198 million parameters (ConvNext-large)\\u2014AdamCB consistently outperforms both Adam and Adam with Bandit Selection (AdamBS) in terms of convergence efficiency. 
This improvement is attributed to AdamCB\\u2019s adaptive batch selection mechanism, which prioritizes informative gradients, accelerating convergence compared to the uniform sampling used in Adam or the single-arm selection strategy in AdamBS.\\nThese experimental results underscore that AdamCB effectively leverages adaptive selection to enhance training efficiency. This aligns with our theoretical results.\\n\\nIn conclusion, the advantages of AdamCB, both theoretical and empirical, remain significant irrespective of model size or dimensionality. The additional experiments as well as the already presented results provide compelling evidence of its effectiveness across both small- and large-scale settings.\\n\\n**[Q2]** Answered along with the response to [W3]\"}", "{\"comment\": \"Thank you for taking the time to review our paper and for your thoughtful and valuable feedback. We deeply appreciate your recognition of our work and the constructive comments you have provided. Below, we address each of your comments and questions in detail:\\n\\n---\\n\\n**[W1] Clarification Regarding Inefficiency**\\n\\nWe thank the reviewer for the opportunity to clarify this point. The inefficiencies of Adam (with uniform sampling) can be categorized into two main issues:\\n\\n1. **Convergence Inefficiency**: As explained in Section 2.4.2, the analysis of Adam reveals a technical error in the original framework, preventing it from providing convergence guarantees. This issue implies that Adam can potentially diverge under certain conditions.\\n2. **Algorithmic Limitation with Uniform Sampling**: Adam's uniform sampling nature limits its ability to fully leverage the feedback provided by multiple samples in each mini-batch. This constraint leads to slower convergence (even with corrections) compared to our proposed algorithm, AdamCB, which utilizes a combinatorial bandit sampling mechanism to address these inefficiencies (see Table 1).\\n\\nOur proposed AdamCB algorithm overcomes these challenges by dynamically adapting the sampling distribution to prioritize informative samples, resulting in faster convergence. This improvement is supported by rigorous theoretical guarantees, as outlined in Theorem 1. The provable efficiency of our method is verified through comparisons with existing methods, as shown in Table 1.\\n\\nAdamCB achieves provable regret bounds and demonstrates superior convergence performance compared to both the original Adam and its bandit-based variant, AdamBS. Furthermore, our experiments highlight that AdamCB consistently outperforms its counterparts across various models and datasets. We will refine the paper to ensure these clarifications are more explicit and accessible. Thank you for the opportunity to highlight these points.\\n\\n---\\n**[W2] Knowledge of $L$ is NOT needed.**\\n\\nWe sincerely thank the reviewer for the comment regarding the knowledge of $L$ in Algorithm 3. We are happy to clarify that the presentation of the algorithm requiring knowledge of $L$ is actually **without loss of generality**, meaning that prior knowledge of $L$ is **not necessary**.\\n\\nIn fact, we can relax this requirement and adopt a dynamic approach where the upper bound \\\\(L\\\\) is replaced with a running maximum based on the **gradient norms** observed during training. 
Specifically:\\n\\n- **Dynamic update of $L$:** In practice, we can replace $L$ with $L_{t}=\\\\max_{t' \\\\leq t} \\\\max_{i \\\\in [n]} \\\\|g_{i,t'}\\\\|$, where $L_{t}$ maintain the running maximum gradient norm observed during training up to iteration $t$. This ensures $L_{t}$ is non-decreasing and provides a sufficient upper bound for the gradient norms.\\n- **Implementation in Algorithm 3:** The weight update rule is modified to incorporate $L_t$, which is updated at every iteration: $L_t \\\\leftarrow \\\\max(L_{t-1}, \\\\max_{i \\\\in [n]} \\\\| g_{i,t} \\\\|).$This ensures that $L_t$ remains a valid upper bound for all gradients observed during training.\\n- **Impact on Theoretical Guarantees:** The use of $L_t$ still preserves the theoretical analysis of the algorithm, as the bounded gradient assumption is satisfied by construction. The convergence guarantees and performance remain unaffected.\\n\\nThis modification enhances the practicality of our method while retaining the rigor of our theoretical results. We sincerely thank the reviewer again for their valuable feedback and for providing us the opportunity to improve the practicality of our approach.\"}", "{\"comment\": \"I would like to thank the authors for their comprehensive explanations. My concerns have been well addressed, and I sincerely hope that these discussions will be included in the revised paper. So, I decided to raise my score.\"}" ] }
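The rebuttals above describe, in prose, two implementation details of the batch-selection step: scaling the selection probabilities so they sum to the batch size K while capping any probability above a threshold τ and redistributing the excess, and replacing the gradient-norm bound L with a running maximum L_t. A minimal sketch of that bookkeeping follows; the exact capping/redistribution rule, the helper names, and the toy weights are assumptions for illustration, not a reproduction of the paper's Algorithm 3.

```python
# Minimal sketch of two bookkeeping steps described in the discussion above.
# NOT the paper's Algorithm 3; the redistribution rule here is an assumed variant.
import numpy as np

def probabilities_summing_to_K(w, K, tau):
    """Scale weights w into selection probabilities that sum to the batch size K,
    capping any probability above the threshold tau and redistributing the
    remaining mass over the uncapped samples."""
    w = np.asarray(w, dtype=float)
    assert tau * len(w) >= K, "tau is too small for the total mass K"
    capped = np.zeros(len(w), dtype=bool)      # samples moved to the "null set"
    p = K * w / w.sum()
    while True:
        over = (p > tau) & ~capped
        if not over.any():
            return p
        capped |= over
        p = np.where(capped, tau, 0.0)         # capped samples sit exactly at tau
        remaining = K - tau * capped.sum()     # mass left for the uncapped samples
        free = ~capped
        p[free] = remaining * w[free] / w[free].sum()

def update_gradient_bound(L_prev, grad_norms):
    """Running maximum L_t = max(L_{t-1}, max_i ||g_{i,t}||): no bound on the
    gradient norms needs to be known ahead of training."""
    return max(L_prev, float(np.max(grad_norms)))

# toy usage
w = np.array([5.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
p = probabilities_summing_to_K(w, K=4, tau=0.9)
print(np.round(p, 3), p.sum())                       # probabilities sum to K = 4
print(update_gradient_bound(0.0, [0.3, 1.7, 0.9]))   # -> 1.7
```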
BZr41xSleC
Rethinking Message Passing for Algorithmic Alignment on Graphs
[ "Joël Mathys", "Florian Grötschla", "Kalyan Varma Nadimpalli", "Roger Wattenhofer" ]
Most Graph Neural Networks are based on the principle of message-passing, where all neighboring nodes exchange messages with each other simultaneously. We want to challenge this paradigm by introducing the Flood and Echo Net, a novel architecture that aligns neural computation with the principles of distributed algorithms. In our method, nodes sparsely activate upon receiving a message, leading to a wave-like activation pattern that traverses the graph. Through these sparse but parallel activations, the Net becomes more expressive than traditional MPNNs which are limited by the 1-WL test and also is provably more efficient in terms of message complexity. Moreover, the mechanism's ability to generalize across graphs of varying sizes positions it as a practical architecture for the task of algorithmic learning. We test the Flood and Echo Net on a variety of synthetic tasks and find that the algorithmic alignment of the execution improves generalization to larger graph sizes. Moreover, our method significantly improves generalization and correct execution in terms of graph accuracy on the SALSA-CLRS benchmark.
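To illustrate the wave-like execution described in the abstract, here is a rough sketch of a flood-and-echo message schedule on an undirected, connected graph: nodes are grouped into BFS levels from an origin, activated level by level moving outward (flood), and then swept back toward the origin (echo). The placeholder sum-based updates stand in for the learned neural components; this is an assumed illustration of the schedule the architecture aligns with, not the authors' model.

```python
# Rough sketch of a flood-and-echo message schedule, with trivial placeholder
# updates in place of learned neural functions. Assumptions: undirected, connected
# graph; phases are plain BFS levels from a chosen origin node.
from collections import defaultdict, deque

def bfs_levels(adj, origin):
    """Group nodes by hop distance from the origin (the wavefront phases)."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    levels = defaultdict(list)
    for node, d in dist.items():
        levels[d].append(node)
    return [levels[d] for d in sorted(levels)]

def flood_and_echo(adj, origin, state):
    levels = bfs_levels(adj, origin)
    # Flood phase: activate nodes level by level, moving away from the origin.
    for prev, curr in zip(levels, levels[1:]):
        prev_set = set(prev)
        for v in curr:
            incoming = [state[u] for u in adj[v] if u in prev_set]
            state[v] = state[v] + sum(incoming)        # placeholder "update"
    # Echo phase: sweep back toward the origin in reverse level order.
    for curr, nxt in zip(reversed(levels[:-1]), reversed(levels[1:])):
        nxt_set = set(nxt)
        for v in curr:
            incoming = [state[u] for u in adj[v] if u in nxt_set]
            state[v] = state[v] + sum(incoming)        # placeholder "update"
    return state

# toy usage: path graph 0-1-2-3, origin 0, every node starts with state 1
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(flood_and_echo(adj, origin=0, state={v: 1 for v in adj}))
```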
[ "Graph Neural Networks", "Algorithm Learning", "Message Passing" ]
Reject
https://openreview.net/pdf?id=BZr41xSleC
https://openreview.net/forum?id=BZr41xSleC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ySB8dqKVtb", "rJQ5RCNEo9", "hPwXG8T6w5", "dpXLzZKwru", "aygoYXfbU2", "Wt5dWusgmN", "WGNYOqbwIF", "W6ZcO4UdM3", "PEe17ZLq4L", "OCLqNgRpoy", "OBXRTJ1C2e", "Ks7wZG6bYj", "G4wFmel7Yv", "DARdXhxfd1", "AotqsvMBc7", "228ZqKNQUa", "1GAfroIBID", "04joCLaome" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review" ], "note_created": [ 1732822697287, 1732350926036, 1730582075101, 1732463721515, 1732317447240, 1732317227563, 1729606711579, 1737524100353, 1732317873563, 1732316907335, 1732701631418, 1732528851014, 1729511894278, 1732317890751, 1732680915205, 1734734169956, 1732317243790, 1730704231847 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_9TNP" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_tssR" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_9TNP" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_o8Qp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_tssR" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_o8Qp" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_tssR" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_hDVf" ], [ "ICLR.cc/2025/Conference/Submission11057/Area_Chair_BdC7" ], [ "ICLR.cc/2025/Conference/Submission11057/Authors" ], [ "ICLR.cc/2025/Conference/Submission11057/Reviewer_hDVf" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your detailed rebuttal. I have reviewed it carefully and have decided to maintain my original score.\\n\\nThe paper has several limitations that require further experimental validation. For instance, the use of ER and other small random graphs in the experiments is problematic, as it does not adequately represent a wide range of graph types. Additionally, the proposed method is limited to undirected and connected graphs, which significantly restricts its applicability.\\n\\nThank you again for your thoughtful responses.\"}", "{\"title\": \"Response to Rebuttal by Authors\", \"comment\": \"Thank you very much for your responses to my questions! I still have a couple of outstanding clarifications based on my initial points.\", \"my_initial_comment\": \"_\\\"The paper says that the enhanced expressiveness comes from the \\u2018unique message propagation strategy\\u2019 and the \\u2018structured activations of the nodes\\u2019. However, Theorem 4.3 suggests that the expressivity improvements comes purely from marking a node\\\"_\\n\\n I am not sure that your answer is related to this, but to my point after where I say _\\\"Additionally, the choice of node to mark can effect expressivity\\\"_. My point was that I don't see how the message propagation strategy itself improves expressivity, just that the node marking does (as suggested by Theorem 4.3.). 
Could you clarify or provide evidence for how the propagation scheme itself contributes to enhanced expressiveness?\", \"initial_comment\": \"_\\\"Your method breaks symmetries through the origin node and so may be less beneficial for tasks without this ordering.\\\"_\\n\\nSorry, I think there may be some confusion about what I mean here. I am not saying that \\\"this aspect of the mechanism should be helpful\\\" for the tasks which you use but that for tasks where there is no ordering, it may be harmful. For instance, you can cause two isomorphic graphs to be distinguished by randomly marking nodes in each. The results for \\\"Flood and Echo random\\\" (which seems more relevant for tasks without an ordering?) are underwhelming and don't seem to improve on RecGNN whilst being more computational complex. This led me to wonder about the suitability of the approach for tasks without a fixed ordering.\", \"your_response\": \"_\\\"We believe that the FE Net should improve generalisation due to multiple factors. On one side, there is the involvement of the entire graph, without relying on external inputs such as scaling the number of rounds.\\\"_\\n\\nAre you saying that you can use \\\"m\\\" rounds of message-passing in the train set to cover the full graph but if the graphs are larger in the test set then you may not cover the full graph with standard MPNN approaches. So you have some dependence on the number of rounds for generalisation? This also relates to my long-range interactions point where I state that we may have under-reaching between two nodes in a large graph if we don't use enough layers for the small training graphs. Have I understood this point correctly? If so, are there practical scenarios where we need to generalise to graphs with vastly different diameters?\\n\\n_\\\"Are there any direct applications for direct graphs that you are considering?\\\"_\\n\\nNot particularly, I was just considering the generality of the approach to a wide range of different graph types. I can see how this method can be adapted in these cases now - thank you.\"}", "{\"summary\": \"This paper introduces a message-passing graph neural network called Flood and Echo Network (EF Net). The message-passing structure mimics a breadth-first traversal (BFS) starting from a source node. The authors demonstrate that EF Net is more expressive than traditional MPNNs, which are limited by the 1-WL test, and is also provably more efficient in terms of message complexity. Applied to the SALSA-CLRS benchmark, EF Net shows improvements in generalization and correct execution, achieving higher accuracy on this benchmark.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper introduces a message-passing neural network that operates similarly to a breadth-first traversal, requiring O(m) messages, where m is the number of edges in the graph. For many tasks evaluated, FE Net requires fewer messages compared to several other APNN models.\\n2.\\tThe authors demonstrate that FE Net can be more expressive than standard APNN models, particularly in the context of the Weisfeiler-Lehman (WL) test performance.\\n3.\\tThe paper evaluates FE Net on a subset of tasks from the SALSA-CLRS benchmark using randomly generated ER graphs. 
According to this benchmark, FE Net shows improved generalization compared to GIN and other APNN models.\", \"weaknesses\": \"1.\\tI recommend revising the introduction to clarify that the paper specifically focuses on \\\"neural algorithmic reasoning on graphs.\\\" While the title mentions this, the introduction could be clearer, as it currently suggests a focus on general graph learning. Additionally, there is no evidence provided that FE Net outperforms MPNNs on typical supervised or semi-supervised tasks, such as node classification and link prediction.\\n2.\\tThe exact aggregation and update operations used in FE Net are not clearly discussed. For instance, GIN has been shown to be more expressive with sum pooling as the message aggregator. To establish the expressiveness of FE Net, it would be beneficial to specify these operators explicitly. Furthermore, Theorem 4.2 could be strengthened by showing that FE Net does not fail in cases where the 1-WL test succeeds, in addition to distinguishing graphs where the 1-WL test fails.\\n3.\\tSome assumptions in the paper require further clarification. For instance, it seems to assume that a typical MPNN exchanges O(m) messages per layer, where m is the number of edges. However, the actual number of messages depends on the computational graph. If there is a batch of k nodes with an average degree of d, a 2-layer GCN will involve O(kd^2) messages. Additionally, models like GraphSAGE use sampling to reduce message volume, so O(m) messages per layer may not apply to most APNNs.\\n4.\\tFE Net operates similarly to a BFS traversal, where nodes at the same distance from the source are processed together. Given that the algorithms studied also use BFS-like traversal, it is unsurprising that FE Net outperforms some other GNN models. It is unclear, however, how FE Net would perform with algorithms that don\\u2019t resemble BFS, as FE Net (and other GNNs) struggled with generalizing to tasks like DFS and MST.\\n5.\\tMost of the experiments in the paper are conducted with Erd\\u0151s-R\\u00e9nyi (ER) graphs, which may not fully represent practical graph structures. Showing results on scale-free graphs or real-world graphs could provide more comprehensive insights into FE Net\\u2019s performance.\\n6.\\tSince FE Net begins from a source node and reaches all nodes in a connected component, it is unclear how it would handle graphs with multiple connected components. 
Providing a discussion on this aspect would improve understanding of FE Net's applicability to such cases.\", \"questions\": \"1.\\tDo the authors have any insights on how EF Net performs on scale-free graphs?\\n2.\\tCan EF Net be applied to semi-supervised learning tasks, such as node classification?\\n3.\\tIs EF Net suitable for non-BFS style algorithms, like clustering or triangle counting?\\n4.\\tWhat specific aggregation and update operations are employed in EF Net?\\n5.\\tHow is it ensured that EF Net does not fail in cases where the 1-WL test succeeds?\\n6.\\tIn the experiments with GIN, how many layers were used?\\n7.\\tDoes EF Net's performance decline when applied to high-diameter graphs, such as road networks?\\n8.\\tHow does EF Net handle graphs with multiple connected components?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> I am not sure that your answer is related to this, but to my point after where I say \\\"Additionally, the choice of node to mark can effect expressivity\\\". My point was that I don't see how the message propagation strategy itself improves expressivity, just that the node marking does (as suggested by Theorem 4.3.). Could you clarify or provide evidence for how the propagation scheme itself contributes to enhanced expressiveness?\\n\\nUnfortunately the first paragraph of the initial answer was lost when reformatting in openreview, our apologies. In Theorem 4.3, we show equivalence to a model which has access to a marked node. Note that in the FE net, the origin is never in any way distinguished or contains explicit distance information that would distinguish it from the rest of the nodes, so there is no marking present. Only through the propagation mechanism do the nodes gain additional information that they can leverage. \\n\\n> Sorry, I think there may be some confusion about what I mean here. I am not saying that \\\"this aspect of the mechanism should be helpful\\\" for the tasks which you use but that for tasks where there is no ordering, it may be harmful. For instance, you can cause two isomorphic graphs to be distinguished by randomly marking nodes in each. The results for \\\"Flood and Echo random\\\" (which seems more relevant for tasks without an ordering?) are underwhelming and don't seem to improve on RecGNN whilst being more computational complex. This led me to wonder about the suitability of the approach for tasks without a fixed ordering.\\n\\nWhether this is helpful or not probably depends a bit on the task as well. Theoretically, already random features give you universal approximation power - with the downside that this is much harder to train. On the other hand, not breaking any symmetries gives you problems with 1-WL indistinguishability - i.e. if you want to predict missing edges but have nodes with the same embedding that can give you edges that cross the entire graph, even if all present edges are very local. In that sense, our method resides somewhere in between, using \\u201cjust a bit\\u201d of symmetry breaking for more information but not too much to destabilise or completely lose the graph induced priors. On a more practical note, it is likely that the graph has not that many symmetries that need to be broken, especially if it is an attributed graph [1]. 
Therefore this likely has a limited impact (beyond expressivity benchmarks), and also gives you more reason/good options to switch over choosing a fixed node.\\n\\n[1]https://arxiv.org/abs/2202.10156 \\n\\n> Are you saying that you can use \\\"m\\\" rounds of message-passing in the train set to cover the full graph but if the graphs are larger in the test set then you may not cover the full graph with standard MPNN approaches. So you have some dependence on the number of rounds for generalisation? This also relates to my long-range interactions point where I state that we may have under-reaching between two nodes in a large graph if we don't use enough layers for the small training graphs. Have I understood this point correctly? If so, are there practical scenarios where we need to generalise to graphs with vastly different diameters?\\n\\nIf your task is depending on the whole graph, then using MPNNs on the original graph do require some sort of adjusting the number of layers. Otherwise underreaching will happen at some point. In that case, from the perspective of a single node more computation is done (as the number of rounds is increased). Whereas in the FE Net it can be that the total computation is adjusted according to the graph, but the number of phases remains unchanged. Then, from the perspective of a single node, there is no change in the computation - even though from a graph perspective there are more steps executed. We believe that this behaviour can be very helpful for generalising to larger graphs.\\n\\nRegarding generalisation, we are not aware of many settings where this is currently studied besides NAR. Note, that proper size generalisation is very hard, and probably the main reason to study and progress the field of NAR as it focuses on how to learn a generalizable behaviour. There will always be a limit on what sizes you can train a system. Ideally it would suffice to train on these (maybe small/short molecular graphs?) and then still be able to apply it to much larger systems where the same underlying rules should hold. However, this is rather a long term vision than the current state of the field.\"}", "{\"comment\": \"We thank the reviewer for his insights and will go into the raised points and questions in the following.\\n\\n> One concern of FENet is over-squashing problem...\\n\\nOur work does not explicitly aim to address the over-squashing problem, as this is primarily a property of the underlying graph topology rather than the message-passing architecture itself. We also include a statement about over-squasing in the Appendix. Recent work (Giovanni et al., 2023) shows that graph topology is the dominant factor in over-squashing, with architecture choices having a marginal effect compared to bottlenecks in the graph structure. The FE Net's wave-like propagation pattern offers a different approach to information flow, but we acknowledge that fundamental topological bottlenecks would still affect performance. The architecture could potentially be combined with existing approaches that modify graph structure to address over-squashing, but this was beyond the scope of our current investigation which focused on algorithmic alignment of the message-passing mechanism itself.\\n\\n> Though the theoretical runtime complexity is the same as MPNN, the messages are executed in sequence, while in MPNN it is in parallel. 
Modern GPUs can handle large batches of data executed in parallel, but this sequential property of FENet might make it super slow on GPU, especially with a lot of phases.\\n\\nThere is a tradeoff between involving the entire graph and sequential operations. In order to facilitate information exchange across the entire graph, sequential operations are required for our approach. Note that this impacts any approach - even MPNNs require more sequential multiple rounds to achieve similar reach. As a benefit, our wave-like pattern activates only relevant nodes at each step, potentially reducing memory usage compared to simultaneous updates of all nodes. Most importantly, this structured approach with fewer messages enables performance improvements in tasks, as demonstrated by our results on PrefixSum, which aren't achievable with standard (parallel) message passing.\\n\\n> One minor point, please make the tables more readable...\\nWe will adjust this, thank you for the input.\\n\\n> Why do you pick FCrossConv to exchange messages between nodes at the same distance?\\n\\nWe include FCrossConv to include the entire graph structure and all edges. While the model could function without these cross-connections (equivalent to FCrossConv having no effect), including them provides flexibility. This allows the model to learn whether and how to utilise these additional connections based on the task requirements.\\n\\n> Why the FConv and FCrossConv are reversed in the echo phase?\\n\\nWe reverse the order in the echo phase to maintain consistency with the overall information flow pattern. Since the echo phase represents information flowing back toward the origin, we thought it to be more natural to first process nodes at the same distance level (ECrossConv) before passing messages inward (EConv), mirroring but reversing the outward flow pattern of the flooding phase.\\n\\n> For fixed mode, how exactly do you design which node to be the origin node?\\n\\nFor the fixed mode, the origin node is determined by the task. In the SALSA-CLRS experiments, we use the starting node provided by the task (e.g., source node in BFS or Dijkstra). When no special node is specified by the task, we default to using the node with id 0 as the origin.\\n\\n> The authors claim FENet message passing is more efficient. If I understand correctly, in a whole phase, the number of node updates as well as messages conveyed are not reduced. It's just the origin node is able to reach further neighbors in a phase.\\n\\nAt each individual computation step you have less updates/messages with the FE Net. If you consider an entire phase the number of messages is the same as one round in a standard MPNN. However information has propagated throughout the entire graph - and not just the origin node is influenced by this, but all other nodes (through their update) as well.\\n\\n> Is there a reason why you evaluate the framework on algorithmic alignment? Certainly it performs well, but it can also be a general framework for other tasks, say molecular prediction etc.\\n\\nAs of now we have primarily focused on the setting of algorithm learning on graphs. We thought the generalisation to larger graphs while performing computation throughout the entire graph could be best illustrated in that setting. However, it is true that one could use it in other settings as well instead of regular MPNNs. It is possible that for molecular predictions that have long-range interactions, reducing message complexity through alignment could be of interest. 
However, how such interactions could be learned directly on large systems (whereas in our setting they are always smaller during training) is still a challenging open research question, so it might not yet be directly applicable.\"}", "{\"comment\": \"We thank the reviewer for his insights and will go into the raised points and questions in the following.\\n> I recommend revising the introduction ...\\n\\nWe will revise our introduction and put more emphasis on neural algorithmic reasoning, our intention was to outline the proposed architecture first and then dive into nar on graphs. While our novel take on the message-passing mechanism that could be relevant for general graph learning, this work specifically investigates algorithmic reasoning and size generalisation. As such, application to semi-supervised tasks remains an interesting direction for future work (possibly in the context of NAR), but is not the current focus of this paper\\n\\n> The exact aggregation and update operations used in FE Net are not clearly discussed. For instance, GIN has been shown to be more expressive with sum pooling as the message aggregator. To establish the expressiveness of FE Net, it would be beneficial to specify these operators explicitly. Furthermore, Theorem 4.2 could be strengthened by showing that FE Net does not fail in cases where the 1-WL test succeeds, in addition to distinguishing graphs where the 1-WL test fails.\\n\\nWe provide details in Appendix F or in the attached code base, the FE net intends to do a GIN like update step including a GRU. During the experiments the same aggregation was used consistently among the baselines. \\nRegarding expressivity, we believe that Theorem 4.1 should already provide the suggested strengthening. Namely, that FE Net can simulate any MPNN, thus preserving 1-WL expressivity. As such, if the WL test distinguishes two graphs, so should our method. In the case of testing the same graph twice, the origin node needs to be chosen the same in order for the test to succeed. \\n\\n> Some assumptions in the paper require further clarification. For instance, it seems to assume that a typical MPNN exchanges O(m) messages per layer, where m is the number of edges. However, the actual number of messages depends on the computational graph. If there is a batch of k nodes with an average degree of d, a 2-layer GCN will involve O(kd^2) messages. Additionally, models like GraphSAGE use sampling to reduce message volume, so O(m) messages per layer may not apply to most APNNs.\\n\\nThe reviewer raises an important point about message complexity. Our analysis assumes the standard MPNN setting where messages are passed along in the entire graph, including all edges, leading to O(m) complexity per round. While sampling techniques like in GraphSAGE can reduce this by redefining the graph in question to be limited to a specific subgraph, our comparison focuses on architectures operating on the full graph topology to ensure fair evaluation of algorithmic capabilities. Note, that in principle you could also use the FE Net in combination with these subsampling techniques, then there would be O(m\\u2019) messages where m\\u2019 = k*d^2 the number of edges in the subgraph.\\n\\n> FE Net operates similarly to a BFS traversal, where nodes at the same distance from the source are processed together. Given that the algorithms studied also use BFS-like traversal, it is unsurprising that FE Net outperforms some other GNN models. 
It is unclear, however, how FE Net would perform with algorithms that don\\u2019t resemble BFS, as FE Net (and other GNNs) struggled with generalizing to tasks like DFS and MST.\\n\\nExtrapolation is a very hard objective and a fundamental challenge in neural algorithmic reasoning. Our results demonstrate that architectural alignment is a viable option to significantly improve generalisation. Another promising indicator of this are the results on MIS (maintaining >80% accuracy at 160 nodes) and the consistent improvement in performance when increasing the number of phases (Figure 7). These findings suggest that our approach of aligning neural architectures with algorithmic principles is a promising direction for improving generalisation, even for tasks not directly aligned.\\n\\n> Most of the experiments in the paper are conducted with Erd\\u0151s-R\\u00e9nyi (ER) graphs, which may not fully represent practical graph structures. Showing results on scale-free graphs or real-world graphs could provide more comprehensive insights into FE Net\\u2019s performance.\\n\\nWhile our evaluation primarily uses ER graphs following the SALSA-CLRS and CLRS benchmark setup, the full results in Appendix I also include performance across WS and Delaunay graphs. Besides, Distance and PrefixSum is evaluated on Trees and Line graphs, where the latter has very high diameter. As such, we think we already provide a reasonably general and challenging family of graphs. Is there a particular reason to include scale-free networks in the ablations, which might even apply to NAR in general?\"}", "{\"summary\": \"This work proposes a framework, Flood and Echo network (FENet), which breaks the synchronous limit of MPNNs. Theoretically they prove that FENet has higher expressivity than 1WL. Empirically they show great performance of FENet on three algorithmic alignment tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, the idea is pretty interesting and highly novel.\", \"The writing is clear and good, also the illustrations make sense for understanding the framework.\", \"The theory is sound.\", \"The performance of FENet is validated in experiment part.\"], \"weaknesses\": [\"One concern of FENet is over-squashing problem. The sensitivity of a node to the original node is unclear, the message from the origin node pass out to the furthest nodes, then gathered back, the over-squashing issue on large graph may not be solved, even aggravated.\", \"Though the theoretical runtime complexity is the same as MPNN, the messages are executed in sequence, while in MPNN it is in parallel. Modern GPUs can handle large batches of data executed in parallel, but this sequential property of FENet might make it super slow on GPU, especially with a lot of phases.\", \"One minor point, please make the tables more readable, for example, highlight the best candidate in the table, so it is clearer to readers.\", \"Some design details are not quite clear to me. See questions below.\"], \"questions\": [\"Why do you pick FCrossConv to exchange messages between nodes at the same distance?\", \"Why the FConv and FCrossConv are reversed in the echo phase?\", \"For _fixed_ mode, how exactly do you design which node to be the origin node?\", \"The authors claim FENet message passing is more efficient. If I understand correctly, in a whole phase, the number of node updates as well as messages conveyed are not reduced. 
It's just the origin node is able to reach further neighbors in a phase.\", \"Is there a reason why you evaluate the framework on algorithmic alignment? Certainly it performs well, but it can also be a general framework for other tasks, say molecular prediction etc.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank the reviewer for his insights and will go into the raised points and questions in the following.\\n> The paper says that the enhanced expressiveness comes from the \\u2018unique message propagation strategy\\u2019 and the \\u2018structured activations of the nodes\\u2019. However, Theorem 4.3 suggests that the expressivity improvements comes purely from marking a node...\\n\\nIn general it is true that a randomised selection policy might not be optimal. One could think of other heuristics (sample upon some graph statistics) or even learn a policy as in the paper you suggested. In our studied context, we found that many tasks yield a natural starting point or tiebreak for an appropriate selection - this might be related to the fact that for many (attributed) graphs the 1-WL expressiveness already suffices. But it would be an interesting future direction to explore if learning the start can improve performance in general. \\n\\n> Firstly, the theoretical argument for improved generalization isn\\u2019t convincing. You argue that the new message-passing scheme is more natural \\u2018as the computation inherently involves the entire graph\\u2019...\\n\\nWe believe that the FE Net should improve generalisation due to multiple factors. On one side, there is the involvement of the entire graph, without relying on external inputs such as scaling the number of rounds. Note that essentially RecGNN does exactly this, but seems to perform worse in comparison. Moreover, FE net only updates the nodes sparsely with few messages throughout the computation - this can allow it to find and redistribute information using fewer updates to the graph, always using O(m) messages to do so. \\n\\nThe mentioned variants of the FE Net don\\u2019t perform as good as fixed - this is likely due to the fact that training them is more difficult. We have conducted an ablation to assess this in Appendix G.3 where we find that different models can differ, but within a model there is not as much randomness. However, for many tasks you can find an appropriate choice as a starting node, so this is why we put more emphasis on the fixed variant, and put less emphasis on stabilising the other two modes.\\n\\nRegarding the reference to [2]: As far as we understood, in this paper they also consider the SALSA-CLRS tasks, however the actual data that is used is not the same. Recall that the FE-Net does not use any hints throughout training. DNAR on the other side cannot do that (outlined in section 6) as they heavily rely on hints during training. Moreover, if we understood the description and code correctly they actually use custom hints closer to their formulation instead of the hints that are included with CLRS/SALSA-CLRS. Whereas we propose an algorithmic alignment of the general message passing architecture and train without any tailored supervision. Therefore, I don\\u2019t think the results are at all comparable. 
\\n\\n> Time comparisons to RecGNN would improve the paper, given that it outperforms your approach on some tasks.\\n\\nWe do provide some timing results in the Appendix. However, keep in mind that the fixed variant of FE Net consistently outperforms RecGNN. \\n\\n> The benchmarks chosen seem to have a natural ordering of nodes. Your method breaks symmetries through the origin node and so may be less beneficial for tasks without this ordering.\\n\\nThe method does introduce a component of symmetry breaking as nodes are able to figure out the distance to the origin node. As the nodes already have an order, there should be no more symmetries that can be broken. Therefore, we do not see how this aspect of the mechanism should be helpful. But it is likely that we misunderstood the question and we would be glad if the reviewer could clarify their intentions if this is the case.\\n\\n> GIN will struggle to solve tasks that require long-range interactions due to over-squashing and under-reaching. For example, for path graphs of size 100, you may need a large number of layers so that two nodes interact. If your method improves over GIN on unseen larger graphs - does that actually imply that it is better at generalizing? (Given that GIN won\\u2019t be able to solve the task on these larger graphs even when they are in the training set). To me, it is less about generalization and more about efficient long-range interactions.\\n\\nLong-range interactions certainly are an important aspect, as we involve the entire graph instead of executing a constant number of graph convolutions. However, even if the computation is scaled appropriately for GIN (i.e. RecGNN or on SALSA) the FE Net seems to perform better on these tasks. We would argue that it is more about generalisation as you apply it on larger unseen graphs rather than efficient long-range, which you could study without size generalisation during testing. Moreover, learning general long-range interactions on large graphs might be its own challenge, whereas you could learn a principle on small and generalise the same principle to larger instances and incorporate some long range effects.\"}", "{\"comment\": \"We thank the reviewer for his insights and will go into the raised points and questions in the following.\\n> The proposed algorithm does not generalize to other algorithmic tasks as well as the tasks it is designed for.\\n\\nThe architecture seems to perform better where the algorithmic tasks are aligned with the flooding/echo patterns. However, we would like to point out that extrapolation is a very hard objective and a fundamental challenge in neural algorithmic reasoning. Our results demonstrate that architectural alignment is a viable option to significantly improve generalisation. Another promising indicator of this are the results on MIS (maintaining >80% accuracy at 160 nodes) and the consistent improvement in performance when increasing the number of phases (Figure 7). 
These findings suggest that our approach of aligning neural architectures with algorithmic principles is a promising direction for improving generalisation, even for tasks not directly matching the flooding/echo pattern.\\n> The impact of the algorithm is not clear, which real world use-cases would the proposed approach be the most beneficial for?\\n\\nWhile we focus on leveraging algorithmic alignment on algorithmic tasks due to our interest in size generalisation, the concepts of sparse but parallel computation patterns involving the entire graph efficiently could be of independent interest for other applications across graph learning. Especially in dealing with long-range interaction systems, reducing message complexity through alignment could be of interest. However, how such interactions could be learned directly on large systems (whereas in our setting they are always smaller during training) is still a challenging open research question.\\n\\n> What is the dependency of the proposed method to the chosen origin node?\\n\\nWe have conducted an experiment in Section G.3, Table 8 to assess the effect of the origin node. Our analysis shows that while random starts lead to some training instability, as not all models achieve perfect accuracy. However, individual trained models show consistent performance with narrow deviations across 50 runs on 1000 graphs.\"}", "{\"comment\": \"Thank you again for the clear answers to my questions.\\n\\nThe approach outlined in the paper does seem promising in terms of both expressivity and improving the generalisation abilities of the network. However, I still remain concerned about the performance of \\\"Flood and Echo random\\\". This highlights that the method may not perform well on tasks where an origin node is not easily defined. Additionally, whilst the possible improved generalisation of the propagation scheme is argued in a \\\"intuitive\\\" manner, in the paper and in the rebuttal, it is not theoretically grounded in the manuscript. Given that the performance is much improved with the same propagation scheme but with a different chosen node (fixed vs random), it is not clear (beyond some intuition) why/how this propagation scheme sufficiently improves generalisation. \\n\\nIt is for these concerns, that I will be maintaining my original score.\"}", "{\"comment\": \"Thank you for your great effort on the work and the rebuttal. I agree the EF Net is pretty interesting and fits some of the algorithmic reasoning setting. The authors' responses answer my questions. However, through reading other reviewers' comments, there still exist some limitations of the work. Therefore I would like to keep my scores.\"}", "{\"summary\": \"The authors propose a novel mechanism where messages are propagated outwards from an origin node and then back to the node. This is shown to be more expressive than 1-WL and have less memory complexity than standard MPNNs. The authors then demonstrate through experiments that the method can generalize better to larger graph sizes on some tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"To my knowledge, the mechanism is novel and using an origin node relates to other methods (eg. Subgraph GNNs) whilst the propagation mechanism is more efficient. 
Seeing this approach will be beneficial to the community and could impact other areas outside algorithmic alignment such as improving long-range interactions.\\n\\nThe paper is well-written and the memory complexity and expressivity improvements are argued with diagrams, text and theorems. It is very clear what the contributions and goals of the method are and many experiments have been run to ablate the model.\", \"weaknesses\": \"[Enhanced Expressiveness]\\n\\nThe paper says that the enhanced expressiveness comes from the \\u2018unique message propagation strategy\\u2019 and the \\u2018structured activations of the nodes\\u2019. However, Theorem 4.3 suggests that the expressivity improvements comes purely from marking a node. Therefore, the expressivity advantages from the actual propagation scheme is not actually shown. Could you clarify or provide evidence for how the propagation scheme itself contributes to enhanced expressiveness? Additionally, the choice of node to mark can effect expressivity and randomly choosing a node can be suboptimal [1] - this may be an issue when the task doesn't align with a specific choice of node. \\n\\n\\n\\n[Generalize to large graph sizes]\\n\\nFirstly, the theoretical argument for improved generalization isn\\u2019t convincing. You argue that the new message-passing scheme is more natural \\u2018as the computation inherently involves the entire graph\\u2019. You could increase the neighbourhood size of standard message-passing to the whole graph (eg. Graph Transformers). It is not clear that this would improve generalization (it should perform worse without positional encodings), so what specific aspects of the architecture contribute to improved generalization, beyond just involving the entire graph? Secondly, the experimental section isn\\u2019t convincing. For example, your method improves on PrefixSum (this is a path graph which means all nodes interact in your scheme) but [all, random] are worse than RecGNN on the other two tasks. Additionally, your method is only better than **old baselines** on **some** of the SALSA-CLRS tasks. This is in contrast to concurrent work [2] that seems to solve these tasks [I don\\u2019t expect a comparison to concurrent work but it does suggest that the improvement of your method is not substantial].\\n\\n[Minor Weaknesses]\\n\\n- Paper is limited to size generalization and does not account for other factors such as change in connectivity distributions.\\n- Time comparisons to RecGNN would improve the paper, given that it outperforms your approach on some tasks.\\n- The benchmarks chosen seem to have a natural ordering of nodes. Your method breaks symmetries through the origin node and so may be less beneficial for tasks without this ordering.\", \"questions\": [\"GIN will struggle to solve tasks that require long-range interactions due to over-squashing and under-reaching. For example, for path graphs of size 100, you may need a large number of layers so that two nodes interact. If your method improves over GIN on unseen larger graphs - does that actually imply that it is better at generalizing? (Given that GIN won\\u2019t be able to solve the task on these larger graphs even when they are in the training set). To me, it is less about generalization and more about efficient long-range interactions.\", \"Is your method easily parallelizable? 
Although message complexity may be less it is not clear to me that this method would have a favorable runtime.\", \"How would your method work with directed or disconnected graphs? Wouldn\\u2019t this mean that some nodes will never receive information from other nodes. Additionally for Theorem 4.2, as well as being connected, do you not also need the graphs to be undirected? For example, a path graph where the direction of the edges is the same way. If I pick the last node as the origin node then would I be less expressive (I guess depends on the implementation)?\", \"[1] Efficient Subgraph GNNs by Learning Effective Selection Policies. Bevilacqua et al. ICLR 2024.\", \"[2] Discrete Neural Algorithmic Reasoning. Rodionov et al.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Is your method easily parallelizable? Although message complexity may be less it is not clear to me that this method would have a favourable runtime.\\n\\nThere is a tradeoff between involving the entire graph and sequential operations. In order to facilitate information exchange across the entire graph, sequential operations are required for our approach. Note that this impacts any approach - even MPNNs require more sequential multiple rounds to achieve similar reach. As a benefit, our wave-like pattern activates only relevant nodes at each step, potentially reducing memory usage compared to simultaneous updates of all nodes. Most importantly, this structured approach with fewer messages enables performance improvements in tasks, as demonstrated by our results on PrefixSum, which aren't achievable with standard (parallel) message passing.\\n\\n> How would your method work with directed or disconnected graphs? Wouldn\\u2019t this mean that some nodes will never receive information from other nodes. Additionally for Theorem 4.2, as well as being connected, do you not also need the graphs to be undirected? For example, a path graph where the direction of the edges is the same way. If I pick the last node as the origin node then would I be less expressive (I guess depends on the implementation)?\\n\\nBecause the graphs are usually undirected and connected, we mainly focused on this scenario. The principle could be generalised for disconnected graphs by just choosing a starting node in each component, for the directed case it depends what you would like to achieve. You could treat each edge as undirected and just encode the direction as a feature, or really enforce that the message only flows one way. In that case it would not be favourable to start in a sink, but instead you could start in one (or all) sources instead. Are there any direct applications for direct graphs that you are considering?\"}", "{\"comment\": \"Thank you for the rebuttal, I acknowledge that I have read it and would like to keep my score.\"}", "{\"metareview\": [\"**(a) Scientific Claims and Findings:**\", \"The paper introduces the Flood and Echo Net, a novel architecture that aligns neural computation with distributed algorithm principles. Unlike traditional Graph Neural Networks (GNNs) that rely on simultaneous message-passing among neighboring nodes, this method employs sparse activations upon message receipt, creating a wave-like activation pattern that traverses the graph. This approach enhances expressiveness beyond the limitations of the 1-WL test and improves message-passing efficiency. 
The architecture's ability to generalize across graphs of varying sizes makes it suitable for algorithmic learning tasks. Empirical evaluations on synthetic tasks demonstrate that the Flood and Echo Net's algorithmic alignment improves generalization.\", \"**(b) Strengths:**\", \"Innovative Approach: The introduction of sparse, wave-like activations in GNNs offers a novel method for information propagation, potentially overcoming limitations of traditional message-passing mechanisms. Multiple reviewers appreciate the high novelty.\", \"Enhanced Expressiveness: By aligning neural computation with distributed algorithm principles, the Flood and Echo Net demonstrates increased expressiveness, surpassing the constraints of the 1-WL test.\", \"Scalability: The architecture's ability to generalize across graphs of varying sizes positions it as a practical solution for algorithmic learning tasks involving diverse graph structures.\", \"Validation: The method's effectiveness is supported by empirical results on synthetic tasks and the SALSA-CLRS benchmark.\", \"Presentation: The paper is well written and understandable.\", \"**(c) Weaknesses:**\", \"Computational Efficiency: The paper does not provide a detailed analysis of the runtime of the Flood and Echo Net, particularly concerning large-scale graphs.\", \"Theoretical Justification: While the paper introduces a novel architecture, it lacks a comprehensive theoretical analysis explaining why the sparse, wave-like activation pattern leads to improved generalization performance (beyond just using the whole graph according to Reviewer ), which would strengthen the understanding of the method's advantages.\", \"Comparative Analysis: The paper would benefit from a more detailed comparison with existing GNN architectures to clearly highlight the advantages and potential limitations of the proposed approach.\", \"Real world impact/application relevance: The real world application use cases are unclear. The authors have proposed long-range benchmarks but have not conducted related experiments. The experiments are limited to unrealistic graph structures (mostly ER graphs).\", \"(Minor) Limitations: The approach is limited to undirected and connected graphs.\", \"Parallelisation potential and potential of memory reduction should be discussed and quantified/analyzed in the paper (and not only the rebuttal).\", \"Raised concern (Reviewer tssR): Performance of method unclear on tasks where an origin node is not easily defined.\", \"**(d) Reasons for Rejection:**\", \"After careful consideration, the decision to reject the paper is based on the following reasons:\", \"1. Insufficient Computational Analysis: The lack of a detailed examination of the Flood and Echo Net raises concerns about its practical applicability to large-scale graphs and the generalization performance on real world tasks.\", \"2. Need for Comparative Evaluation: A more thorough comparison with existing state-of-the-art GNN models is necessary to substantiate the claimed improvements and to position the proposed method within the current research landscape. In particular, a comparison with other approaches that are more expressive than 1-WL would support the story of the paper.\", \"Addressing these concerns would enhance the paper's contribution to the field and its potential for acceptance.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewers largely have appreciated the novelty of the proposal. Yet, concerns (as detailed under weaknesses) remain. 
In particular, a more thorough experimental analysis that highlights the relevance of the Flood and Echo Net to solve real world tasks was requested.\"}", "{\"comment\": \"> Since FE Net begins from a source node and reaches all nodes in a connected component, it is unclear how it would handle graphs with multiple connected components. Providing a discussion on this aspect would improve understanding of FE Net's applicability to such cases.\\n\\nThe current implementation focuses on connected graphs, but could be naturally extended by selecting an origin for each connected component. Since no information can be exchanged between disconnected components (which also applies to standard MPNNs), this extension is straightforward but was not our primary focus.\\n\\n> Do the authors have any insights on how EF Net performs on scale-free graphs?\\n\\nWe have not specifically evaluated FE Net on scale-free graphs. The current experiments focus on ER, WS and Delaunay graphs following the SALSA-CLRS benchmark setup and additionally includes Trees and Linegraphs.\\n\\n> Can EF Net be applied to semi-supervised learning tasks, such as node classification?\\n\\nIn principle, FE Net can be used like any GNN for semi-supervised tasks. However, our work focuses on size generalisation for algorithmic reasoning where architectural alignment is likely to tap into unused potential.\\n\\n> Is EF Net suitable for non-BFS style algorithms, like clustering or triangle counting?\\n\\nFor tasks like clustering and triangle counting that primarily rely on local neighborhood information, the global information exchange of FE Net may not provide significant advantages over standard MPNNs.\\n\\n> What specific aggregation and update operations are employed in EF Net?\\n\\nWe use GRU-based updates and GIN-style aggregations, with full implementation details provided in Appendix F and our code repository.\\n> In the experiments with GIN, how many layers were used?\\n\\nFor the algorithmic tasks we use 5 layers for GIN as specified in the text. This setup was chosen as more layers led to instability where GIN did not learn anything useful.\\n\\n> Does EF Net's performance decline when applied to high-diameter graphs, such as road networks?\\n\\nPerformance naturally depends on whether information from distant nodes is relevant for the task. Our results suggest FE Net maintains better performance if a correct solution is learned than standard GNNs when such long-range information is important.\\n\\n> How does EF Net handle graphs with multiple connected components?\\n\\nThe current implementation focuses on connected graphs. Since information cannot be exchanged between disconnected components (also true for MPNNs), extending to multiple components is straightforward by selecting origins for each component.\"}", "{\"summary\": \"The paper proposes the flood and echo algorithm, that simply works by selecting an origin node, sending messages outward from this node. When the messages reach the end of the graph when centered at the origin node, they reflect back and trace the same path back to the origin node. 
It is theoretically proven that for the distance, path-finding and prefix sum tasks, the proposed model is a perfect fit.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed approach is theoretically and empirically validated.\", \"The paper is well written.\", \"There are other algorithmic tasks in the experiments than the ones tailored to the proposed approach.\"], \"weaknesses\": [\"The proposed algorithm does not generalize to other algorithmic tasks as well as the tasks it is designed for.\"], \"questions\": [\"The impact of the algorithm is not clear, which real world use-cases would the proposed approach be the most beneficial for?\", \"What is the dependency of the proposed method to the chosen origin node?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BZYIEw4mcY
Efficient and Trustworthy Causal Discovery with Latent Variables and Complex Relations
[ "Xiu-Chuan Li", "Tongliang Liu" ]
Most traditional causal discovery methods assume that all task-relevant variables are observed, an assumption often violated in practice. Although some recent works allow the presence of latent variables, they typically assume the absence of certain special causal relations to ensure a degree of simplicity, which might also be invalid in real-world scenarios. This paper tackles a challenging and important setting where latent and observed variables are interconnected through complex causal relations. Under a pure children assumption ensuring that latent variables leave adequate footprints in observed variables, we develop novel theoretical results, leading to an efficient causal discovery algorithm, which is the first one capable of handling the setting with both latent variables and complex relations within polynomial time. Our algorithm first sequentially identifies latent variables from leaves to roots and then sequentially infers causal relations from roots to leaves. Moreover, we prove the trustworthiness of our algorithm, meaning that when the assumption is invalid, it can raise an error signal rather than draw an incorrect causal conclusion, thus preventing potential damage to downstream tasks. We demonstrate the efficacy of our algorithm through experiments. Our work significantly enhances the efficiency and reliability of causal discovery in complex systems. Our code is available at: https://github.com/XiuchuanLi/ICLR2025-ETCD
[ "causal discovery", "latent variables", "complex causal relations" ]
Accept (Poster)
https://openreview.net/pdf?id=BZYIEw4mcY
https://openreview.net/forum?id=BZYIEw4mcY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z4xS08isbz", "vodINPFwcA", "vgfG8CcLM5", "v49r5G4zML", "tofr4H6u2v", "rH7nkMEwQA", "mYeYKMtwXj", "kVm1lJGhBe", "jV4pmtkY80", "i5tzqowwff", "eRRNV7rwMX", "WayFf9W5yj", "WTjNBrzv0S", "VcAnshSCjZ", "PnKFAzBpK3", "PBjYdzADS5", "ObGbHsGbNR", "NMBAyuWCdB", "FrVwhRzCgW", "Dm7iyurmkq", "DUeEVL5ECO", "Cz4AxmqNZf", "C0wQWMIA04", "BqJbHOXm9T", "BMVz1AzGdx", "BAurOujFI4", "AboDcwq8Ho", "7iI7I1CGO5", "78PVlfF7xn", "5gBV9d7lhy", "57QFvQt2Od", "4e4XR0gHmu", "2aRaWpcLGx", "0mV6B0SWfK", "0euyUNfdBV" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731969497044, 1732499283661, 1732499187690, 1731968969217, 1731180117592, 1733183303522, 1732703750579, 1732219831041, 1732610652402, 1731969239574, 1730623096256, 1734645278091, 1731969395549, 1731969159551, 1737523384705, 1732727574271, 1731968755467, 1733183789967, 1732626911485, 1731969042103, 1732503111976, 1730693596464, 1732706246938, 1731968811101, 1732702570199, 1731969552307, 1733188374336, 1730690204494, 1732663897798, 1732610529526, 1732503292351, 1731968940471, 1732499225406, 1732536441163, 1732499256826 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_qEDV" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_SqCS" ], [ "ICLR.cc/2025/Conference/Submission215/Area_Chair_KLsL" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_BQvB" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_d5Lc" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_BQvB" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_BQvB" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_SqCS" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_d5Lc" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission215/Authors" ], [ "ICLR.cc/2025/Conference/Submission215/Reviewer_SqCS" ], [ "ICLR.cc/2025/Conference/Submission215/Authors" ] ], "structured_content_str": [ "{\"comment\": \"# Part (3/4)\\n\\n> Q8: Why not use $(V _i, V _j) \\\\in \\\\mathbb{S}$\\n\\nAccording to our Definition 2, $\\\\mathbb{S} = \\\\mathbb{S} _1 \\\\cup \\\\mathbb{S} _2 \\\\cup \\\\mathbb{S} _3$. Although the identifiable paris in $\\\\mathbb{S} _1 \\\\cup \\\\mathbb{S} _3$ are ordered, those in $\\\\mathbb{S} _2$ are not ordered, so it is not advisable to use the notation $(V_1, V_2) \\\\in \\\\mathbb{S}$. \\n\\nAlso, even the notation $(V _i, V _j) \\\\in \\\\mathbb{S} _1$ may also cause some problems. Specifically, in Theorem 3, we need to check whether an identifiable pair in $\\\\mathbb{S} _1$ and another in $\\\\mathbb{S} _2$ has a common element. Suppose there is $(V_i, V_j) \\\\in \\\\mathbb{S} _1$ and $\\\\\\\\{V' _i, V' _j\\\\\\\\} \\\\in \\\\mathbb{S}$, the intersection operation between $(V_i, V_j)$ and $\\\\\\\\{V' _i, V' _j\\\\\\\\}$ is not well-defined.\\n\\nBased on these considerations, we think it might be more advisable to use $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S}$. For $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S} _1$ or $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S} _1$, we explicitly indicate whether $V _i \\\\in \\\\mathrm{Pa}(V _j)$ or $V _j \\\\in \\\\mathrm{Pa}(V _i)$ when necessary. We remain open to better alternatives and would gladly incorporate them.\\n\\n> Q9: Purposes of definitions / theorems / algorithmic steps.\\n\\nWe have included more explanations to help readers grasp the purposes of definitions / theorems / algorithmic steps. Some examples are given as follows.\\n\\n- For $\\\\mathbf{V} _f, \\\\mathcal{H} _2$, the purpose of the definitions becomes readily apparent from the intuitive explanation. Specifically, while $\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _p$ consists of all identified variables, we define $\\\\mathbf{V} _f$ to represent all unidentified variables; while $\\\\mathcal{H} _1$ consists of all identified causal relations, we define $\\\\mathcal{H} _2$ to represent the graph consisting of all unidentified causal relations.\\n\\n- For Theorem 1, the purpose of the theorem becomes readily apparent from from its Remark. Specifically, Theorem 1 is used to provide a method for locating identifiable pairs from $\\\\mathbf{V} _c$ via statistical analysis.\\n\\n- For the algorithmic step of identifying identifiable pairs from $\\\\mathbf{V}_ c$, the purpose is clarified in \\u00a7 Locating Pure Children in Section 3.1. \\\"Ideally, we want to locate pure children in a single step, but this is impossible because of the existence of complex causal relations. Instead, we first locate identifiable pairs from $\\\\mathbf{V} _c$ and then locate pure children from these identifiable pairs.\\\"\\n\\n> Q10: What \\\"let $\\\\\\\\{V_ {i_ 1}, V_ {i_ 1}\\\\\\\\} \\\\subset \\\\mathrm{Ch}^ {\\\\mathcal{H}_ 1}(V_ i)$\\\" in Theorem 2 means\\n\\nThis just means $\\\\\\\\{V_ {i_ 1}, V_ {i_ 2}\\\\\\\\}$ are any two variables in $\\\\mathrm{Ch}^ {\\\\mathcal{H}_ 1}(V_ i)$. At the first iteration when $\\\\mathbf{V}_ c = \\\\mathbf{O}_ 0$, both $V_ {i_ 1}$ and $V_ {i_ 2}$ are variables in $\\\\mathbf{O} _1$. But after that, they might be variables in $\\\\mathbf{L}$ or $\\\\mathbf{O} _0$. \\n\\n> Q11: Whether $e' _1$, $e' _2$, ... 
in Intuition of Definition 4 refer to noises used to construct $\\\\mathbf{O} _1$\\n\\nIn Intuition of Definition 4 (definition of the quintuple constraint), each of $e _i, e _j, e' _1, e' _2,...$ does not refer to any specific instance. Instead, the equation $V _1 = \\\\lambda _1 e _i + \\\\gamma _1 e _j + e' _1, V _2 = \\\\lambda _2 e _i + \\\\gamma _2 e _j + e' _2, ...$ means that $V _1$ can be expressed as the sum of three random variables $\\\\lambda _1 e _i, \\\\gamma _1 e_j, e'_1$, $V _2$ can be expressed as the sum of three random variables $\\\\lambda _2 e _i, \\\\gamma _2 e_j, e'_2$, ..., where $e _i, e_j, e' _1, e' _2, ...$ satisfies some constraints, e.g., $e _i, e_j, e' _1, e' _2, ...$ are mutually independent. By the way, this Intuition is moved to Lemma 2 in Appendix C.1 in the revised manuscript.\\n\\nFrom another perspective, as stated in response to Q10, $V_ {i_ 1}$ and $V_ {i_ 2}$ might not be variables in $\\\\mathbf{O} _1$ after the first iteration. Suppose $e' _1$, $e' _2$, ... refer to noises used to construct $\\\\mathbf{O} _1$, then the quintuple constraint cannot be applied to the case where $\\\\\\\\{V _{i _1}, V _{i _2}\\\\\\\\} \\\\not \\\\subset \\\\mathbf{O} _1$, which leads to contradiction.\\n\\n> Q12: What $\\\\mathcal{S} _i \\\\in \\\\mathbb{S} _2$ in Theorem 3 means\\n\\nThis is equivalent to $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S} _2$. Following your advice, we replace $\\\\mathcal{S} _i \\\\in \\\\mathbb{S} _2$ with $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S} _2$ to maintain consistency.\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer SqCS:\\n\\nWe thank you again for your careful reading and assessment of our work. We have substantially improved clarification of our manuscript, e.g., we have provided an algorithm overview and more intuitive explanations for concepts and theorems. Moreover, we have taken our maximum effort to address your every concern.\\n\\nIt is really important to us that you could kindly read our rebuttal and provide further questions if there are any. Thank you so much and hope you have a good day.\\n\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer qEDV,\\n\\nWe really appreciate your efforts to help improve this paper. We have carefully addressed your concerns. It is really important to us that you could kindly read our rebuttal and provide further questions if there are any. \\n\\nThank you so much and hope you have a good day.\\n\\nBest,\\n\\nAuthors.\"}", "{\"comment\": \"# Part (2/2)\\n\\n> Q4: More explanations or algorithm overview\\n\\nWe totally agree that more explanations are helpful. We have added illustrative examples for assumptions and expanded Remarks of theoretical results. 
For example, \\n- We have provided an example for Assumption 1 \\\"The graph in Figure 2(a) satisfies this assumption, where $\\\\mathrm{PCh}^{\\\\mathcal{G}_0}(L_1) = \\\\\\\\{L_3, L_4\\\\\\\\}$ and $\\\\mathrm{Ne}^{\\\\mathcal{G}_0}(L_1) = \\\\\\\\{L_2, L_3, L_4, O_2, O_6\\\\\\\\}$.\\\"\\n- We have added the content \\\"This theorem provides a method for locating identifiable pairs from $\\\\mathbf{V}_ c$ via statistical analysis\\\" to Remark of Theorem 1, which helps readers quickly grasp the practical implication of this theorem.\\n- We have added the content \\\"This theorem provides a method to divide $\\\\mathbb{S}$ into $\\\\mathbb{S}_ 1, \\\\mathbb{S}_ 2, \\\\mathbb{S}_ 3$ via statistical analysis, that is, we can locate pure children from identifiable pairs.\\\" to Remark of Theorem 2, which explicitly specifies what task this theorem serves.\\n\\nAlso, we have added an overview version of our algorithm into the main text (Algorithms 1 and 2), where we have explicitly linked each step to its corresponding theorem. The detailed version (Algorithms 3 and 4) is deferred to Appendix.\\n\\n> Q5: Identification guarantees\\n\\nWe have added Section 3.3 and 4.3 to clearly characterize what results our algorithm can deliver under different sets of assumptions. \\n- Theorem 7 in Section 3.3 states that \\\"Suppose the observed variables are generated by a LiNGAM with latent variables satisfying the rank-faithfulness assumption and Assumption 1, in the limit of infinite data, our algorithm correctly identifies the underlying complete causal structure.\\\"\\n- Theorem 13 in Section 4.3 states that \\\"Suppose the observed variables are generated by a LiNGAM with latent variables satisfying the rank-faithfulness assumption and Assumption 2, if Assumption 1 is invalid, in the limit of infinite data, our algorithm raises an error.\\\"\"}", "{\"summary\": \"This paper is concerned with learning the structure of a graphical model that may contain latent variables. In contrast to some prior art which learns an equivalence class but does not explicitly represent latent variables, this procedure aims to both identify and model the relationships with latent variables. The authors employ an additive noise assumption, and provide a simple and intuitive algorithm for discovering the latent variables via tree construction. Under assumptions on the generative structure and some restrictions on the purity of the children, theoretical results are provided which give nice recovery guarantees and are in general quite complete. A small experimental evaluation is provided as well.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Overall, I think this paper approaches a challenging problem and provides a nice and elegant solution. The authors do a nice job of making the necessary assumptions clear and carefully presenting their results. Though I have some reservations (see below), overall I consider the technical quality of this work to be quite good. Further, the problem it addresses is loosening the structural assumptions required for provable recovery. This is especially important, in my view, since these structural assumptions are unverifiable and can have an arbitrarily bad impact on the learnt structure under violations of those assumptions.\", \"weaknesses\": \"I found the writing in the introduction to be very hard to parse, in particular the task definition. 
The authors are focused on the problem of learning the structure of graph including latent variables under additive noise constraints. The description of the work in the introduction makes these points hard to follow, it also isn't immediately clear from the initial description the delineation between this and FCI and learning of other structures such as ADMGs, which allow for the presence of latent variables without explicitly representing them. It would be very useful to more clearly describe the problem.\\n\\nMost of the paper reads as a step by step walk through of the proof techniques. While this is interesting and useful, it limits the potential audience for the paper, and in many areas, obscures some of the underlying machinery. In my view, much of this should be moved to the supplement. For example, there's no central place in the text which walks through the algorithm as a whole, rather it is strewn out across the paper in analysis steps. \\n\\nMany of the proofs in the supplement lack the necessary text to provide sufficient context and intuition for the result. For example, it was not immediately clear to me why proposition 1 in App.C.2.1 implied the statement given in the remark of the main text until working it through. The results should be presented with sufficient detail such that they are reasonably easy to ingest and contextualize within the context of the paper. \\n\\nExperimental evidence is small and limited to very simple settings.\", \"questions\": \"Please see weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": \"Dear reviewers, AC, and SAC:\\n\\nWe sincerely thank all reviewers for taking their valuable time to read our paper, provide constructive suggestions, and actively engage in discussions. We are also very grateful to AC and SAC for organizing high-quality review process. In the last couple of weeks, we have tried our best to address each concern of all reviewers and integrated these changes into our revised manuscript. The main revisions are summarized as follows.\\n\\n1. To help readers better grasp the key concepts, we have enriched the manuscript with clear motivations, intuitive explanations, and illustrative examples. For example, we have added clear motivations of $\\\\mathbf{O}_ 1$ in footnote 1, provided intuitive explanations for $\\\\mathbf{V}_ f$ and $\\\\mathcal{H}_ 2$ immediately after their definitions in line 183~185, and included illustrative examples for $\\\\mathbf{V}_ f$ and $\\\\mathcal{H}_ 2$ in Figures 3, 4, 5.\\n\\n2. To help readers better comprehend the theoretical results, we have enhanced both the main theorems (e.g., Theorems 1, 2) and supporting results in the Appendix (e.g., Lemma 1, Corollary 1, Proposition 1) with detailed interpretations and discussions.\\n\\n3. To help readers better follow our algorithm, we have provided an algorithm overview in the main text (Algorithms 1, 2), a detailed pseudo-code in Appendix (Algorithms 3, 4), and released our source code through an anonymous link (https://anonymous.4open.science/r/Fveds1C055gvGWsdvs345). \\n\\nWe believe these revisions have substantially improved our paper, and we will continue to refine our paper in the future. Also, we would like to highlight the main contributions of our paper below.\\n\\n1. 
We investigate an understudied setting where latent and observed variables are interconnected through complex causal relations.\\n\\n2. Under a pure children assumption, we develop a series of theoretical results, leading to an efficient algorithm which is the first one capable of handling the setting with both latent variables and complex relations within polynomial time.\\n\\n3. We prove that our algorithm can raise an error rather than return an incorrect result when the pure children assumption is invalid, ensuring trustworthiness. To the best of our knowledge, no prior work on causal discovery with latent variables has provided such rigorous trustworthiness guarantees.\\n\\nWe sincerely hope our work could contribute to the community and advance the development of causal discovery. Thanks again for your efforts.\\n\\nSincerely,\\\\\\nSubmission 215 Authors.\"}", "{\"comment\": \"Dear Reviewer SqCS,\\n\\nWe sincerely thank you for your patience and valuable suggestions, which have significantly improved the quality of our manuscript. We are pleased to have addressed your concerns.\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"title\": \"Further Discussion\", \"comment\": \"We sincerely thank all reviewers for their insightful and valuable feedback on our manuscript. Their constructive comments have helped us significantly improve the quality of our work. We have carefully addressed each concern raised and made substantial improvements to the manuscript. The major revisions are summarized below.\\n\\n1. We have clearly described our task in Introduction. As an example, we discuss this improvement in detail in our response to Q1 of Reviewer qEDV. This upfront clarity prevents potential confusion and allows readers to better follow our technical approach and appreciate our results in the proper context.\\n\\n2. We have provided detailed motivations, intuitive explanations, and illustrative examples for key concepts. As an example, we discuss this improvement in detail in our response to Q1 of Reviewer BQvB. These additions enhance the paper's accessibility and help readers better understand the underlying principles.\\n\\n3. We have provided more detailed interpretations and discussions for both theoretical results in the main text and those in Appendix. As examples, we discuss this improvement in detail in our response to Q3 of Reviewer qEDV and Q4 of Reviewer BQvB. This comprehensive revision helps readers better understand the theoretical contributions of our work.\\n\\n4. We have added an overview version of our proposed algorithm in the main text and also a detailed version in Appendix. As examples, we discuss this improvement in detail in our response to Q1 and Q2 of Reviewer SqCS. This two-tier structure allows readers to grasp the core idea while having access to the full technical depth of our work.\\n\\n5. We have added Section 3.3 and 4.3 to clearly characterize what results our algorithm can deliver under different sets of assumptions. 
As an example, we discuss this improvement in detail in our response to Q5 of Reviewer BQvB.\\n\\nWe welcome any additional feedback that would further enhance our manuscript and remain committed to addressing any remaining concerns to ensure the highest quality of our research.\\n\\nThank you again for your time and expertise in reviewing our work.\"}", "{\"comment\": \"# Part (2/2)\\n\\n> Q17: quintuple constraint\\n\\nFirst, we agree with you that the quintuple constraint can be considered as a special case of GIN condition, except for one distinction: the former avoids to test independence between the linear combination of a set of variables and all of another set of variables.\\n\\nSecond, we choose quintuple constraint not because it can significantly improve algorithmic efficiency, but because it makes theory more concise. Specifically, with the quintuple constraint, we eliminate the need to prove whether an additional independence relationship holds or not, which simplifies the theoretical analysis.\\n\\nThird, as mentioned in response to Q16, there exists a gap between our current identifiability result and the theoretical maximum identifiability. Given the flexibility of the GIN condition, it has potential to uncover more information when properly applied. We believe it could lead to an algorithm that substantially narrows this gap while maintaining algorithmic efficiency. This is a promising direction that is beyond the scope of our present work. We will explore this direction further in our future research.\\n\\n> Q18: Executable code\", \"we_have_further_refined_our_pseudo_codes_in_appendix_and_also_provide_an_executable_code_via_this_anonymous_github_link\": \"https://anonymous.4open.science/r/Fveds1C055gvGWsdvs345 \\\\\\n(Updated: We find that the original link is not very stable, if it does not work for you, please use this backup link: https://anonymous.4open.science/r/Fveds1C055gvGWsdvs751)\\n\\nThanks again for your active participation in discussion.\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"comment\": \"# Part (1/4)\\n\\nThank you very much for your careful reading and assessment of our work as well as the suggestions. We believe that there are several misunderstandings of our approach. To address this, we have improved the presentation of our manuscript substantially (where major revisions are marked in purple) and provide comprehensive clarification below, which we hope will resolve these misunderstandings. We slightly reordered your questions to match the sequence of concepts or theorems as they appear in the paper.\\n\\n> Q1: Algorithm overview for input, output, assumptions, etc.\\n\\nWe totally agree that an algorithm overview makes it easier for readers to grasp the main workflow of our approach. We have added an overview version of our algorithm in the main text (Algorithms 1 and 2). The input of Algorithm 1 is observed variables $\\\\mathbf{O} _0$ and $\\\\mathbf{O} _1$. Algorithm 1 produces intermediate results that serves as the input of Algorithm 2, and Algorithm 2 outputs a complete causal structure (which is a directed acyclic graph that explicitly represents both observed and latent variables along with their causal relations) or raises an error finally.\\n\\nAs a further supplement, we have added Section 3.3 and 4.3 to clearly characterize what results our algorithm can deliver under different sets of assumptions. 
Specifically,\\n- Theorem 7 in Section 3.3 states that \\\"Suppose the observed variables are generated by a linear latent non-Gaussian model satisfying the rank-faithfulness assumption and Assumption 1, in the limit of infinite data, our algorithm correctly identifies the underlying complete causal structure.\\\"\\n- Theorem 13 in Section 4.3 states that \\\"Suppose the observed variables are generated by a linear latent non-Gaussian model satisfying the rank-faithfulness assumption and Assumption 2, if Assumption 1 is invalid, in the limit of infinite data, our algorithm raises an error.\\\"\\n\\n> Q2: Detailed pseudo-code\\n\\nThanks for your valuable comment. We have refined the detailed version of our algorithm (Algorithms 3 and 4) in Appendix, where each step is precisely described in math/set language, which eliminates ambiguity and ensures precise interpretation of each algorithmic step. Also, we use assert lines for intermediate results, which helps readers clearly understand what properties are guaranteed at each critical step of the algorithm.\\n\\n> Q3: Why introduce $\\\\mathbf{O} _1$\\n\\nIt is correct that $\\\\mathbf{O} _1$ gives no more information than what is already encoded in $\\\\mathbf{O} _0$. Instead, the purpose of introducing $\\\\mathbf{O} _1$ is to streamline technical details in our paper and keep the presentation focused on the core ideas. The detailed explanation is as follows.\\n\\nIn this paper, we consider the scenario with both latent and observed variables. As stated in footnote 1 in the revised manuscript, \\\"While the values of observed variables are directly accessible for causal discovery, the causal relations of latent variables can only be inferred indirectly, e.g., through their pure children. By introducing $\\\\mathbf{O}_1$ to create pure children for each observed variable, we can handle both types of variables through analyzing their pure children, thereby eliminating the need to repeatedly distinguish between treatments of latent and observed variables and keeping the core methodology clear.\\\"\\n\\n> Q4: Whether the pure children or neighbors in Assumption 1 must be observed variables\\n\\nActually, the assumption allows both pure children and neighbors to be latent variables. In Example of Assumption 1, we have clarified that \\\"the graph in Figure 2(a) satisfies this assumption, where $\\\\mathrm{PCh}^{\\\\mathcal{G}_0}(L_1) = \\\\\\\\{L_3, L_4\\\\\\\\}$ and $\\\\mathrm{Ne}^{\\\\mathcal{G}_0}(L_1) = \\\\\\\\{L_2, L_3, L_4, O_2, O_6\\\\\\\\}$.\\\"\\n\\nWe would like to highlight that even if the latent variable $L _1$ in Figure 2(a) only has two latent pure children $L _3, L _4$, we can still identify it. This is clarified in \\u00a7 Repeating This Process in Section 3.1. Specifically, when we need to do an independence/correlation test involving any latent variable, we can directly replace it with any of its observed descendants in $\\\\mathcal{H} _1$.\"}", "{\"summary\": \"This paper considers the problem of causal discovery with latent variables, in a more general setting where the causal relations between observed variables are allowed, with some other structural assumptions (e.g., pure children). Moreover, those assumptions are claimed to be testable, i.e., when the assumptions are not satisfied, the algorithm can raise an error.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
The idea to testify the graphical assumptions and give an error signal instead of giving possibly incorrect result is very significant for causal discovery with latent variables.\\n\\n2. The generalization to allow edges in between observed variables is good.\", \"weaknesses\": \"Overall, the current draft's presentation **requires substantial improvement**. It's cluttered with theorem after theorem, while the core algorithm is scattered throughout and described with vague language. This makes it easy for readers (at least me) to lose track, unclear on the purpose of each theoretical statement, and difficult to assess the correctness of all the assertions. I totally understand that the work involves dense technical material and possibly complex algorithms, but this shouldn't excuse its current lack of clarity.\\n\\nHere are my specific comments/questions:\\n\\n1. In assumption 1., I guess what the authors meant is pure children or neighbors in the observed variables? Though the observed variables is not within the scope for definition 1.\\n\\n2. For definition 2 (Identifiable pair, IP),\\n - The definition heavily relies on H2, and the definition for H2 heavily relies on Vf. But so far (when definition 2 is presented) we only have \\\"Vf is unknown\\\", \\\"H2 is unknown\\\", which makes the understanding to definition 2 at that point very difficult, if not impossible.\\n - Please explain every definition/theorem/algorithmic step's purpose before directly delving into the technical sides (same applies elsewhere). E.g., here what is the general purpose of H2 (instead of the technical definition on \\\"induced subgraph of G over Vf and VC\\\")? What does Vf's \\\"launched in the future\\\" mean? It's until pg.6 did I notice that the authors want to use Vf for latent variables..\\n - Since VC and Vf can be updated, does the \\\"identifiable pairs\\\" changes in each epoch, based on different VC and Vf? Intuitively this shouldn't happen, as it seems describing something regarding the true structure. Then please show that there's no such dependence.\\n - Why is it named \\\"identifiable pairs\\\"? Theorem 1 shows a way to sufficiently identify them from data, but are they necessarily all the pure children information that can be identified (e.g., using Adam's formulation of equivalence class)?\\n - Instead of {V1, V2} \\u2208 S, please use e.g., (V1, V2) to indicate that they are ordered.\\n\\n3. In definition 4's \\\"Intuition\\\", what are those \\\"e1', e2', ...\\\"? Are they referring to the noise added to get Oi' in O1 variables?\\n\\n4. In theorem 2, what does \\\"let {Vi1, Vi2}\\u2282ChH1(Vi)\\\" mean? I understand that at the initialization epoch, ChH1(Vi) is just the corresponding two added O1 variables. But after that, does it mean \\\"for any two variables in ChH1(Vi), the followings hold\\\"?\\n\\n5. In theorem 3, what does \\\"\\u2200Si \\u2208 S2\\\" mean? Before this all the notations for enumerating items in S is in the form of {Vi, Vj} \\u2208 S. If the authors intend to express a same thing, please be consistent throughout.\\n\\nOverall, for all the clarity/presentation issues that prevents me to further evaluate the work, I would appreciate it if the authors could provide:\\n - A clear overview of the algorithm: What is the input? What are all the assumptions (except for the structural ones, e.g., at least there should be some faithfulness)? What is the output (to which equivalence class does it identify -- e.g., does it achieve Adam's? which parts are assumed to be correct? 
when (if and only if) will the algorithm give an error signal)?\\n - A detailed pseudocode for the algorithm. For each step, instead of vague language (like those in the current \\\"Updating the Active Set\\\" paragraph), please use formal math/set language. For all the claims on the intermediate results (e.g., those reflected by condition 1, theorems 1, 2), please use \\\"assert\\\" lines.\\n\\n---\", \"some_other_methodological_questions\": \"6. Why do the authors need to add new simulated variables O1 into the system? Intuitively they couldn't give any more information than those already encoded in O0. In other words, all information found by O1 should be able to be found by O0. Could the authors please give an example where such intuition is incorrect? Otherwise the O1 seems only adding more unnecessary complexities.\\n\\n7. What is the relationship between the two main constraints (Pseudo-residual and Quintuple constraint) used in this paper and the GIN condition? If they are special cases for GIN condition, why not directly use the GIN condition? Or can they identify something beyond GIN?\\n\\n8. Regarding the at-least-2-pure-children assumption, can this algorithm be applied to violation cases e.g., measurement error?\\n\\n9. Regarding the testability of the assumption, could the authors please give a brief review and compare to how other (linear non-Gaussian acyclic models with latents) work testify their assumptions?\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper provides novel ideas to learn the structure of complex linear latent variable models, under the non-Gaussianity assumption. One key point is that it can detect when the necessary sparsity conditions (a sufficient number of \\\"pure children\\\") fail and flag those accordingly. Reviewers were unanimous that the contributions of the paper are of interest, although the density of presentation and ideas needs further work. I believe these are solvable.\\n\\nOne key point though is that the authors seem to misunderstand some of the results in the literature, including Silva et al. (2006). The contribution states that\\n\\n\\\"We prove trustworthiness of our algorithm, meaning that when the pure children assumption\\nis invalid, it can raise an error rather than return an incorrect result, thus preventing potential\\ndamage to downstream tasks. To the best of our knowledge, there is a lack of similar results in\\nthe literature of causal discovery with latent variables\\\"\\n\\nThe latter statement, about the lack of results on validity under the failure of pure children assumptions, is not true. As a matter of fact, Silva et al. does not assume the existence of pure children. Quite the opposite, the main idea of that paper exploits the fact that *if* pure children exist in some subset of the model, the proposed algorithm exploits them and returns a corresponding submodel of the causal generative process. It is typically the case that only a strict subset of the true model is returned, and it is often the case that an empty solution is returned (analogous to the PC algorithm not returning any oriented edges) simply because no \\\"pure children\\\" structure exists (arguably a more standard way of reporting causal discovery solutions than raising an \\\"error\\\"). 
This confusion may be common in the literature, and it would be useful for this paper to clarify it.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers engaged productively with the authors. It is clear that the paper is dense, as witnessed by the dense set of replies provided by the authors. It is important that the authors recognize that the effort that they put in the replies should be carried out to the paper if this is to be published in ICLR.\"}", "{\"comment\": \"# Part (2/4)\\n\\n> Q5: Definition of $\\\\mathbf{V} _f, \\\\mathcal{H} _2$\\n\\nIn the revised manuscript, we have refined the definition, supplemented it with intuitive explanations, and provided additional illustrative examples to enhance comprehension.\\n\\n- Before defining $\\\\mathbf{V} _f, \\\\mathcal{H} _2$, at the beginning of \\u00a7 Initialization in Section 3.1, we first define $\\\\mathbf{V}_c, \\\\mathbf{V}_p, \\\\mathcal{H}_1$ as two sets of variables and a graph with specific initialization and update rules. Immediately following this, we offer an intuitive explanation. \\\"Intuitively, $\\\\mathbf{V} _c$ consists of identified variables whose causal relations (i.e., both incoming and outgoing edges of the variable in the underlying causal graph) are not fully identified, $\\\\mathbf{V} _p$ consists of identified variables whose causal relations are fully identified, and $\\\\mathcal{H} _1$ consists of all identified causal relations. Considering the initial case (when $\\\\mathbf{V} _c = \\\\mathbf{O} _0$, $\\\\mathbf{V} _p = \\\\mathbf{O} _1$, and $\\\\mathcal{H} _1$ consists of edges from $\\\\mathbf{O} _0$ to $\\\\mathbf{O} _1$), such intuitions become particularly apparent.\\\" Moreover, we include more illustrative figures (left subfigures of Figures 3, 4, 5) to display $\\\\mathbf{V} _c, \\\\mathbf{V} _p, \\\\mathcal{H} _1$ at different iteration during stage 1.\\n\\n- After defining $\\\\mathbf{V}_c, \\\\mathbf{V}_p, \\\\mathcal{H}_1$, we define $\\\\mathbf{V} _f$ as $\\\\mathbf{V} \\\\backslash (\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _p)$ and $\\\\mathcal{H} _2$ as the induced subgraph of $\\\\mathcal{G}$ over $\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _f$. Immediately following this, we offer an intuitive explanation. \\\"Intuitively, while $\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _p$ consists of all identified variables, $\\\\mathbf{V} _f$ consists of all unidentified variables. While $\\\\mathcal{H} _1$ consists of all identified causal relations, $\\\\mathcal{H} _2$ consists of all unidentified causal relations. Considering the initial case (Initially we have identified no latent variable and no causal relation in $\\\\mathcal{G} _0$, when $\\\\mathbf{V} _f = \\\\mathbf{L}$ and $\\\\mathcal{H} _2 = \\\\mathcal{G} _0$), such intuitions become particularly apparent.\\\" Moreover, we include more illustrative figures (right subfigures of Figures 3, 4, 5) to display $\\\\mathbf{V} _f, \\\\mathcal{H} _2$ at different iteration during stage 1.\\n\\n> Q6: (1) Why the name \\\"identifiable pairs\\\" (2) Whether all pure children information can be identified\\n\\n(1) The definition of identifiable pairs relies on $\\\\mathcal{H} _2$. Although $\\\\mathcal{H} _2$ consists of unidentified causal relations, identifiable pairs can still be located from $\\\\mathbf{V}_c$ via statistical analysis (Theorem 1), this is what \\\"\\\"identifiable\\\" means. 
This explanation has been added to Remark of Definition 2.\\n\\n(2) Although not all pure children information can be identified from identifiable pairs, this is not a concern. Please note the goal of stage 1 is to identify all latent variables (rather than to identify all pure children information) and Theorem 4 guarantees that we can identify all latent variables at the end of stage 1. If some pure children information are not identified in stage 1, it will be identified in stage 2. For example, consider a simple case where there are no latent variable and only two observed variables $O _1, O _2$ where $O _1 \\\\to O _2$. According to \\u00a7 Initialization in Section 3.1, we initialize $\\\\mathbf{V} _c$ as $\\\\\\\\{ O _1, O _2 \\\\\\\\}$. Because there is no identifiable pair (Particularly, $\\\\\\\\{ O _1, O _2 \\\\\\\\} \\\\notin \\\\mathbb{S} _1$ because $\\\\mathrm{Ne} ^{\\\\mathcal{H} _2}(O _1) \\\\backslash \\\\\\\\{O _2\\\\\\\\} = \\\\emptyset$), we cannot determine that $O _2$ is $O _1$'s pure child in stage 1. Instead, we will discover $O _1 \\\\to O _2$ in stage 2.\\n\\n\\n> Q7: Whether identifiable pairs changes at each iteration\\n\\nActually, identifiable pairs changes at each iteration. Any identifiable pair is composed of two variables in $\\\\mathbf{V} _c$ according to Definition 2, as $\\\\mathbf{V} _c$ changes at each iteration, identifiable pairs changes at each iteration naturally. We provide two examples as follows.\\n\\n- Suppose at some iteration, $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S} _2$, then according to the update rules, both $V _i$ and $V _j$ will be moved into $\\\\mathbf{V} _p$, so $\\\\\\\\{V _i, V _j\\\\\\\\}$ will not be an identifiable pair at the next iteration.\\n\\n- Suppose at some iteration, $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\in \\\\mathbb{S} _2$. It is entirely possible that at the previous iteration, $\\\\\\\\{V _i, V _j\\\\\\\\} \\\\subset \\\\mathbf{V} _f$. Clearly, $\\\\\\\\{V _i, V _j\\\\\\\\}$ was not an identifiable pair at the previous iteration.\"}", "{\"comment\": \"# Part (2/2)\\n\\n> Q3: Intuition in obtaining efficiency\\n\\nIn this paper, we claim our algorithm is efficient because the only existing algorithm capable of handling complex causal relations, PO-LiNGAM proposed by Jin et al. 2024, has exponential time complexity with the number of variables while ours has only cubic time complexity. We intuitively explain why our algorithm is more efficient than theirs in the following. This part can also be found in line 514~520 in our paper.\\n\\nAs mentioned in Introduction, PO-LiNGAM alternates between inferring causal relations and identifying latent variables from leaves to roots, whereas ours first identifies latent variables from leaves to roots and then infers causal relations from roots to leaves. The efficiency gap arises from distinct approaches for inferring causal relations. While we have provided an illustrative example in our paper, here we present a more typical case for reference.\\n\\nConsider a causal graph $\\\\mathcal{G} _0$ with latent variables $\\\\mathbf{L} = \\\\emptyset$ and observed variables $\\\\mathbf{O} _0 = \\\\\\\\{ O _1,..., O _n \\\\\\\\}$, where $O _1$ and $O _n$ are respectively the common parent and common child of $O _2, ..., O _{n-1}$, and there is no causal relation among $O _2, ..., O _{n-1}$. \\n- PO-LiNGAM first identifies $O _n$ as a leaf node by finding a subset $\\\\mathbf{P} \\\\subset \\\\mathbf{O} _0 \\\\backslash \\\\\\\\{ O_n \\\\\\\\}$ s.t. 
a particular linear combination of $\\\\mathbf{P} \\\\cup \\\\\\\\{O_n\\\\\\\\}$ is independent of $\\\\mathbf{O} _0 \\\\backslash \\\\\\\\{ O_n \\\\\\\\}$, where $\\\\mathbf{P}$ is just the parents of $O _n$. Clearly, PO-LiNGAM needs to traverse the power set of $\\\\mathbf{O} _0 \\\\backslash \\\\\\\\{O _n\\\\\\\\}$.\\n- Assigning two surrogates $X _{2i-1}$ and $X _{2i}$ to each observed variable $O _i \\\\in \\\\mathbf{O} _0$, our algorithm first identifies $O _1$ as a root node because for any $O _i \\\\in \\\\mathbf{O} _0 \\\\backslash \\\\\\\\{ O_1 \\\\\\\\}$, $\\\\mathrm{R}(X _{2i-1}, X _1 | X _2) \\\\perp X _2$. Clearly, our algorithm only needs to traverse $\\\\mathbf{O} _0 \\\\backslash \\\\\\\\{O _1\\\\\\\\}$ itself.\\n\\nFurthermore, we would like to explain why PO-LiNGAM cannot infer causal relations from roots to leaves while we can. PO-LiNGAM alternates between inferring causal relations and identifying latent variables, that is, when it infers causal relations, there is no guarantee that the root variable has been identified, so it cannot infer causal relations from roots to leaves. In contrast, Theorem 4 guarantees that our algorithm identifies all latent variables in stage 1, so when inferring causal relations in stage 2, it can do this from roots to leaves.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer BQvB,\\n\\nThanks for your interest in our work. We are pleased to provide further clarification.\\n\\n> Q9: Regarding $\\\\mathbf{O} _1$\\n\\nFirst, you are right that $\\\\mathbf{O} _1$ are always needed at the first iteration of Algorithm 3.\\n\\nSecond, for any $O_ j \\\\in \\\\mathbf{O}_ 0$, denote its two created children by $\\\\\\\\{ O'_ j, O''_ j\\\\\\\\} \\\\subset \\\\mathbf{O}_ 1$, we do imply that if we let both $O'_ j$ and $O''_ j$ be identical to $O_ j$, lines 13-16 and 19-25 in Algorithm 3 are not affected. Specifically, the validity of lines 13-16 relies on Theorem 2(1) while the validity of lines 19-25 relies on Theorem 2(3). In Page 22, we have added Remark after the proof of Theorem 2, which explicitly demonstrates that the proof remains valid if $V_ {i _ 1}$ and $V_ {i_ 2}$ are both identical to $V_ i$. It is clear that when $V_ i$ refers to $O_ j \\\\in \\\\mathbf{O}_ 0$, $\\\\\\\\{ V_ {i _ 1}, V_ {i _ 2} \\\\\\\\}$ exactly refers to $\\\\\\\\{ O'_ j, O''_ j\\\\\\\\}$. Therefore, we can actually create $O'_j$ and $O''_j$ by making two copies of $O_j$.\\n\\n> Q10: Rank-faithfulness assumption\\n\\nThanks for this suggestion. In Page 16, we have added Remark after Intuition of the rank faithfulness, which explicitly demonstrates that we only utilize these two properties rather than work directly with the rank faithfulness assumption. These two properties can also be derived from the bottleneck faithfulness assumption, so we can replace the rank faithfulness assumption with the bottleneck faithfulness assumption in our work.\\n\\nThanks again for your careful reading.\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"comment\": \"# Part (1/2)\\n\\nThank you for your time and effort reviewing our paper. We appreciate the thoughtful feedback. 
We revise our manuscript (where major revisions are marked in purple) and address your concerns point by point in the following.\\n\\n> Q1: Task definition\\n\\n\\nThanks for your valuable suggestion, we agree that clearly defining our task and highlighting its distinctions from prior work in Introduction helps readers immediately grasp our paper's unique positioning and contributions. This upfront clarity prevents potential confusion and allows readers to better follow our technical approach and appreciate our results in the proper context.\\n\\nFirst, we have added the task definition into Introduction. \\\"Given observational data generated by a linear non-Gaussian acyclic model (LiNGAM) with latent variables, we aim to correctly identify the underlying complete causal structure, which is a directed acyclic graph (DAG) that explicitly represents both observed and latent variables along with their causal relations, in an important and challenging setting where latent and observed variables are interconnected through complex causal relations, where ``complex\\\" means that none of the above three assumptions (measurement, purity, and non-triangle assumptions) is employed.\\\"\\n\\nSecond, we have also clearly distinguished our work from prior studies on causal discovery with latent variables.\\n- Although some previous works such as FCI allows the presence of latent variables, their results such as partial ancestral graphs (PAGs) and acyclic directed mixed graphs (ADMGs) are not informative of the number of latent variables and their causal relations. By utilizing linear models, some recent works can represent latent variables and their causal relations explicitly in their results, which is of significant importance in some fields such as psychology. For instance, responses to psychometric questionnaires (observed variables) are usually thought of as noisy views of various traits (latent variables), and the researcher is predominately interested in the causal relations between the latter. In this regard, our work aligns more closely with the latter works.\\n\\n- To identify latent variables and infer their causal relations, recent works often assume the absence of certain special causal relations to ensure a degree of simplicity, including the purity (there is no edge between observed variables), measurement (no observed variable is a parent of any latent variable), or no-triangle assumptions (there exists no three mutually adjacent variables). Unfortunately, these assumptions are invalid in many real-world scenarios, an example in business contexts where none of these three assumptions hold is provided in our paper. Our work can handle the case with complex causal relations, where none of these three assumptions is employed.\\n\\n> Q2: Algorithm overview\\n\\nWe totally agree that adding an algorithm overview in the main text can help readers grasp the big picture. We have moved many technical details such as the proof sketch from the main text to Appendix. With the space saved, we have added an overview version of our algorithm into the main text (Algorithms 1 and 2), where we have explicitly linked each step to its corresponding theorem. The detailed version (Algorithms 3 and 4) is deferred to Appendix.\"}", "{\"comment\": \"Dear Reviewer qEDV:\\n\\nThanks again for your valuable time and effort in reviewing our paper. As the discussion period approaches its end, we would like to confirm whether our rebuttal has adequately addressed all your concerns. 
If you have any remaining question or require further clarification, we are glad to provide explanations. Also, we would greatly appreciate any suggestions you might have for further improving the quality and presentation of our paper.\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"comment\": \"Sorry for the late follow-up. I thank the authors for the response, and would like to increase my score from 5 to 6. Three minor questions:\\n1. I noticed that the authors introduced $V'_i$ and $V''_i$ in Theorems 2 & 3 of the updated draft. My understanding is that these notations do not share the same meaning as $O'_i$ and $O''_i$ (i.e., $\\\\mathbf{O}_1$). If this interpretation is correct, I would suggest that the authors either use different notations or clarify the distinction more explicitly.\\n2. It seems that samples of variables in $\\\\mathbf{O}_1$ need to be synthetically generated in practice, as they are used in the Algorithm. Could the authors provide additional details on how the noise distributions and noise variances are selected?\\n3. Is rank-faithfulness assumption equivalent to the bottleneck faithfulness assumption in (Adams et al., 2021)?\"}", "{\"comment\": \"# Part (1/2)\\n\\nWe are grateful for your valuable comments. We revise our manuscript (where major revisions are marked in purple) and address your concerns in the following response.\\n\\n> Q1: The purity assumption.\\n\\nAs explained in the first paragraph of Introduction, the purity assumption means that there is no edge between observed variables. The proof of **none** of our theoretical results utilizes this assumption. $\\\\mathcal{G} _0$ shown as Figure 2(a) violates the purity assumption, but our algorithm can correctly identify it.\\n\\nWe would like to highlight the difference between the purity assumption and the pure children assumption. While the purity assumption is not employed throughout, our identification results in Section 3 relies on the pure children assumption, which is exactly Assumption 1 in our paper. Following previous works[1, 2, 3], we make the pure children assumption such that we can identify latent variables and infers their causal relations through their pure children. \\n\\nAlso, we investigates the scenarios where the pure children assumption is invalid in Section 4. We prove that our algorithm can raise an error rather than return a wrong result in such scenarios, ensuring trustworthiness. This trustworthy mechanism marks both a novel and crucial contribution to the field, as no previous work in causal discovery with latent variables has demonstrated such capability. The ability to systematically identify invalid assumptions, rather than silently producing potentially misleading results, represents a significant step forward in ensuring the reliability and validity of causal discovery methods\\n\\n**Reference**\\n\\n[1] Ruichu Cai, et al. \\\"Triad constraints for learning causal structure of latent variables.\\\" NeurIPS 2019.\\n\\n[2] Feng Xie et al. \\\"Generalized independent noise condition for estimating latent variable causal graphs.\\\" NeurIPS 2020.\\n\\n[3] Songyao Jin et al. \\\"Structural estimation of partially observed linear non-gaussian acyclic model: A practical approach with identifiability.\\\" ICLR 2024.\\n\\n\\n> Q2: Identification results under the impurity setting.\\n\\nWe have added Section 3.3 and 4.3 to clearly characterize what results our algorithm can deliver under different sets of assumptions. 
\\n- Theorem 7 in Section 3.3 states that \\\"Suppose the observed variables are generated by a LiNGAM with latent variables satisfying the rank-faithfulness assumption and Assumption 1, in the limit of infinite data, our algorithm correctly identifies the underlying complete causal structure.\\\"\\n- Theorem 13 in Section 4.3 states that \\\"Suppose the observed variables are generated by a LiNGAM with latent variables satisfying the rank-faithfulness assumption and Assumption 2, if Assumption 1 is invalid, in the limit of infinite data, our algorithm raises an error.\\\"\"}", "{\"comment\": \"Thanks for the response, which addresses my concerns. I raise my score accordingly.\"}", "{\"summary\": \"The authors consider the problem of causal discovery in latent variable LiNGAM models under less restrictive graphical assumptions (described in Assumption 1). Specifically, they allow for some observed variables to be connected to each other and some latent variables to be children of observed variables. They propose a two-stage recovery algorithm that primarily relies on the properties of pseudo-residuals in linear non-Gaussian models. Finally, they consider the case where the graphical assumptions are violated and evaluate the performance of the algorithm through simulations.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors provide a detailed theoretical analysis of the proposed algorithm, along with illustrative examples. They also include detailed comparisons of their assumptions and conditions with those of existing work.\", \"weaknesses\": \"The main weakness lies in the presentation of the results. Some definitions and concepts are not sufficiently motivated or explained (see Q1 and Q2 below), making certain technical details difficult to understand. Additionally, it would be helpful to include more detailed descriptions of the assumptions and theorems. For instance, in Algorithm 1, Theorem 2 appears to be used solely for partitioning $\\\\mathbb{S}$ into $\\\\mathbb{S}_1$, $\\\\mathbb{S}_2$ and $\\\\mathbb{S}_3$, rather than adding new IPs to $\\\\mathbb{S}$. It would be helpful if this kind of explanation (or a very sketched version of Alg 1 & 2) is provided.\", \"questions\": \"1. What is the exact definition of neighbor? The examples on line 179 rely on the fact that $\\\\text{Ne}^{\\\\mathcal{H}_2}(O_1)\\\\setminus{O_3}=\\\\emptyset$. This indicates that $\\\\text{Ne}^{\\\\mathcal{H}_2}(O_1)$ does not correspond to the sibling set (which is $L_1$) nor the neighbor set in the undirected graph (which is $(O_2, O_3, O_4, O_5)$).\\n\\n2. How are the augmented nodes ($O'$, $O''$) used in the identification results and algorithms?\\nMy understanding is that they are only used in the algorithm, where the algorithm duplicates each observed variable with two extra copies and add non-Gaussian noises to them. \\n\\n3. Are there any identification guarantees for the recovery output? Specifically, does Theorems 1-6 imply that in linear non-Gaussian model, under Assumption 1 and rank-faithfulness, the underlying model can be uniquely identified? 
Similarly, do Theorems 7-11 imply that if Assumption 1 is not satisfied, then there will be no output?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the further clarification.\\n> Regarding $\\\\mathbf{O}_1$\\n\\nMy understanding is that variables in $\\\\mathbf{O}_1$ are always needed in Algorithm 3, at least in the first iteration. Please correct me if I am wrong. \\n\\nDo you imply that here, by selecting $V_{i_1}$ and $V_{i_2}$ exactly the same as $V_{i}$, lines 13-16 and 19-25 in Algorithm 3 are not affected? If this is the case then I believe it is worth calling out.\\n\\n> Rank-faithfulness assumption\\n\\nThe properties here correspond to the case when $|J|, |K| \\\\leq 2$ in the bottleneck faithfulness assumption. If these are the only two properties needed in the proof, then I would encourage the authors to explicitly state this connection.\"}", "{\"comment\": \"# Part (2/2)\\n\\n> Q3: Explanations for theoretical results in Appendix\\n\\nTo improve readability, we have added explanations for intermediate theoretical results presented in Appendix. Since some theoretical results (such as Lemma 1 and Corollary 1) are necessarily detailed and technical in their formal statements, the readers might get lost in the technical details. To help readers grasp the key implications, we have added Remarks for Lemma 1 and Corollary 1 as follows.\\n\\n- Remark of Lemma 1. \\\"(1) provides a sufficient condition for independence involving the pseudo-residual to hold while (2, 3) provide two sufficient conditions for independence involving the pseudo-residual to not hold.\\\"\\n \\n- Remark of Corollary 1. \\\"This corollary reveals the properties of variables in $\\\\mathbf{V}_p$, $\\\\mathbf{V}_f$, and $\\\\mathbf{V}_c$. (1) means that for each variable in $\\\\mathbf{V}_p$, its parents and children in the underlying causal graph $\\\\mathcal{G}$ are exactly its parents and children in $\\\\mathcal{H}_1$. (2) means that for each variable in $\\\\mathbf{V}_f$, its parents and children in the underlying causal graph $\\\\mathcal{G}$ are exactly its parents and children in $\\\\mathcal{H}_2$. (3) means that for each variable in $\\\\mathbf{V}_c$, its children in the underlying causal graph $\\\\mathcal{G}$ are the union of its children in $\\\\mathcal{H}_1$ and its children in $\\\\mathcal{H}_2$ while its parents in $\\\\mathcal{G}$ are exactly its parents in $\\\\mathcal{H}_2$. This corollary is widely used in the following proofs. To maintain fluency, we will use it without further citation.\\\"\\n\\nAlso, to help readers better connect the theoretical results in Appendix with the main text, e.g., how Proposition 1 helps to reduce computational cost, we have added Remarks of Proposition 1 as follows.\\n\\n- Remark of Proposition 1 states that \\\"Given $\\\\\\\\{V_ i, V_ j\\\\\\\\} \\\\subset \\\\mathbf{V}_ c$, denote $\\\\\\\\{V \\\\in \\\\mathbf{V}_ c \\\\backslash \\\\\\\\{V_ i, V_ j\\\\\\\\} | \\\\mathrm{Cov}(V_ i, V_ j) \\\\mathrm{Cov}(V, V_ i) \\\\mathrm{Cov}(V, V_ j) \\\\neq 0\\\\\\\\}$ by $\\\\mathbf{V}_ {ij}$, this proposition means that there exists no $\\\\\\\\{V_k, V_l\\\\\\\\} \\\\subset \\\\mathbf{V}_ {ij}$ s.t. $\\\\mathrm{R}(V_i, V_j | V_k) \\\\perp \\\\mathbf{V}_ c \\\\backslash \\\\\\\\{V_i, V_j\\\\\\\\}$ and $\\\\mathrm{R}(V_i, V_j | V_l) \\\\not \\\\perp \\\\mathbf{V}_ c \\\\backslash \\\\\\\\{V_i, V_j\\\\\\\\}$. 
Therefore, if we want to know whether for each $V \\\\in \\\\mathbf{V}_ {ij}$, $\\\\mathrm{R}(V_i, V_j | V) \\\\perp \\\\mathbf{V}_ c \\\\backslash \\\\\\\\{V_i, V_j\\\\\\\\}$, we only need to consider any single $V_k \\\\in \\\\mathbf{V}_ {ij}$.\\\"\\n\\nWe appreciate this valuable suggestion and believe our added Remarks can significantly enhance readability.\\n\\n> Q4: Experimental evidence\\n\\nSimilar to most works in this line, the preliminary contribution of our work is theoretical, where the experiments serve as a proof-of-concept of how the algorithm derived from our theoretical results performs in applications. While the existing experiments have demonstrated the effectiveness of our algorithm, we acknowledge that more comprehensive experiments would be beneficial. Therefore, given the observational data generated by $\\\\mathcal{G} _0$ shown as Figure 2(a) consisting of 4 latent variables, 16 observed variables, and many complex causal relations, the experimental results are summarized as follows. With sufficient (10k) samples, our algorithm can achieve the best Error in latent Variables and F1-Score, its Correct-Ordering Rate is only slightly lower than that of PO-LiNGAM, and it is far more efficient than PO-LiNGAM.\\n\\n\\n| | Error in latent Variables | | | Correct-Ordering Rate | | | F1-Score | | | Running Time | | |\\n|-|-|-|-|-|-|-|-|-|-|-|-|-|\\n| | 2k | 5k | 10k | 2k | 5k | 10k | 2k | 5k | 10k | 2k | 5k | 10k | \\n| GIN | 0.8\\u00b10.4 | 0.9\\u00b10.3 | 0.8\\u00b10.4 | 0.15\\u00b10.02 | 0.14\\u00b10.03 | 0.14\\u00b10.02 | 0.36\\u00b10.07 | 0.39\\u00b10.06 | 0.38\\u00b10.04 | **5.94\\u00b11.05** | **6.52\\u00b10.70** | **7.24\\u00b10.82** |\\n| LaHME | 3.4\\u00b10.5 | 2.8\\u00b10.7 | 2.2\\u00b11.0 | 0.28\\u00b10.02 | 0.27\\u00b10.02 | 0.28\\u00b10.04 | 0.45\\u00b10.04 | 0.46\\u00b10.07 | 0.47\\u00b10.07 | 7.72\\u00b11.00 | 8.38\\u00b10.81 | 10.12\\u00b11.05 |\\n| PO-LiNGAM | **1.0\\u00b10.6** | **0.5\\u00b10.7** | **0.2\\u00b10.4** | **0.83\\u00b10.12** | **0.85\\u00b10.23** | **0.96\\u00b10.08** | 0.42\\u00b10.09 | 0.61\\u00b10.19 | 0.68\\u00b10.10 | 661.25\\u00b1214.86 | 768.70\\u00b1261.47 | 925.68\\u00b1325.07 |\\n| Ours | 1.2\\u00b10.7 | 0.7\\u00b10.6 | **0.2\\u00b10.4** | 0.72\\u00b10.12 | 0.78\\u00b10.17 | 0.93\\u00b10.12 | **0.58\\u00b10.14** | **0.75\\u00b10.13** | **0.94\\u00b10.11** | 16.80\\u00b11.31 | 20.65\\u00b13.89 | 24.55\\u00b11.28 |\"}", "{\"comment\": \"Thank you the authors for providing thorough answers to my questions. I know they are a lot. My major concerns have been addressed. I have raised my score accordingly. Thank you.\"}", "{\"comment\": \"# Part (4/4)\\n\\n> Q13: The relationship between two main constraints (pseudo-residual and quintuple constraint) and the GIN condition\\n\\nWe first detail the relationship between pseudo-residual and GIN condition, then the relationship between quintuple constraint and the GIN condition.\\n\\n- Pseudo-residual and GIN condition are fundamentally different. The former is a specific linear combination of two variables while the latter consists of a set of independence relations. In Section 3.1, pseudo-residual always appears in independence relations. In principle, these independence relations can be replaced with GIN condition seamlessly, but we still use pseudo-residual because it is simpler in form and more accessible. 
In Section 3.2, pseudo-residual is also used to update $X_ {2j-1}$ in Equation (6) where $X_ {2j-1} := \\\\mathrm{R}(X_ {2j-1}, X_ {2i-1} | X_ {2i})$, whereas GIN is typically not suitable for such use. While the authors of GIN [1] proposed a GIN-based method for inferring causal orders that requires no update of $X_ {2j-1}$ throughout, its time complexity is $\\\\mathcal{O}(|\\\\mathbf{V}_ c|^ 4)$, whereas ours has only $\\\\mathcal{O}(|\\\\mathbf{V}_ c|^ 3)$ time complexity.\\n\\n- Quintuple constraint is similar but not identical to a special case of GIN condition. Specifically, \\\" $( V _{i _1}, V _{i _2}, V _j, V _k, V _l )$ satisfies quintuple constraint\\\" is most similar to \\\" $( \\\\\\\\{V _{i _2}, V _l\\\\\\\\}, \\\\\\\\{V _{i _1}, V _j, V _k\\\\\\\\} )$ satisfies GIN condition\\\". The former implies that if there exists $\\\\alpha, \\\\beta$ s.t. $V _{i _1} + \\\\alpha V _j + \\\\beta V _k$ is uncorrelated to $V _{i _2}$ and $V _l$, then $V _{i _1} + \\\\alpha V _j + \\\\beta V _k$ is independent of $V _{i _2}$. The latter implies that if there exists $\\\\alpha, \\\\beta, \\\\gamma$ s.t. $\\\\alpha V _{i _1} + \\\\beta V _j + \\\\gamma V _k$ is uncorrelated to $V _{i _2}$ and $V _l$, then $\\\\alpha V _{i _1} + \\\\beta V _j + \\\\gamma V _k$ is independent of $V _{i _2}$ and $V _l$. We have proven that the quintuple constraint is sufficient and necessary to identify identifiable pairs in $\\\\mathbb{S} _3$ in Theorem 2, so we do not consider GIN condition that requires an additional independence test.\\n\\nIn our opinion, the main advantage of GIN lies in its superior capability to handle n-factor models. In fact, we plan to extend our approach to more challenging scenarios using GIN.\\n\\n[1] Feng Xie et al. \\\"Generalized independent noise condition for estimating latent variable causal graphs.\\\" NeurIPS 2020.\\n\\n> Q14: Whether this algorithm can be applied to violation cases such as measurement error\\n\\nIt should be noted at the outset that our algorithm relies on the fact that in LiNGAMs, each variable can be expressed as a linear combination of its parents plus an independent non-Gaussian noise. Suppose the measurement error is an additive noise, this algorithm can be applied to the case where all observed variables have no child in the underlying causal graph (such as Case 1 and Case 2 in Figure 9), because each observed variables can still be expressed as a linear combination of their parents plus an independent term, which is the sum of the exogenous noise and the measurement error. However, this algorithm cannot be applied to the cases where some observed variables have descendants. For example, if $\\\\mathrm{Pa}(O _2) = \\\\\\\\{O _1\\\\\\\\}$, then $O _2 = a _{21} O _{1} + \\\\epsilon _{O _2}$. With measurement errors $e _1, e _2$, $\\\\tilde{O} _1 = O _1 + e _1$ and $\\\\tilde{O} _2 = a _{21} O _{1} + \\\\epsilon _{O _2} + e _2$. In general, $\\\\tilde{O} _2$ cannot be expressed as scaled $\\\\tilde{O} _1$ plus a term independent of $\\\\tilde{O} _1$ in this case.\\n\\n> Q15: How other work testify their assumptions\\n\\nAs emphasized in the paper, to the best of our knowledge, no existing work on causal discovery with latent variables has rigorously discussed how to testify their assumptions, especially the widely-used pure children assumption. Without validating these assumptions, there is no guarantee that their recovered causal graph correctly reflects the true causal relations. 
This lack of verification could be potentially harmful in practical applications. For instance, in financial markets, a plausible but incorrect causal conclusion might mislead investors to make poor investment choices and cause significant financial losses. Even worse, users might not realize the unreliability of these results since the assumptions were never verified.\\n\\nTherefore, our proposed algorithm represents both an innovative advancement and a significant contribution to the field. Unlike previous methods that might silently return incorrect results when assumptions are violated, our trustworthy algorithm can actively detect invalid pure children assumptions and raise an error accordingly. This capability ensures that users are protected from acting on potentially incorrect causal conclusions, marking a crucial step toward more reliable causal discovery in practice.\"}", "{\"title\": \"Backup link of our source code\", \"comment\": \"Dear all,\\n\\nWe find that the original link of our source code (https://anonymous.4open.science/r/Fveds1C055gvGWsdvs345) is not very stable, the webpage may sometime display \\\"The requested file is not found.\\\" If it does not work for you, please use this backup link: https://anonymous.4open.science/r/Fveds1C055gvGWsdvs751\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"summary\": \"This paper proposes an efficient and trustworthy causal discovery method for discovering latent variable structure. The main difference compared with the previous work is that it will raise an error rather than draw an incorrect causal conclusion when the purity assumption is not met.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Some theoretical results are proposed along with an efficient causal discovery algorithm.\", \"The overall writing is clear and well-structured.\"], \"weaknesses\": [\"Although this paper allows violating the purity assumption, the overall identification results and the algorithm mostly rely on the purity assumption, e.g., by locating the pure children like the previous work (e.g., Jin et al., 2024). Whether the purity for identifying the causal structure necessary? What is the complete identifiability under the impurity setting?\", \"What is the intuition in obtaining the efficient causal discovery compared with the other works, e.g., which step is the key step in providing a faster discovering ability?\"], \"questions\": \"See the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer BQvB,\\n\\nThanks for your kind response, we are glad to answer your remaining questions.\\n\\n> Q6: $V_i'$ and $V_i''$ in Theorem 2&3 of the revised manuscript.\\n\\nWe made this modification as it was requested by Reviewer SqCS. We apologize for causing you extra confusion, we have replaced $\\\\\\\\{V_ i', V_ i''\\\\\\\\}$ with $\\\\\\\\{V_ {i'}, V_ {i''}\\\\\\\\}$ in the updated manuscript.\\n\\n> Q7: Details about $\\\\mathbf{O}_ 1$.\\n\\nFirst, the noises used to create $\\\\mathbf{O}_ 1$ can be mutually independent random variables with any non-Gaussian distribution and any variance. 
For instance, they can be mutually independent random variables that all follow uniform distribution between [-1, 1].\\n\\nSecond, as stated in response to your Q1, where we state the motivation behind $\\\\mathbf{O}_ 1$, \\\"we introduce $\\\\mathbf{O}_ 1$ purely to streamline technical details in our paper and keep the presentation focused on the core ideas\\\", and \\\"the values of observed variables are directly accessible for causal discovery\\\". Therefore, in actual implementation, we can use the values of $\\\\mathbf{O}_ 0$ directly. More specifically, whenever we need to use $O_ i'$ or $O_ i''$, we can directly replace it with $O_ i$.\\n\\n> Q8: Connection between rank-faithfulness and bottleneck faithfulness.\\n\\nFirst, according to their respective definitions, rank-faithfulness implies bottleneck faithfulness, but bottleneck faithfulness may not imply rank-faithfulness.\\n\\nSecond, in our paper, rather than working directly with the rank-faithfulness itself, we derive and utilize two properties that follow from it, which are stated in Intuition of Assumption 3 in App. C.\\n- $m_ {ij} \\\\neq 0$ iff $V_ j \\\\in \\\\mathrm{GAn}(V_ i)$.\\n- Suppose $m_ {ik} m_ {jk} m_ {il} m_ {jl} \\\\neq 0$, $m_ {ik} / m_ {jk} \\\\neq m_ {il} / m_ {jl}$ iff there exists two non-intersecting paths from $\\\\{V_ k, V_ l\\\\}$ to $\\\\{V_ i, V_ j\\\\}$.\\n\\nThese two properties can also be derived from bottleneck faithfulness. In other words, although rank-faithfulness is not strictly equivalent to bottleneck faithfulness, we can readily replace rank-faithfulness with bottleneck faithfulness in our work.\\n\\nThanks again for your time and labor in helping us improve our manuscript.\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"comment\": \"# Part (1/2)\", \"dear_reviewer_sqcs\": \"Thanks for your response. We are glad to answer your further questions as follows.\\n\\n> Q16: Static way & gap\\n\\nFirst, we would like to highlight that research on causal discovery typically encompasses two aspects: identifiability result that establishes whether the underlying causal structure can be identified from data under certain conditions, and identification method that describes how to actually identify the underlying causal structure from data if it is identifiable. In the following, we systematically analyze both our work and that of Adams et al. from these two aspects. Finally, we provide answers to your questions.\\n\\n**1. Identifiability result.** Given observed variables generated by a LiNGAM with latent variables satisfying the rank-faithfulness assumption, the identifiability result of Adams et al. is that the underlying causal structure can be identified if it satisfies both the bottleneck condition and the strong non-redundancy conditions presented in their paper; our identifiability result is that the underlying causal structure can be identified if it satisfies Assumption 1 presented in our paper. Details of these conditions are omitted here due to space limit, we refer interested readers to the original papers.\\n\\n**2. Discussion on identifiability result.** Both the work of Adams et al. and our work imply that the underlying causal structure can be identified if it satisfies some conditions. All the conditions, including their bottleneck and strong non-redundancy conditions and our Assumption 1, are defined in a static way. 
Particularly, our identifiability result itself (excluding intermediate results) relies solely on Assumption 1, without involving the concept of identifiable pairs defined in a rolling-based way. Furthermore, Adams et al. prove that their identifiability result is exactly the theoretical maximum identifiability. There indeed exists a gap between their identifiability result and ours. For instance, our identifiability result requires that each latent variable has at least two pure children while theirs does not impose such a requirement. More specifically, their identifiability result covers Case 5 and Case 6 shown as Figure 10, which fall outside the scope of our identifiability result. The identifiability result in the work of Adams et al. is superior to ours.\\n\\n**3. Identification method.** The identification method in Adams et al. first estimates the mixing matrix from the noise terms to the observed variables, and then recovers the causal adjacency matrix from it. Our identification method first sequentially identifies latent variables from leaves to roots, and then sequentially infers causal relations from roots to leaves. Technical details are omitted here due to space limit, we refer interested readers to the original papers.\\n\\n**4. Discussion on identification method.** Our identification method is a rolling-based method while that proposed by Adams et al. is a static one. Although the latter can work in theory, it is not advisable in practice as acknowledged by Adams et al. themselves. Specifically, the procedure of estimating the mixing matrix requires the number of latent variables as prior knowledge and is computationally intractable, because it is based on overcomplete independent component analysis (OICA). Moreover, the procedure of recovering the causal adjacency matrix from the estimated mixing matrix is rather sensitive to noise. In contrast, our algorithm is not only practical but also efficient. Our identification method is superior to that in the work of Adams et al.\\n\\n**5. Answers to your questions.** (1) static way: our identifiability result is static while our identification method is rolling-based. The identification method in the work of Adams et al. is static. Although it can work in theory, it is not advisable in practice as acknowledged by Adams et al. themselves. Please refer to point 4 for more details. (2) gap: The identifiability result in the work of Adams et al. is exactly the theoretical maximum identifiability, which is more general than our identifiability result. Please refer to point 2 for more details. As future work, we plan to explore how to relax Assumption 1 while maintaining algorithmic efficiency.\"}", "{\"comment\": \"We deeply appreciate your prompt reply! It has been our pleasure to address your concerns.\"}", "{\"comment\": \"# Part (1/2)\\n\\nThank you for your time and effort put into our work. We have substantially improved presentation of our manuscript (where major revisions are marked in purple) and also addressed your concerns as follows. We will be grateful if you can re-evaluate our work.\\n\\n> Q1: Motivation or explanation of concepts\\n\\nWe have provided clearer motivation and explanation for key concepts in the revised manuscript. In the following, we give two representative examples.\\n\\n- For augmented nodes $\\\\mathbf{O}_ 1$, we have explained the motivation behind it. In brief, we introduce $\\\\mathbf{O}_ 1$ to streamline technical details in our paper and keep the presentation focused on the core ideas. 
According to the footnote 1 in the revised manuscript. \\\"While the values of observed variables are directly accessible for causal discovery, the causal relations of latent variables can only be inferred indirectly, e.g., through their pure children. By introducing $\\\\mathbf{O}_ 1$ to create pure children for each observed variable, we can uniformly handle both types of variables through analyzing their pure children, thereby eliminating the need to repeatedly distinguish between treatments of latent and observed variables and keeping the core methodology clear.\\\"\\n\\n\\n- We fully understand your concerns about the difficulty in understanding the concepts of $\\\\mathbf{V} _f$ and $\\\\mathcal{H} _2$, as many related definitions were concentrated in Chapter 3 without sufficient explanation. To address this issue, we provide both intuitive explanations and illustrative examples. For $\\\\mathbf{V} _f$ and $\\\\mathcal{H} _2$, which are defined as $\\\\mathbf{V} \\\\backslash (\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _p)$ and the induced subgraph of $\\\\mathcal{G}$ over $\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _f$ in \\u00a7 Initialization in Section 3.1, we have provided the intuitive explanation immediately after defining them. \\\"Intuitively, while $\\\\mathbf{V} _c \\\\cup \\\\mathbf{V} _p$ consists of all identified variables, $\\\\mathbf{V} _f$ consists of all unidentified variables. While $\\\\mathcal{H} _1$ consists of all identified causal relations, $\\\\mathcal{H} _2$ consists of all unidentified causal relations. Considering the initial case (Initially, we have identified no latent variable and no causal relation in $\\\\mathcal{G} _0$, when $\\\\mathbf{V} _f = \\\\mathbf{L}$ and $\\\\mathcal{H} _2 = \\\\mathcal{G} _0$), such intuitions become particularly apparent.\\\" Moreover, we include more illustrative figures (right subfigures of Figures 3, 4, 5) to display $\\\\mathbf{V} _f, \\\\mathcal{H} _2$ at different iteration during stage 1.\\n\\n> Q2: (1) Definition of neighbor (2) Whether $\\\\\\\\{ O_1, O_3 \\\\\\\\} \\\\in \\\\mathbb{S}_1$ relies on $\\\\mathrm{Ne} ^{\\\\mathcal{H} _2} (O _1) \\\\backslash \\\\\\\\{ O _3 \\\\\\\\} = \\\\emptyset$\\n\\n(1) Our definition of neighbor adheres to the standard terminology in graph theory. Specifically, $X$ is a neighbor of $Y$ iff there exists an edge $X \\\\to Y$ or $Y \\\\to X$. For instance, in the initial $\\\\mathcal{H}_ 2$ shown on the right of Figure 3, $\\\\mathrm{Ne} ^{\\\\mathcal{H}_ 2} (O_ 1) = \\\\\\\\{ O_ 2, O_ 3, O_ 4, O_ 5, L_ 2\\\\\\\\}$.\\n\\n(2) Actually, $\\\\\\\\{ O_1, O_3 \\\\\\\\} \\\\in \\\\mathbb{S}_ 1$ does not rely on $\\\\mathrm{Ne} ^{\\\\mathcal{H}_ 2} (O_ 1) \\\\backslash \\\\\\\\{ O_ 3 \\\\\\\\} = \\\\emptyset$. In contrast, it relies on $\\\\mathrm{Ne} ^{\\\\mathcal{H}_ 2} (O_ 1) \\\\backslash \\\\\\\\{ O_ 3 \\\\\\\\} \\\\neq \\\\emptyset$. According to our Definition 2(1), \\\"..., $\\\\mathrm{Ne}^ {\\\\mathcal{H}_ 2} (V_1) \\\\backslash \\\\\\\\{V_ 2\\\\\\\\} \\\\neq \\\\emptyset$, we denote this by $\\\\\\\\{V_1, V_2\\\\\\\\} \\\\in \\\\mathbb{S}_ 1$, ...\\\". Please note that there is a $\\\\neq$, not a $=$.\\n\\n> Q3: How $\\\\mathbf{O} _1$ is applied in identification results\\n\\nIn response to Q1, we have introduced the motivation of the augmented nodes $\\\\mathbf{O} _1$. 
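Before walking through the example below, a minimal sketch of one plausible way to realize $\mathbf{O}_1$ in code may help. It is purely illustrative: the coefficient of 1 on each observed parent, the uniform noise, and the array names are assumptions for exposition rather than our exact implementation; the only properties used are that each augmented variable is a pure child of one observed variable and that its noise is independent and non-Gaussian, as discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the observed data O_0 (n samples, p observed variables);
# in practice these columns come directly from the dataset.
n, p = 10000, 3
O0 = rng.uniform(-1.0, 1.0, size=(n, p))

# Augmented nodes O_1: one pure child per observed variable, e.g. O_i' = O_i + e_i',
# with mutually independent non-Gaussian (here uniform on [-1, 1]) noises e_i'.
noise = rng.uniform(-1.0, 1.0, size=(n, p))
O1 = O0 + noise

# In actual implementation, whenever O_i' (or O_i'') is needed, the value of O_i
# itself can be used directly, since observed values are accessible for discovery.
```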
Here we use Theorem 2 as an example to illustrate how $\\\\mathbf{O} _1$ is applied in identification results.\\n\\nAccording to \\u00a7 Initialization in Section 3.1, $\\\\mathbf{V} _c$ is initialized as $\\\\mathbf{O} _0$, $\\\\mathbf{V} _p$ is initialized as $\\\\mathbf{O} _1$, and variables in $\\\\mathbf{O} _1$ are children of variables in $\\\\mathbf{O} _0$ in the initialized $\\\\mathcal{H} _1$. According to Theorem 2, given $\\\\\\\\{V_i, V_j\\\\\\\\} \\\\in \\\\mathbb{S}$, we need to leverage $\\\\\\\\{V _{i _1}, V _{i _2}\\\\\\\\} \\\\subset \\\\mathrm{Ch} ^{\\\\mathcal{H} _1}(V _i)$ to determine whether $\\\\\\\\{V_i, V_j\\\\\\\\} \\\\in \\\\mathbb{S} _1$ and whether $\\\\\\\\{V_i, V_j\\\\\\\\} \\\\in \\\\mathbb{S} _3$. Clearly, at the first iteration, $V _i \\\\in \\\\mathbf{V} _c = \\\\mathbf{O} _0$ and $\\\\\\\\{V _{i _1}, V _{i _2}\\\\\\\\} \\\\subset \\\\mathbf{V} _p = \\\\mathbf{O} _1$.\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer BQvB:\\n\\nWe want to express our appreciation for your valuable suggestions, which greatly helped us improve the quality of this paper. We have taken our maximum effort to address your concerns on clarification. Could you please kindly re-evaluate our work?\\n\\nYour further opinions are very important for evaluating our revised paper and we are hoping to hear from you. Thank you so much.\\n\\nBest,\\n\\nAuthors.\"}", "{\"comment\": \"Thank the authors for the detailed and prompt response. My concerns on presentation are relatively addressed. A few more questions:\\n\\n1. Regarding \\\"identifiable pairs\\\", since it is defined in a rolling-based way, do we also have a static way that can determine whether an edge (or other graphical pattern) is globally identifiable from data? Would that be the same as the ones defined in [Adams]? How is the gap between the final results you could identify to the ones that can be maximally identified?\\n\\n2. Can I under the quintuple constraint as a specific case of GIN condition, where by some graphical condition, one avoids to test between the linear combination's independence to all of another set of variables. \\\"saving another time of CI test\\\" is not that accurate since for HSIC test, one-dim or multi-dim variables are almost the same and can be checked once. But my biggest curiosity is that, e.g., what if using GIN as in [Jins] so that more conditions (than quintuple constraints) can be used? Will this work identify more or that Assumption 1 can be further relaxed?\\n\\n3. I appreciate the authors' pseudocode. It would be better if the authors could provide an executable code of the algorithm in the appendix, where e.g., CI test, constraint checker can be done in oracle, and the expect outputs' correctness is checked in an `assert' way, for arbitrary input graph structures. This is because for a highly technical paper that is workflow/algorithm driven with different graphical patterns, it is not easy for reviewers to grasp all the intuition/motivation, and thus it would be hard to evaluate its correctness. However, for an algorithm based paper, such correctness check is always necessary.\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer d5Lc:\\n\\nWe really appreciate your constructive opinions that helped us improve this paper. If there is any concern unresolved, we would be glad to have further discussions.\\n\\nThanks again for your time, looking forward to hearing from you soon.\\n\\nBest,\\n\\nAuthors.\"}" ] }
BZWssJoYEv
Towards Holistic Multimodal Interaction: An Information-Theoretic Perspective
[ "Zequn Yang", "HaoTian Ni", "Yake Wei", "Di Hu" ]
Multimodal interaction, which assesses whether information originates from individual modalities or their integration, is a critical property of multimodal data. The type of interaction varies across different tasks and subtly influences the effectiveness of multimodal learning, but it remains an underexplored topic. In this paper, we present an information-theoretic analysis to examine how interactions affect multimodal learning. We formulate specific types of information-theoretical interactions and provide theoretical evidence that an effective multimodal model necessitates comprehensive learning across all interaction types. Moreover, we analyze two typical multimodal learning paradigms—joint learning and modality ensemble—and demonstrate that they both exhibit generalization gaps when faced with certain types of interactions. This observation underscores the need for a new paradigm that can isolate and enhance each type of interaction. To address this challenge, we propose the Decomposition-based Multimodal Interaction learning (DMI) paradigm. Our approach utilizes variation-based decomposition modules to segregate multimodal information into distinct types of disentangled interactions. Then, a new training strategy is developed to holistically enhance learning efficacy across various interaction types. Comprehensive empirical results indicate that our DMI paradigm enhances multimodal learning by effectively decomposing interactions and improving their learning in a targeted manner.
[ "Multimodal learning", "Information theory", "Multimodal interaction" ]
Reject
https://openreview.net/pdf?id=BZWssJoYEv
https://openreview.net/forum?id=BZWssJoYEv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zyZuAzaGuv", "zQAhcJBib8", "z0uJQuF2AY", "wxfrxTMg6w", "vCkQGfIV74", "ta7Ch5d2yo", "n0Rp8GGrzX", "mvGd4ZLXwK", "kpuodjbayD", "e2GGFd1RHH", "bVQXSgmxk9", "XqeTmSETBR", "VBMKtkbzth", "QMe1idQ1fs", "OFTFkq0Xat", "MmpxRZuDm2", "MjEUtliHk9", "MR99EvrGpI", "JYdI4L0Qjd", "IEe0YQFtV5", "Ej1zP6ouw1", "DBBHiQNJFd", "CaxoFX3IWa", "CSIy2gQ7Sn", "BOIpPybctX", "BNe1DG1WVL", "81ro1Xj7K1", "2h52uxXZGM", "1TuDuzGiff" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733226750495, 1732646635689, 1732657470563, 1734668027123, 1732645607873, 1733164098877, 1730685438990, 1737523641627, 1732645690706, 1729230329247, 1733162076588, 1732643044068, 1733039487014, 1732646807569, 1732644583921, 1730702582927, 1732907581820, 1732644631142, 1733162463680, 1732907661779, 1732646216219, 1733074450608, 1732727929330, 1732646460768, 1731198051107, 1733161365449, 1732908196528, 1732646169359, 1732907991194 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_oH8U" ], [ "ICLR.cc/2025/Conference/Submission4463/Area_Chair_DtAx" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_a4Ud" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_a4Ud" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_9UkZ" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_9UkZ" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_oH8U" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Reviewer_HrvF" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ], [ "ICLR.cc/2025/Conference/Submission4463/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you!\", \"comment\": \"We greatly appreciate your invaluable comments and positive feedback, and we are pleased to hear that the clarifications in the method section have addressed your questions.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 2. About synthetic experiment:**\\n \\n> Question 2a. 
The statistical relationships of three types of interactions\\n\\nThank you for highlighting this point. For synthetic data, we specifically construct samples to correspond to particular types of interactions. Each sample exclusively contains one type of interaction, and for simplicity, different interactions are encoded in distinct dimensions of the inputs. When a sample is determined by a specific interaction, the dimensions corresponding to the other interactions are set to noise. This approach ensures that each sample uniquely represents one type of interaction. We provide further clarification of this setting in Appendix B.4.\\n\\n> Question 2b. Interaction generation for $\\\\frac{1}{4} U + \\\\frac{3}{4} R$.\\n\\nThank you for raising this point. In Figure 1, two types of interactions are depicted: redundancy and uniqueness. The notation $\\\\frac{1}{4} U + \\\\frac{3}{4} R$ indicates that $ \\\\frac{1}{4} $ of the data is sampled from unique interactions, while $ \\\\frac{3}{4} $ is sampled from redundant interactions. Each sample corresponds to a certain type of interaction. We clarify this description in the revised manuscript, Appendix B.4.\\n\\n> Question 2c. Synthetic data and DMI architecture.\\n\\nThank you for your question. Here, we use 10000 samples in the synthetic dataset, and the DMI architecture is similar to the real-world experiment (detailed in Appendix B.2), with the backbone changed to a 3-layer neural network with ReLU as the activation function. \\n\\n> Question 2d. More Boolean mixtures of interactions.\\n\\nThank you for pointing this out. In response, we have incorporated additional types of interactions, including combinations of (OR + XOR) and (AND + OR + XOR), to assess the effectiveness of our method in handling more complex interactions. The detailed results indicate that the DMI approach can effectively capture these complex interactions, even when multiple Boolean variables are involved. These results are presented below and are also summarized in Table 4 of the revised manuscript.\\n\\n| | | OR+XOR | | | | AND+OR+XOR | | |\\n|--------|:-----:|:------:|:-----:|:-----:|:-----:|:----------:|:-----:|:-----:|\\n| Method | $R$ | $U_1$ | $U_2$ | $S$ | $R$ | $U_1$ | $U_2$ | $S$ |\\n| DMI | 27.65 | 4.76 | 0 | 67.59 | 20.79 | 0 | 2.36 | 76.85 |\\n| CVX | 33.66 | 0.37 | 0.14 | 65.83 | 21.16 | 0 | 0.58 | 78.26 |\\n| Truth | 25.51 | 0 | 0 | 74.49 | 19.1 | 0 | 0 | 80.9 |\\n\\n**Question 3. Why synergistic information are considered task-irrelevant features?**\\n\\nThank you for addressing this important aspect. Task-irrelevant information is defined as **information within unimodality** that is **not directly** related to the task at hand. According to the definition of synergy data (refer to Equation 11 and [1]), synergy arises when **individual modalities alone provide no information for task completion**, yet their integration generates emergent information that is crucial for the task. This emergent information, which we term synergy, arises indirectly from the individual modalities. Consequently, the information derived from the synergy interaction belongs to the task-irrelevant category within each unimodality. Thus, we characterize the information emergent from the combination of two task-irrelevant features as the learned synergy. A minimal XOR sketch below makes this point concrete. 
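This sketch is purely illustrative (it is not the generator used in our experiments): for pure-XOR data, each modality alone has zero mutual information with the label, so its feature is task-irrelevant within that unimodality, while the combination of the two modalities determines the label exactly.

```python
import itertools
import math

def mutual_information(joint):
    """I(A;B) in bits from a dict mapping (a, b) -> probability."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

# Two binary "modalities", uniform over {0,1}^2, and a pure-synergy label y = x1 XOR x2.
prob = 0.25
joint_x1_y, joint_x2_y, joint_pair_y = {}, {}, {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    y = x1 ^ x2
    joint_x1_y[(x1, y)] = joint_x1_y.get((x1, y), 0.0) + prob
    joint_x2_y[(x2, y)] = joint_x2_y.get((x2, y), 0.0) + prob
    joint_pair_y[((x1, x2), y)] = joint_pair_y.get(((x1, x2), y), 0.0) + prob

print(mutual_information(joint_x1_y))    # 0.0 bits: modality 1 alone is task-irrelevant
print(mutual_information(joint_x2_y))    # 0.0 bits: modality 2 alone is task-irrelevant
print(mutual_information(joint_pair_y))  # 1.0 bit: the combination carries the synergy
```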
We clarify this distinction in Section 3.4.\"}", "{\"title\": \"Insufficient rigor remains\", \"comment\": \"I thank the authors for their response and the updated version of the paper.\\n\\nPositives in the rebuttal:\\n- Inclusion of other modalities\\n- Slightly better performance compared to baselines on new results\\n\\nUnfortunately, most of my concerns persist:\\n- Doing eval on CREMA with only 2 frames, while better than one frame, still has the same issue that I originally described. This does not properly represent the affective information in a general video, and could be seen as 'cheating' in an acted dataset like CREMA where the facial expression will contain the prototypical emotional expression at a specific time in the video.\\n\\n- The concern about an apple-to-apple comparison with the protocol established in the baseline still remains.\\n\\n> Question 2. Specific differences in MMML.\\n> Thank you for your comment. To ensure a fair comparison across different methods, we standardized the backbone across all approaches, followed by [6], applying distinct strategies of different methods to this common framework. For the MMML approach, we incorporated its fusion module on the aligned backbone. This module includes both an attention mechanism and a multi-loss strategy.\\n\\nWhy not use the same settings from the baseline instead?\\n\\n- The backbones are still insufficient\\n\\n> The chosen backbones, ResNet and LeNet, are widely utilized in multimodal research\\n\\nThese are fine for basic experiments, but there are significantly stronger choices like ViTs. Also, the results in Table 3 are much lower than the ones reported in the original papers, which suggests a lot of room for improvement in the experimental protocol.\"}", "{\"metareview\": \"This paper studies how to learn representations capturing different multimodal interactions during multimodal learning. They provide information-theoretical evidence that learning all types of interactions (redundancy, uniqueness, synergy) is necessary for good performance and show that naive joint and ensemble learning cannot learn all types of interactions equally well. Motivated by this finding, they proposed a method called Decomposition-based Multimodal Interaction learning (DMI) to decompose multimodal information into different types of interactions learned via a three-phase training.\\n\\nThe reviewers generally appreciated the theoretical framework, the proposed method, and promising performance on several datasets.\\n\\nThere were concerns initially regarding the lack of multimodal datasets in the experimental evaluation, lack of clarity in the model architecture and overall in the proposed method, and requests for further validation experiments. The authors provided experiments on more datasets and added several useful analyses in their rebuttal, which 2 reviewers appreciated, and gave a score of marginal accept. 2 reviewers stayed with their scores of marginal reject. Due to the exactly borderline split of reviews, I read the paper in detail, all reviews, and all discussions. I am inclined to lean towards rejection, since there is a severe issue: the paper offers almost no details as to the model architecture, training objectives, and overall algorithm (just as reviewer HrvF pointed out). There is a lot of math to show that the method can work in theory to capture different information-theoretic quantities, but how these are learned in practice is completely vague. 
The authors also respond in extremely vague terms, saying 'the unimodal encoder varies for different tasks' and 'the decomposition module is architecture like Variational Autoencoder (VAE)'. This is an unacceptable level of rigor and detail for a conference like ICLR, and I would recommend the authors to be extremely upfront about all the design decisions, modeling architectures, and training objectives used in the work.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion process, reviewers a4Ud and 9UkZ confirmed that their concerns were addressed and gave a final score of 6.\\n\\nReviewer oH8U and the authors also engaged in several back-and-forth discussions, primarily about results on new datasets, different backbone architectures, and experimental settings to ensure fair comparison. Reviewer oH8U maintained their score of 5, and from what I've seen, the authors provided many more results during the discussion, but most of these results are only 1-2% better than the baselines and ablations, so without rigorous statistical tests I'm not sure if these results are significant.\\n\\nFor reviewer HrvF, they also maintained their score of 5, and after looking through the rebuttals I find that the concern they raised on unclear model architecture and overall poor clarity in the paper remains.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 1. Lack of Holistic Experimental Results**\\n\\nThank you for your valuable suggestions regarding the holistic validation of our methods. In response, we have incorporated extensive experiments in the revised manuscript, aligning with your recommendations. Below, we summarize the key questions addressed in these experiments:\\n\\n> Question 1a. More modalities.\\n\\nIn addition to our initial audio-visual methods, we expand our experiments to include additional modalities. Specifically, we validate RGB + Optical Flow on the UCF101 dataset [1] for action recognition, Audio + Text on the UR-FUNNY dataset [2] for humor detection, and mRNA + methylation data on the ROSMAP dataset [3] for Alzheimer’s Disease diagnosis. \\nWe updated Table 2 to replace the synthetic dataset AV-MNIST with the UCF-101 dataset to better showcase the effectiveness of our method across real-world datasets, and added other experiments in Appendix B.3. These experimental results demonstrate the flexibility of our approach across different modalities and complex tasks.\\n\\n| Dataset | UR-FUNNY | | UCF | | ROSMAP | |\\n|----------|:--------:|:-----:|:-----:|:-----:|:------:|:-----:|\\n| Metric | ACC | F1 | ACC | F1 | ACC | F1 |\\n| Joint | 63.8 | 63.7 | 78.8 | 78.0 | 84.0 | 83.8 |\\n| Ensemble | 64.1 | 64.0 | 82.3 | 81.8 | 83.0 | 83.0 |\\n| DMI | **65.0** | **64.7** | **84.2** | **83.9** | **84.9** | **84.9** |\\n\\n> Question 1b. Richer temporal information.\\n\\nThis is an insightful question about considering richer temporal information. To address this, we expanded our application of the method across more frames within the CREMA-D and KS datasets. Our findings indicate that richer temporal information enhances task performance to some degree. Moreover, our proposed DMI paradigm still demonstrates improvements with this more abundant temporal information. 
Detailed results and analyses are provided in Appendix B.3 of the revised manuscript.\\n\\n| Temporal | CREMA-D-2frame | | KS-8frame | |\\n|----------|----------------|-------|-----------|-------|\\n| Metric | ACC | F1 | ACC | F1 |\\n| Joint | 77.8 | 78.3 | 85.3 | 85.3 |\\n| Ensemble | 77.7 | 78.2 | 87.1 | 87.1 |\\n| DMI | **78.5** | **79.3** | **87.5** | **87.5** |\\n\\n> Question 1c. Larger scale.\\n\\nAfter careful consideration of time and feasibility, we choose the VGGsound dataset[4], which encompasses over 210,000 entries across 309 categories. Detailed results are presented in the following table and further detailed in Appendix B. These results demonstrate that our DMI method significantly outperforms existing approaches on large-scale datasets.\\n\\n\\n| VGGsound | Joint | Ensemble | DMI |\\n|----------|-------|----------|------|\\n| ACC | 55.1 | 56.7 | **58.5** |\\n| F1 | 53.3 | 55.1 | **57.0** |\\n\\n> Question 1d. Different backbone.\\n\\nThe chosen backbones, ResNet and LeNet, are widely utilized in multimodal research and are consistent with prior studies [5,6]. Also, we validate our method on the Hierarchical Multimodal Transformer backbone [7] as detailed in Table 3, which serves to further validate the scope of our approach.\\nAdditionally, we conducted expanded experiments on the KS dataset using ResNet34 and on the CMU-MOSEI dataset using an LSTM backbone. These experiments further verify the effectiveness of our method. Detailed results are presented in Appendix B.3.\\n\\n| Dataset | MOSEI | | KS | |\\n|----------|:-----:|:----:|:--------:|:----:|\\n| Backbone | LSTM | | ResNet34 | |\\n| Metric | ACC | F1 | ACC | F1 |\\n| joint | 62.4 | 62.2 | 86.0 | 85.8 |\\n| ensemble | 62.0 | 61.7 | 86.8 | 86.3 |\\n| DMI | **62.9** | **62.9** | **87.8** | **87.7** |\\n\\n**Question 2. Specific differences in MMML.** \\n\\nThank you for your comment. To ensure a fair comparison across different methods, we standardized the backbone across all approaches, followed by [6], applying distinct strategies of different methods to this common framework. For the MMML approach, we incorporated its fusion module on the aligned backbone. This module includes both an attention mechanism and a multi-loss strategy.\"}", "{\"comment\": \"Dear Authors,\\n\\nSorry for the late response and thanks for your rebuttal and hard work. Most of my concerns and questions regarding the method section are clarified. I appreciate it. However, I will maintain my current rating as I believe it still reflects my overall assessment.\"}", "{\"summary\": \"The paper introduces an information-theoretic framework that shows the importance of learning from different interactions and the shortcomings of the traditional multimodal methods. To solve the issue, the paper proposed a decomposition-based multimodal interaction learning model that disentangles the interactions within the multi-modal data. Experiments on the various datasets and straightforward visualizations show the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The visualizations are clear and straightforward. The figures are well-designed.\\n2. Theoretic analysis demonstrates the importance of learning from different interactions and the shortcomings of traditional multimodal methods.\\n3. The proposed method shows promising performance on most of the datasets and tasks against competitive SOTA baselines.\", \"weaknesses\": \"1. 
The method section lacks sufficient detail and thus is a bit confusing to me. Please refer to Questions 1-6.\\n2. DMI\\u2019s improvements over the best baseline on AV-MNIST and CMU-MOSEI (V+T) are not statistically significant.\", \"questions\": \"1. Line 322, why does synergy only contain task-irrelevant information? My understanding is that the integration of synergy has additional information to unimodal data which is task-relevant.\\n2. Equation 13, V and M are symmetric, which implies they can be interchanged without affecting the outcome. Given that, how do you control the learning to make one vector task-relevant and the other task-irrelevant? (which is mentioned in Line 352)\\n3. Line 368, training stage 1 lacks sufficient detail. Can you explain how you warm-up the encoder? Specifically, what are the input, output, and objective during this phase? Additionally, Figure 3 shows two encoders within distinct decomposition components. Are both encoders warmed up in this stage?\\n4. Line 371, training stage 2, \\u201cwe freeze the encoder and focus solely on training the decomposition module\\u2026\\u201d Could you clarify if this means that all encoders are frozen, with only the decoders being fine-tuned in this stage?\\n5. In Figure 3, the decomposition modules include decoders. Does the learning objective (Equations 13 and 14) have a reconstruction loss to guide the decoders during training?\\n6. After decomposition is complete, how is the pre-trained model utilized for downstream tasks? Specifically, is there any further fine-tuning involved, or do you directly apply the representations learned from the decomposition modules to the downstream tasks?\\n7. Line 416, can you explain the several modifications specific to each modality in ResNet18?\\n8. For MOSEI, are you working on sentiment analysis or emotion recognition?\\n9. Tables 2 and 3, performance of the uni-modal method is missing.\\n10. Section 4.4 ablation study, the study shows that using a single decomposition method (DMI-TC and DMI-CD) yields worse performance than the approach without any decomposition at all (DMI-FC). Can you explain the reason?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**Question 3. Ablation study setting.**\\n\\nThank you for your comment. The primary goal of our ablation study is to verify the effectiveness of each module within our proposed decomposition method. Specifically, we aim to understand how these modules perform across various datasets and various data scales.\\nWe conducted extensive experiments on the CMU-MOSEI dataset, which includes both audio and text (A+T) and Visual+Text (V+T) modalities. The results in the Table below also demonstrate the effectiveness of each module. We have updated the results in Appendix B.3.\\n\\n| Dataset | MOSEI(A+T) | MOSEI(V+T) |\\n|----------|------------|------------|\\n| DMI-FC | 62.6 | 61.6 |\\n| DMI-CD | 61.4 | 63.2 |\\n| DMI-TD | 61.3 | 62.2 |\\n| DMI | **63.1** | **63.4** |\\n\\n**Question 4. Extended to modality larger than 2.**\\n\\nThank you for this valuable suggestion. Our proposed Decomposition-based Multimodal Interaction learning (DMI) approach is adaptable to scenarios involving three modalities. 
By implementing only the Task-related Decomposition on DMI (DMI-TD, illustrated in Figure 4), we can extend our framework to accommodate three modalities.\", \"we_have_conducted_empirical_evaluations_on_two_datasets_that_each_incorporate_three_modalities\": \"MOSEI, which includes Visual (V), Audio (A), and Text (T) modalities, and UCF101, which consists of RGB, Optical Flow (OF), and Frame Difference (Diff) modalities. Detailed results of these experiments are presented in Appendix B.3.3, verifying that our method remains effective when extending to three modalities.\\n\\n\\n| Dataset | MOSEI (V+A+T) | | UCF (RGB+OF+Diff) | |\\n|:--------:|:-------------:|:-----:|:-----------------:|:-----:|\\n| Metric | ACC | F1 | ACC | F1 |\\n| joint | 63.3 | 63.2 | 78.6 | 78.2 |\\n| ensemble | 63.4 | 62.7 | 84.4 | 83.9 |\\n| DMI-TD | **64.3** | **64.5** | **84.8** | **84.2** |\\n\\n\\n[1] K. Soomro, \\u201cUcf101: A dataset of 101 human actions classes from videos in the wild,\\u201d *arXiv preprint arXiv:1212.0402*, 2012. \\n\\n[2] M. K. Hasan, W. Rahman, A. Zadeh, J. Zhong, M. I. Tanveer, L.-P. Morency et al., \\u201cUr-funny: A multimodal language dataset for understanding humor,\\u201d *arXiv preprint arXiv:1904.06618*, 2019. \\n\\n[3] P. L. De Jager, Y. Ma, C. McCabe, J. Xu, B. N. Vardarajan, D. Felsky, H.-U. Klein, C. C. White, M. A. Peters, B. Lodgson et al., \\u201cA multi-omic atlas of the human frontal cortex for aging and alzheimer\\u2019s disease research,\\u201d *Scientific data*, vol. 5, no. 1, pp. 1\\u201313, 2018. \\n\\n[4] H. Chen, W. Xie, A. Vedaldi, and A. Zisserman, \\u201cVggsound: A large-scale audio-visual dataset,\\u201d in *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing* (ICASSP). IEEE, 2020, pp. 721\\u2013725. \\n\\n[5] Y. Fan, W. Xu, H. Wang, J. Wang, and S. Guo, \\u201cPmr: Prototypical modal rebalance for multimodal learning,\\u201d in *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2023, pp. 20 029\\u201320 038. \\n\\n[6] P. P. Liang, Y. Lyu, X. Fan, Z. Wu, Y. Cheng, J. Wu, L. Chen, P. Wu, M. A. Lee, Y. Zhu et al., \\u201cMultibench: Multiscale benchmarks for multimodal representation learning,\\u201d *arXiv preprint arXiv:2107.07502*, 2021. \\n\\n[7] P. Xu, X. Zhu, and D. A. Clifton, \\u201cMultimodal learning with transformers: A survey,\\u201d *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2023.\"}", "{\"summary\": \"The paper investigates the role of multimodal interaction in multimodal learning. It provides information-theoretical evidence that learning all types of interactions (redundancy, uniqueness, synergy) is necessary for good performance and shows that naive joint and ensemble learning cannot learn all types of interactions equally well. Motivated by this finding, the paper proposed the Decomposition-based Multimodal Interaction learning (DMI) paradigm that uses variation-based approach to decompose multimodal information into different types of interactions learned via a three-phase training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Novel theoretical analysis**: the paper presents an original theoretical analysis on the role of multimodal interaction in multimodal learning, being the first to establish a theoretical connection between multimodal interaction and multimodal learning performance. 
The analysis is clear, well-written, accompanied by sound proofs, which provides theoretical insights to understanding the importance of multimodal interaction in addition to prior works. The paper also proves that naive joint and ensemble learning are not able to capture all types of necessary interactions, resulting in suboptimal performance and generalization gap.\\n2. **New learning paradigm for interaction learning**: the paper proposes a new learning paradigm, DMI, designed to explicitly disentangle and capture the three types of interactions and corresponding training strategy for DMI learning. Evaluation on real-world datasets corroborate some claims of effectiveness of this learning paradigm.\", \"weaknesses\": [\"1. **Generally weaker experiment section**: the paper perform a relatively comprehensive evaluation of relevant methods that also target at capturing interactions in multimodal learning; however, the analysis on the results is generally very limited.\", \"There is no comparison/analysis between the proposed DMI and regulation methods and other interaction methods (the paper only mentions that the improvement attributes to the effective decomposition and holistic learning of different interactions, which is weakly supported by ablations, further analysis, or the additional experiment details in appendix). Meanwhile, the reviewer is not sure whether it could be a fair comparison (e.g. are modality-specific encoders and model size standardized?)\", \"The improvement from existing best-performing methods / baselines seems marginal ($\\\\le$ 2% in accuracy), which could undermine the claims about importance of learning interactions in multimodal learning.\", \"The choice of evaluation datasets / benchmarks are not justified and are also limited in terms of scope, given a wide range of multimodal datasets / benchmarks exist.\", \"2. **Documentation of the synthetic setup needs more details**: the section on validating DMI on synthetic data and Figure 1 are interesting, but the documentation of data generation and experimental setup need more details. For example, the following aspects need more clarifications:\", \"How do the informative dimensions \\\"maintain statistical relationships with the label\\\" specifically? What are the statistical relationships that represent the three types of interactions respectively?\", \"How is different interaction combination (e.g. $\\\\frac14 U+\\\\frac34 R$, Figure 1) achieved in the data generation?\", \"How many synthetic data are used in this validation setting? What is the DMI architecture evaluated in this setting?\", \"Comparing to CVX, DMI indeed show better approximation but the evaluation is limited to one complex setting AND+XOR. Maybe adding a few evaluations on other complex logical relations (e.g. OR+XOR, AND+XOR+OR) could strengthen the claim.\", \"In general, the reviewer agrees that if the paper is primarily a theoretical contribution, it does not need to incorporate evaluations on real-world benchmarks as comprehensively as other empirical studies, but the reviewer also believes it is necessary to document the details of data generation and experimental setup of the presented results in the appendix, which is currently missing. 
The reviewer believes that increasing the completion and soundness of the experiments and analysis section (compared to the theoretical section) will make this submission much stronger.\"], \"questions\": [\"General questions / Clarifications:\", \"Major questions mentioned in the Weaknesses section, including more details on experimental setup for fair comparison across methods, justification / limitation in the choice of evaluation datasets, more details on synthetic data generation and more validation settings\", \"Can the author clarify why synergistic information are considered as task-irrelevant features? Synergy can be very important for pure-synergy tasks such as XOR.\", \"**Concerns about DMI assumption**: DMI assumes a complete separation of synergistic features and features of other interactions (redundancy, uniqueness). However, can overlaps in features exist? For example, in detecting sarcasm, if a person is criticizing with a smiley face, the positive sentiment in facial expression can be unique in the visual modality, while the negative sentiment in speech is only available in the text modality, but both these features are also necessary for the inconsistency, i.e. the synergistic information, which is essential for sarcasm detection. Is DMI able to handle such cases, and how does it perform on sarcasm detection?\"], \"notations\": [\"A.1 Proof for Proposition 3.1: the first term should be $\\\\log\\\\frac{\\\\mathbb{E}_\\\\boldsymbol{c}p(z,y|\\\\boldsymbol{c})}{p(z,y|\\\\boldsymbol{c})}$ instead of $\\\\frac{\\\\log\\\\mathbb{E}_cp(z,y|\\\\boldsymbol{c})}{\\\\log p(z,y|\\\\boldsymbol{c})}$ in equation 17, 18\", \"A.5 Explanation of decomposition: inconsistent notation $u,v$ for the first independent features\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Feedback Before Rebuttal Deadline\", \"comment\": \"Dear Reviewer oH8U,\\n\\nWe would like to sincerely thank you for reviewing our paper and providing valuable feedback. In response to your suggestion, we have added experimental results and clarified the experimental setting in the revised manuscript.\\n\\nPlease feel free to reach out if you have any further questions or require additional clarification before the rebuttal period concludes (**less than one day remains**). \\n\\nThank you once again for your insightful comments.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thanks for reviewing our work\", \"comment\": \"Dear reviewers, we sincerely appreciate all your constructive comments and encouraging remarks (e.g., Reviewer HrvF: *Interaction Decomposition Module is very creative...informative*; Reviewer oH8U: *clear breakdown and explanation of the R, U, and S*; Reviewer a4Ud: *promising performance*; Reviewer 9UkZ: *Novel theoretical analysis and New learning paradigm for interaction learning*). Although some absence of experimental details and the need for more comprehensive experiments raised concerns among the reviewers, we have carefully addressed these points during the revision process, making substantial enhancements to strengthen these aspects. Below, we summarize the key contributions and revisions made to our manuscript:\\n\\n**Key Contributions:**\\n\\nIn this paper, we have provided a comprehensive theoretical analysis that emphasizes **the importance of considering various multimodal interactions in multimodal learning**. 
This analysis highlights the crucial role that learning from holistic interactions plays in improving multimodal learning performance. Building upon this, we introduce a novel learning paradigm, the Decomposition-based Multimodal Interaction learning(DMI) framework, **which leverages interaction decomposition to enhance multimodal learning**. Our method decomposes multimodal interactions into three distinct types: redundancy, uniqueness, and synergy, enabling the model to effectively learn from each type of interaction.\\n\\n**Revisions:**\\n\\n*Explicit Elaboration about Method and Experimental Details*: \\n\\nWe have refined unclear explanations and notations in our analysis (Section 3.3, Table 1), addressed inaccuracies in our model descriptions (Section 3.4 and Appendix B.2), and provided a detailed description of the synthetic dataset used (Appendix B.4).\\n\\n*Expanded Experimentation*:\\n\\nWe have conducted extensive experiments involving larger datasets (VGGsound), multiple modalities (UCF RGB + Optical Flow, ROSMAP mRNA + METH), and different tasks (UR-FUNNY for humor detection). We have expanded our studies to include more than two modalities (UCF & MOSEI), introduced richer temporal information, and changed the backbone validated on datasets like KS. Detailed descriptions of these experiments are provided in Appendix B.3.\"}", "{\"comment\": \"Thank you for the efforts in the additional experiments and clarifications. They have mostly addressed the questions I raised, especially strengthening the experiment section. I think with incorporating the changes, the submission now can be considered as a good contribution to the research in improving multimodal learning from a better understanding (theory) and capturing (the proposed framework) of multimodal interactions. Therefore, I have raised my rating to 6 (soundness to 3). My other ratings remain unchanged.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 4. Does overlap lie in DMI assumption?**\\n\\nThank you for this invaluable question. To illustrate this point, let us consider the use of a smiley face in sarcasm detection within the visual modality. The interpretation of the smiley face as conveying unique information is highly context-dependent. For example, if every data point includes a smiley face, there is no discernible correlation between its presence and sarcasm\\u2014actually, the smiley face conveys no information about sarcasm in this scenario. Conversely, if the smiley face appears exclusively under specific conditions, such as during moments of happiness or sarcasm, it then strongly correlates with sarcasm, thus illustrating the unique interaction our method aims to capture.\\n\\nOur approach is designed to decompose each data point into multiple types of interactions. In the scenario described, the uniqueness interaction captures task-related correlations, while the synergy interaction identifies emergent correlations.\\n\\nWe have also applied our methodology to the humor detection dataset UR-FUNNY, which is known to contain an amount of synergy information [7]. The experimental results demonstrate that our method can effectively capture this synergy information, leading to improved performance.\\n\\n| Dataset | UR-FUNNY (V+T) | |\\n|:--------:|:--------------:|:-----:|\\n| Method | ACC | F1 |\\n| Joint | 63.8 | 63.7 |\\n| Ensemble | 63.2 | 63.2 |\\n| DMI | 65.0 | 64.7 |\\n\\n\\n[1] P. P. Liang, Y. Lyu, X. Fan, Z. Wu, Y. Cheng, J. Wu, L. Chen, P. Wu, M. A. Lee, Y. 
Zhu et al., \\u201cMultibench: Multiscale benchmarks for multimodal representation learning,\\u201d *arXiv preprint arXiv:2107.07502*, 2021. \\n\\n[2] H. Wang, S. Luo, G. Hu, and J. Zhang, \\u201cGradient-guided modality decoupling for missing-modality robustness,\\u201d in *Proceedings of the AAAI Conference on Artificial Intelligence*, vol. 38, no. 14, 2024, pp. 15 483\\u2013 15 491. \\n\\n[3] M. K. Hasan, W. Rahman, A. Zadeh, J. Zhong, M. I. Tanveer, L.-P. Morency et al., \\u201cUr-funny: A multimodal language dataset for understanding humor,\\u201d *arXiv preprint arXiv:1904.06618*, 2019. \\n\\n[4] K. Soomro, \\u201cUcf101: A dataset of 101 human actions classes from videos in the wild,\\u201d *arXiv preprint arXiv:1212.0402*, 2012. \\n\\n[5] P. L. De Jager, Y. Ma, C. McCabe, J. Xu, B. N. Vardarajan, D. Felsky, H.-U. Klein, C. C. White, M. A. Peters, B. Lodgson et al., \\u201cA multi-omic atlas of the human frontal cortex for aging and alzheimer\\u2019s disease research,\\u201d *Scientific data*, vol. 5, no. 1, pp. 1\\u201313, 2018. \\n\\n[6] H. Chen, W. Xie, A. Vedaldi, and A. Zisserman, \\u201cVggsound: A large-scale audio-visual dataset,\\u201d in *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing* (ICASSP). IEEE, 2020, pp. 721\\u2013725. \\n\\n[7] P. P. Liang, Y. Cheng, X. Fan, C. K. Ling, S. Nie, R. Chen, Z. Deng, F. Mahmood, R. Salakhutdinov, and L.-P. Morency, \\u201cQuantifying & modeling multimodal interactions: An information decomposition framework,\\u201d in *Advances in Neural Information Processing Systems*, 2023.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 1. Expression in formulation**\\n\\n> Question 1a. Theoretical contributions of Lemma 3.3.\\n\\nThank you for your insightful comment. Lemma 3.3 describes that under redundancy-type interactions, the modality ensemble approach can reduce the generalization gap between the theoretical mutual information, $I(Z;Y|\\\\textbf{c})$, and the empirical mutual information, $I_S(Z;Y|\\\\textbf{c})$. The ensemble paradigm aims to solely learn from each unimodality to complete the task. Consequently, we introduce $\\\\tilde{S}$ (referenced in Lemma 3.3) to provide a more accurate representation of the learning paradigm. Due to the nature of redundancy interaction, both modality contains sufficient task-related information. Therefore, every sample within $\\\\tilde{S}$ contributes to training the multimodal task. \\u00a0With $2n$ samples available in $\\\\tilde{S}$, according to Proposition 3.2, a reduction in the generalization gap is achieved. This reduction in the generalization gap implies that enhancing the empirical mutual information contributes to an increase in the theoretical mutual information, similar to the Probably Approximately Correct (PAC) learning theory. Hence, we present in Table 1 that modality ensemble achieves a tighter upper bound than joint learning under redundancy-type interactions.\\n\\n> Question 1b. Confusion of big-O notion\\n\\nThank you for your invaluable comment. Our analysis of Big-O notation is to illustrate the differences in the upper bounds of the generalization gap (as defined in Equation 8) between the modality ensemble and joint learning paradigms. This presentation may have lacked clarity. To address this, we have revised the relevant equation to improve both its clarity and simplicity. 
The revised Table 1 is presented below: \\n| \\t| Redundancy \\t| Uniqueness \\t| Synergy \\t|\\n| ---------- | ------------- | ---------------------- | -------------- |\\n| Joint \\t| $\\\\leq \\\\xi + \\\\sqrt{\\\\frac{\\\\omega}{n}} $ \\t| $\\\\leq \\\\xi + \\\\sqrt{\\\\frac{\\\\omega}{n}} $ \\t| $\\\\leq \\\\xi + \\\\sqrt{\\\\frac{\\\\omega}{n}} $ \\t|\\n| Ensemble \\t| $\\\\leq \\\\xi + \\\\sqrt{\\\\frac{\\\\omega}{2n}} $ \\t| $\\\\leq \\\\xi + \\\\sqrt{\\\\frac{\\\\omega}{n}} $ \\t| $\\\\geq \\\\max \\\\left(I^{syn}_S(Z^{(1)}; Y), I^{syn}_S(Z^{(2)}; Y)\\\\right)$ \\t|\\n\\nThe revised table now more accurately represents the differences, emphasizing that the main variance in redundancy is the factor $\\\\sqrt{\\\\frac{\\\\omega}{n}}$ for joint learning compared to $\\\\sqrt{\\\\frac{\\\\omega}{2n}}$ for modality ensemble. Further details of the modifications are discussed on Page 5, Section 3.3 of our manuscript. Thank you again for this invaluable comment.\\n\\n**Question 2. Explanation of model architecture.**\\n\\nThank you for raising this point. The architecture of our proposed DMI (see Figure 3) consists of unimodal encoders $\\\\phi^{(m)}$ to obtain unimodal representations $Z^{(m)}$, and two decomposition modules, decomposing the representations into different interactions $R, U^{(1)}, U^{(2)}, S$. The unimodal encoder varies for different tasks. The decomposition module has an architecture similar to a Variational Autoencoder (VAE). Each decomposition module is structured on a VAE framework, where the encoders, composed of Multi-Layer Perceptrons (MLPs), predict the mean and variance. Conversely, the decoders are designed as multilayer networks to ensure minimal information loss during the decomposition process. The alignment of features across modalities is enforced by minimizing the Kullback-Leibler (KL) divergence between the corresponding distributions. We have added more detailed descriptions in Appendix B.2.\\n\\n**Question 3. The idea the paper conveys.**\\n\\nThank you for your insightful observation. In addition to your mention of how **interaction decomposition improves multimodal interaction learning**, we have provided a comprehensive theoretical analysis that underscores **the importance of considering various multimodal interactions within multimodal learning**. This analysis explains the crucial role that learning from holistic interactions plays in improving multimodal learning performance. It also illuminates the underlying mechanics of our interaction decomposition method, further justifying its application.\"}", "{\"title\": \"Look forward to your feedback!\", \"comment\": \"Dear Reviewer HrvF,\\n\\nThank you once again for your valuable comments and suggestions. We would appreciate your confirmation on whether our responses have addressed your concerns. Please let us know if you have any further questions or comments.\"}", "{\"summary\": \"The paper presents a method to explicitly decouple the redundancy (R), synergy (S), and uniqueness (U) of the information in a pair of modalities when doing multimodal learning. The proposed method first breaks down the information from each modality into task-specific (T) and task-irrelevant (V) information, followed by aligning the redundant parts of the information in T1 and T2 (R) while keeping the unique parts separate (U1 and U2), while trying to learn the synergy between V1 and V2. 
To improve the quality of the decomposition, the method first trains the modality encoders, following by freezing them and only training the decomposition module, and finally training the whole model end-to-end.\\n\\nThe paper presents results on audiovisual datasets like CREMA-D (for emotion recognition in an acted setting), AV-MNIST, Kinetics Sounds (for sound event detection), amd CMU MOSEI (for multimodal sentiment) and compares against baselines based on joint/ensemble learning, and those based on unimodal regulation and multimodal interaction.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"I thank the authors for sharing their ideas.I think that some of the contributions of the paper are interesting, well thought out, and clearly presented:\", \"I found the clear breakdown and explanation of the R, U, and S type of multimodal interactions and how they can be explicitly optimized as part of a loss function while doing multimodal learning to be a strength of the paper. Intuitively, this idea makes sense, and has been illustrated well to the reader in the equations and the figures.\", \"The chosen baselines make sense.\", \"The presented results on the chosen datasets are strong when compared to the chosen baselines.\", \"The technical details of the experiments are clearly presented, and the setup is understandable.\"], \"weaknesses\": [\"While I think that the core motivation of the paper is solid, and the proposed approach makes sense, I believe where the paper in its current state falls short is the rigor and comprehensiveness of the experiments. The same method (along with the baselines), when demonstrated with more convincing technical choices/design would make for a much stronger paper.\", \"The paper presents itself as a general contribution to multimodal learning, however the demonstrated experiments are only on 3 limited audio-visual datasets (and CMU MOSEI in a limited way for audio-text and visual-text). I believe that only using these datasets is not as strong a result as, for example, also showing strong and comprehensive results on vision+text, audio+vision+text, or other sensory modalities like LIDAR, ultrasonic sensors, physiological sensors (EEG / eye tracking / EMG etc). The presented results are fine if the claim of the paper was slightly more focused to the audiovisual setting (or to only CMU MOSEI for multimodal sentiment analysis). However, I do not think the experiments are \\\"Comprehensive\\\" and \\\"holistic\\\" like the authors have claimed.\", \"Even within the audiovisual datasets, I have significant concerns about the methodology used to extract information from each modality. For example, in the Kinetics sound dataset (which itself is dominated by the audio modality), using 1 frame per second (and a total of 10 frames per video) is not ideal (as opposed to using every frame at 25 or 30 FPS).\", \"For the CREMA-D datasets, the authors take 1 single frame from the entire video. This is incredibly limiting for affect recognition (especially on this specific dataset), due to not capturing the temporal dynamics of emotion (such as the onset-apex-offset (see https://www.researchgate.net/figure/Typical-development-of-a-facial-expression-with-onset-apex-and-offset-from-the-survey_fig1_326023674) dynamics). 
Using the temporal information from the video is critical here in my opinion (it just happens to be that in an acted dataset like CREMA-D the *acted* facial expression usually coincides with the center timestep of the video) to have a method that is more generally applicable.\", \"I also think that the scale of the chosen datasets was limited. For example, to demonstrate the approach on solely audiovisual interactions, there are alternative datasets like AVSpeech, LRS, LRW, Audioset, along with any of the datasets in https://openaccess.thecvf.com/content/CVPR2024/papers/Singh_Looking_Similar_Sounding_Different_Leveraging_Counterfactual_Cross-Modal_Pairs_for_Audiovisual_CVPR_2024_paper.pdf\", \"In my opinion, the choices of the backbones (ResNet18 and LeNet) are also sufficient for basic experiments, but not to draw holistic conclusions.\"], \"questions\": [\"Are there any direct comparisons to the numbers presented in the baselines? E.g. Wu et al (2024) in MMML do an extensive set of results on CMU MOSI and CMU MOSEI, whereas their method\\u2019s baseline results in this paper are quite a bit lower. What specific differences exist in these two experimental settings?\", \"CMU MOSEI is by far the most appropriate dataset to demonstrate the method out of the chosen datasets, since it allows for multiple different modality pairs. Why did the authors choose to do the ablations on the smaller CREMA dataset and (larger) Kinetics sounds, which are both audiovisual only, instead of on MOSEI?\", \"Could the method be easily extended to interactions between additional modalities than just 2? For example, multiple visual streams (along with depth, LIDAR etc) in robotics, or just audio + vision + text in MOSEI etc. Why or why not?\", \"Certain typos (CRAME instead of CREMA) and broken links (e.g. hyperref link in Figure 1, citations in the tables) were broken\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Look forward to your feedback!\", \"comment\": \"Dear Reviewer HrvF,\\n\\nThank you once again for your valuable comments and suggestions. We would appreciate your confirmation on whether our responses have addressed your concerns. Please let us know if you have any further questions or comments.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 4. Other paradigms over joint learning and modality ensemble.**\\n\\nThank you for highlighting these points. In supervised learning that involves multiple modalities, the central challenge lies in leveraging multimodal data effectively to accomplish multimodal tasks. According to previous literature [1], joint learning of multiple modalities in a shared space and separate learning within each modality's own space are two primary paradigms in multimodal learning. Based on this, two prevalent paradigms\\u2014joint learning and modality ensemble\\u2014represent different approaches to this challenge. The modality ensemble approach utilizes individual modalities independently to complete tasks, whereas joint learning integrates all modalities collectively for task completion. Additionally, in the domain of self-supervised learning, alignment, and contrastive learning are commonly employed paradigms. The insights gained from analyses within these paradigms are invaluable and will be considered in our future research.\\n\\n[1] T. Baltrusaitis, C. Ahuja, and L.-P. 
Morency, \\u201cMultimodal machine learning: A survey and taxonomy,\\u201d *IEEE Transactions on pattern analysis and machine intelligence*, vol. 41, no. 2, pp. 423\\u2013443, 2018.\"}", "{\"title\": \"Look forward to your feedback before deadline!\", \"comment\": \"Dear Reviewer a4Ud,\\n\\nWe would like to sincerely thank you for your time and effort in reviewing our paper and providing invaluable feedback. In response to your suggestions, we have clarified the method architecture and provided a more detailed explanation of the experimental setup, along with extended experimental results in the revised manuscript.\\n\\nIf you have any further questions or require additional clarification, we would greatly appreciate it if you could inform us before the rebuttal period ends (**less than one day remaining**).\\n\\nThank you once again for your insightful comments.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Look forward to your feedback!\", \"comment\": \"Dear Reviewer a4Ud,\\n\\nThanks again for your insightful suggestions and comments.\\u00a0We would appreciate knowing if our responses have fully addressed your concerns. We are happy to answer any further questions or comments you may have.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 2. Questions with the experiments:**\\n\\n> Question 2a. Limited improvements on AV-MNIST and CMU-MOSEI (V+T).\\n\\nThank you for pointing this out. The AV-MNIST and CMU-MOSEI (V+T) tasks are inherently challenging to learn. AV-MNIST is a synthetic dataset derived from MNIST, which has shown only modest improvements with previous methods compared to joint training. For example, prior work reports the best performance at 72.8%, while joint training (LF) achieves 71.7% [3]. In our experiments, our method surpasses joint training by 0.9%, validating the effectiveness of our approach. \\nFurthermore, we replace this synthetic dataset with the real-world dataset, UCF-101, which includes Optical Flow (OF) and RGB modalities, as detailed in Table 2. Below, we present partial results, demonstrating the effectiveness of our method across real-world datasets:\\n\\n| Dataset | Metric | RGB | OF | Joint | Ensemble | DMI |\\n|----------|--------|-------|-------|--------|----------|-------|\\n| UCF101 | ACC | 76.9 | 67.8 | 78.8 | 82.3 | **84.2** |\\n| | F1 | 76.1 | 67.6 | 78.0 | 81.8 | **83.9** |\\n\\nBesides, sentiment analysis on the CMU-MOSEI dataset is also challenging. Previous studies have reported a significant improvement of 1.5% over joint learning in binary classification tasks across three modalities [3]. In this paper, we consider a tougher task with only two modalities. Our method achieves a 1.2% increase in the (A+T) setup for a three-way classification task, and demonstrates improvement in the (V+T) modality, whereas some comparison methods experience a drop in performance. This highlights both the complexity of learning in this setup and the effectiveness of our proposed approach.\\n\\n> Question 2b. Modifications specific to each modality in ResNet18.\\n\\nThank you for your question. The modification involves adapting the network to accommodate modalities that differ from the typical three-channel RGB input. Specifically, for the audio modality, which has a single channel, and the optical flow modality, which consists of two channels, we modify the channel dimension of the first layer of ResNet18 to correspond with the channel dimensions of these modalities.\\n\\n\\n> Question 2c. 
Details about MOSEI task.\\n\\nThank you for the inquiry. The task conducted on MOSEI is a sentiment analysis task. These samples are divided into positive, negative, and neutral, following the setting in the previous study [4].\\n\\n> Question 2d. The performance of the unimodal method is missing.\\n\\nThank you for your suggestion. We have supplemented the unimodal performance in Table 2 and Table 3 in the Experiment section, highlighting the improvements achieved by the multimodal methods.\\n\\n> Question 2e. Why DMI-TC and DMI-CD worse than DMI-FC.\\n\\nThank you for your valuable comment. These three methods are based on our proposed DMI architecture. For DMI-FC, we adopt the principle of decomposition but replace the variational decomposition with a fully-connected layer. This can degrade the decouplement among modalities. DMI-TC and DMI-CD, on the other hand, represent partial decompositions within the DMI framework. The superior performance of DMI-FC showcases the significance of comprehensive decomposition. But with the decoupling among modalities, DMI can achieve better decomposition and thus achieve better performance.\\n\\n[1] P. P. Liang, Y. Cheng, X. Fan, C. K. Ling, S. Nie, R. Chen, Z. Deng, F. Mahmood, R. Salakhutdinov, and L.-P. Morency, \\u201cQuantifying & modeling multimodal interactions: An information decomposition framework,\\u201d in *Advances in Neural Information Processing Systems*, 2023. \\n\\n[2] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, \\u201cDeep variational information bottleneck,\\u201d *arXiv preprint arXiv:1612.00410*, 2016. \\n\\n[3] P. P. Liang, Y. Lyu, X. Fan, Z. Wu, Y. Cheng, J. Wu, L. Chen, P. Wu, M. A. Lee, Y. Zhu et al., \\u201cMultibench: Multiscale benchmarks for multimodal representation learning,\\u201d *arXiv preprint arXiv:2107.07502*, 2021. \\n\\n[4] C. Hua, Q. Xu, S. Bao, Z. Yang, and Q. Huang, \\u201cReconboost: Boosting can achieve modality reconcilement,\\u201d *arXiv preprint arXiv:2405.09321*, 2024.\"}", "{\"title\": \"Thank you!\", \"comment\": \"We greatly appreciate your positive feedback and are pleased to hear that your questions have been addressed.\\n\\nThank you again for your valuable insights. If you require any further clarification or additional experiments, please feel free to reach out to us.\"}", "{\"title\": \"Response by authors\", \"comment\": \"Thank you for your thoughtful responses! Below are our replies to your comments:\\n\\n> Prototypical emotional expression in CREMA-D dataset.\\n\\nThank you for your valuable suggestion. We have conducted experiments on the CREMA-D dataset using 8 frames and observed that increasing the number of frames significantly enhances performance on this emotion recognition task. The experimental results validate the effectiveness of our method when incorporating varying amounts of temporal information. These findings are discussed in Appendix B.3.4 of the revised manuscript.\\n\\n\\n| Temporal | CREMA-D-1Frames | | CREMA-D-2Frames | | CREMA-D-8Frames | |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Metric | ACC | F1 | ACC | F1 | ACC | F1 |\\n| Joint | 70.2 | 71.0 | 77.8 | 78.3 | 85.5 | 85.9 |\\n| Ensemble | 68.8 | 69.5 | 77.7 | 78.2 | 86.6 | 87.0 |\\n| DMI | **73.1** | **73.8** | **78.5** | **79.3** | **87.5** | **87.9** |\\n\\n> The settings of our experiment, and results in Table 3.\\n\\nWe appreciate your insightful comment regarding the experimental settings. 
Our method is compared against a variety of approaches, including methods with unimodal regulation (OGM, PMR, AGM) and those with architecture designs for fusion in interaction captioning (MBT, MIB, QMF, MMML). Conducting a fair comparison is challenging due to the differences in datasets and settings across these methods. To ensure a fair comparison, we follow the MultiBench benchmark [1], which is widely used. Following [1], we utilize a Transformer backbone on pre-extracted features, standardizing the backbones to enable consistent and fair comparison across different fusion methods. Additionally, we have clarified the criteria for backbone selection in Section 4.1 of the revised manuscript.\\n\\n> ViT backbones.\\n \\nThank you for raising this point. We have considered the incorporation of the Vision Transformer (ViT) backbone in our experiments. Specifically, both the visual and audio modalities are processed using a 4-layer ViT structure on the Kinetic-Sound dataset. The results for this experiment are presented in Table 3, as shown below. We clarify the backbone in Section 4.1 of the revised manuscript.\\n\\n| Dataset | Metric | Audio | Visual | Joint | Ensemble | OGM | PMR | AGM | MBT | MIB | QMF | MMML | DMI |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Kinetic-Sound | ACC | 50.5 | 50.9 | 67.9 | 69.3 | 68.9 | 68.0 | 68.9 | 69.9 | 63.1 | 70.6 | 65.3 | **70.8** |\\n| | F1 | 50.3 | 50.5 | 67.6 | 69.2 | 68.6 | 68.2 | 69.1 | 69.9 | 62.9 | 70.3 | 65.5 | **71.4** |\\n\\n[1] P. P. Liang, Y. Lyu, X. Fan, Z. Wu, Y. Cheng, J. Wu, L. Chen, P. Wu, M. A. Lee, Y. Zhu et al., \u201cMultibench: Multiscale benchmarks for multimodal representation learning,\u201d *arXiv preprint arXiv:2107.07502*, 2021.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 1. About real-world experiments:**\\n\\n> Question 1a. lack extensive empirical analysis.\\n\\nThank you for your valuable comment. At present, measuring interactions under real data conditions remains imprecise. This challenge is largely due to the lack of an accurate understanding of the interactions of each sample, which remains an open problem in the field. To address this, we have taken two approaches. First, we expanded our experimental scope significantly to demonstrate the versatility of our method across various contexts, which is detailed in Appendix B.3. The validation experiments include extensive modalities and tasks (Appendix B.3.2), backbones (Appendix B.3.5), three modalities (Appendix B.3.3), temporal dynamics (Appendix B.3.4), and more ablation analysis (Appendix B.3.6). Second, we conducted validation experiments on synthetic datasets with controllable interactions for each sample to more clearly illustrate the learning mechanisms of our method. Moving forward, with more accurate measurements of multimodal interactions to be investigated, we expect to gain a deeper understanding of our approach.\\n\\n\\n> Question 1b. Marginal improvement over some datasets.\\n\\nThank you for raising this point. These results were observed in the AV-MNIST and CMU-MOSEI (V+T) tasks, which are inherently challenging.\\nAV-MNIST, a synthetic dataset derived from MNIST, has traditionally seen only modest improvements with previous methods compared to joint training. For example, the best performance in prior work is 72.8%, compared to 71.7% for joint training [1]. 
In our experiments, our method achieved 73.5%, significantly surpassing the joint training benchmark of 72.6%, thereby validating the efficacy of our approach. Additionally, we have replaced this synthetic dataset with real-world data from the UCF-101 dataset, which includes Optical Flow and RGB modalities, as detailed in Table 2. We present partial results here, demonstrating the effectiveness of our method across real-world datasets.\\n\\n| | | RGB | OF | Joint | Ensemble | DMI |\\n|--------|-----|-------|-------|-------|----------|-------|\\n| UCF101 | ACC | 76.9 | 67.8 | 78.8 | 82.3 | **84.2** |\\n| | F1 | 76.1 | 67.6 | 78.0 | 81.8 | **83.9** |\\n\\nSentiment analysis tasks are particularly challenging on the CMU-MOSEI dataset, where even marginal improvements are valuable [2]. Previous studies have reported a maximum improvement of 1.5% over joint learning (LF-Transformer) in binary classification tasks across three modalities [1]. In this paper, we tackle a more challenging setup with only two modalities, making the task even more difficult. Our method achieves a 1.2% increase in the (A+T) setup for a three-way classification task. Moreover, our method demonstrates improvement in the (V+T) modality, whereas some comparison methods experience a drop in performance. This highlights both the complexity of learning in this setup and the effectiveness of our proposed approach.\\n\\n> Question 1c. Dataset, benchmarks are also limited in terms of scope.\\n\\nThank you for your suggestion. Existing datasets typically include Audio, Visual, and Text modalities, with sample sizes ranging from 5,000 to 30,000. Following your suggestion, we have expanded our experiments to include more modalities and larger-scale datasets.\\nIn the revised manuscript, we validate our methods across various datasets and combinations of modalities: Audio + Text on the UR-FUNNY dataset for humor detection [3], RGB + Optical Flow on the UCF101 dataset for action recognition[4], mRNA + methylation data on the ROSMAP dataset for Alzheimer's Disease diagnosis [5], and Audio + Visual on the VGGsound dataset for audio recognition, which includes an extensive sample size of 200,000 [6]. Detailed results for these diverse modalities are presented in Appendix B.3. Experimental outcomes confirm that our method significantly outperforms the baseline across different scales and modalities.\\n\\n| Dataset | UR-FUNNY| | UCF101 | | ROSMAP | | VGGsound | |\\n|:--------:|:--------------:|:-----:|:------------:|:-----:|:------------------:|:-----:|:--------------:|:-----:|\\n| Method | ACC | F1 | ACC | F1 | ACC | F1 | ACC | F1 |\\n| Joint | 63.8 | 63.7 | 78.8 | 78.0 | 84.0 | 83.8 | 55.1 | 53.3 |\\n| Ensemble | 63.2 | 63.2 | 82.3 | 81.8 | 83.0 | 83.0 | 56.7 | 55.1 |\\n| DMI | **65.0** | **64.7** | **84.2** | **83.9** | **84.9** | **84.9** | **58.5** | **57.0** |\"}", "{\"summary\": \"This paper investigates multimodal interaction through the lens of decomposition. The authors argue that the existing learning paradigms, such as joint learning and modality ensemble, struggle to handle all types of interaction effectively, leading to generalization issues. To address this, the authors propose a new paradigm called decomposition-based multimodal interaction (DMI) learning. 
DMI decomposes multimodal interaction into separate interaction types and applies a new training strategy to enhance learning across these interactions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The authors tackle the important problem of multimodal learning through mutual information decomposition.\", \"Improvements over state-of-the-art.\", \"The design of the Interaction Decomposition Module is very creative and aims to completely break down mutual information through learning. Figure 3, which shows the Interaction Decomposition Module, is very informative.\"], \"weaknesses\": [\"I'm confused by the results in Table 1 and the associated section 3.3. Reading through the proof for Lemma 3.3 from Appendix A.3, it looks like the N = 2n relationship comes from how the authors constructed the new dataset by separating X(1) and X(2): S(1) = {X(1), \\u03a6(2), Y}, S(2) = {\\u03a6(1), X(2), Y}. However, these are the **same** data, just arranged differently to emulate the unimodal MI instead of multimodal MI to create the bounds in Eqs (25) and (26); as a side effect, there are now twice as many samples. But the underlying data stays exactly the same -- we don't need more data to train each model paradigm. What does this result mean in practice?\", \"The model architecture seems unclear. There isn't much information available beyond Figure 3, and it would be difficult to understand or reproduce this work with the information given.\"], \"questions\": [\"Table 1 uses this form of big-O notation $O(1/\\\\sqrt{N})|_{N=2n}$. Does $N=2n$ simply mean \\\"set N to 2n\\\" (I've not seen this condition before)? Since big-O notation is asymptotic, what difference does it make between $N=n$ vs. $N=2n$?\", \"My takeaway from this paper is \\\"interaction decomposition improves multimodal interaction learning\\\". Is this what the paper is trying to convey?\", \"Are joint learning and ensemble learning the only paradigms available? Are there other paradigms that address the proposed problem?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Feedback Before Rebuttal Deadline\", \"comment\": \"Dear Reviewer HrvF,\\n\\nWe would like to sincerely thank you for reviewing our paper and providing invaluable feedback. Your insights have greatly contributed to enhancing the quality of the manuscript.\\n\\nIf there are any further points that require clarification, we would be grateful if you could let us know before the end of the rebuttal period (**less than 1 day remaining**). \\n\\nThank you once again for your thoughtful and constructive comments.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Look forward to your feedback!\", \"comment\": \"Dear oH8U,\\n\\nThank you for your thoughtful review and constructive feedback. We have carefully reviewed your comments and made the necessary adjustments. We would appreciate it if you could confirm whether the revisions address your concerns. We would be happy to provide additional information if any further clarification is required.\"}", "{\"title\": \"Response by authors\", \"comment\": \"**Question 1. Question with the method section.**\\n\\nThank you for your valuable comment. Here, we provide a detailed explanation of questions and modify the paper to describe our method detailedly.\\n\\n> Question 1a. 
Why does synergy only contain task-irrelevant information?\\n\\nThank you for addressing this important aspect. Task-irrelevant information is defined as **information within unimodality** that is **not directly** related to the task at hand. For example, in XOR data with two independent binary variables, there is no correlation between the target and either modality. However, when two variables are not independent (e.g., $x^1 = x^2$), task-related information occurs, and meanwhile additional interactions beyond synergy are observed. \\nThus, the information derived from synergy interactions falls into the task-irrelevant category within each unimodality. We characterize the information that emerges from the combination of two task-irrelevant features as the learned synergy. This distinction is further clarified in Section 3.4.\\n\\n> Question 1b. How to control the learning to make one vector task-relevant and the other task-irrelevant?\\n\\nThank you for this inquiry. Our approach draws inspiration from the Variational Information Bottleneck (VIB) technique [2], where we utilize unimodal task-relevant features to perform specific tasks. To effectively separate task-relevant and task-irrelevant information, we incorporate an additional task-related loss function.\\nAs detailed in Equation 13, the VIB framework allows us to decompose the representation $Z^{(m)}$ into two decoupled components, a task-related vector $T^{(m)}$ and a task-irrelevant vector $V^{(m)}$. By minimizing the mutual information terms, $I(Z^{(m)}; T^{(m)})$ and $I(Z^{(m)}; V^{(m)})$, we can reduce the consistent information between $T^{(m)}$ and $V^{(m)}$. Hence, the two vectors can be decoupled and represent task-relevant and task-irrelevant information, respectively.\\n\\n> Question 1c. How to warm-up the encoder.\\n\\nThank you for your thoughtful comment. In our method, we specifically warm up **the unimodal encoder**, which is crucial for the process of transforming $X^{(m)}$ into $Z^{(m)}$. It is important to note that **the variational encoder does not require warming up**, a distinction we have clarified in the revised manuscript.\\nThe rationale behind warming up the unimodal encoder stems from the observation that, in the early stages of learning, these encoders are not yet capable of extracting specific information effectively. Initiating the learning process without a warm-up phase often results in suboptimal performance due to premature decomposition.\\nTo address this, we train each modality separately for a few epochs on the target, similar to the modality ensemble paradigm. This approach ensures that each modality\u2019s encoder is adequately prepared to extract and subsequently decompose the information effectively.\\n\\n> Question 1d. Explanation of stage 2.\\n\\nThank you for your question. In this stage, our objective is to refine the decomposition process to effectively distinguish between different types of interactions. To achieve this, we freeze the **unimodal encoder, $\\\\phi^{(m)}$**, which allows us to focus solely on training the decomposition network without the interference of evolving unimodal representations.\\nWe have updated and clarified this process in Section 3.4.\\n\\n> Question 1e. About reconstruction objective.\\n\\nThank you for your question. Indeed, we incorporate a reconstruction loss as part of our learning objective, in line with other VAE-based decomposition methods. 
This reconstruction loss is essential for minimizing the information loss that occurs after processing by the variational encoder. For a detailed explanation of our model, please refer to Appendix B.2 in the revised manuscript.\\n\\n> Question 1f. How to fine-tune.\\n\\nThis is an excellent point. In this paper, we employ a straightforward yet effective method for fine-tuning: we directly integrate the interaction variables and use them to complete the specified task. Specifically, we concatenate these variables and project them into a unified space, which is then used to complete the task. This approach allows every interaction in the data to be effectively represented and utilized.\"}", "{\"title\": \"Look forward to your feedback!\", \"comment\": \"Dear Reviewer 9UkZ,\\n\\nThank you for reviewing our work and for providing constructive suggestions. We have carefully considered your comments and made the necessary adjustments. We would be thankful if you could inform us whether the revisions adequately address your concerns. If further clarification is needed, we are happy to provide additional details.\"}" ] }
BZQmpsuW7D
SPARK: Physics-Guided Quantitative Augmentation for Dynamical System Modeling
[ "Fan Xu", "Penghao Zhao", "Zhipeng Xu", "XINLIANG ZHOU", "Xinping Yi", "Qingsong Wen", "Hao Wu", "Kun Wang" ]
In dynamical system modeling, traditional numerical methods have a solid theoretical foundation but are limited by high computational costs and sensitivity to initial conditions. Current data-driven approaches use deep learning models to capture complex spatiotemporal features, but they rely heavily on large amounts of data and assume a stable data distribution, making them ineffective against data scarcity and distribution shifts. To address these challenges, we propose SPARK, a physics-guided quantized augmentation plugin. SPARK integrates boundary information and physical parameters, using a reconstruction autoencoder to build a physics-rich discrete memory bank for data compression. It then enhances selected samples for downstream tasks with this pre-trained memory bank. SPARK further utilizes an attention mechanism to model historical observations and combines it with a Fourier-enhanced graph ODE to efficiently predict long-term dynamics, enhancing robustness and adaptability to complex physical environments. Extensive experiments on benchmark datasets show that our approach significantly outperforms various baseline methods in handling distribution shifts and data scarcity.
[ "dynamical system", "augmentation" ]
Reject
https://openreview.net/pdf?id=BZQmpsuW7D
https://openreview.net/forum?id=BZQmpsuW7D
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yM5wFl7qk8", "x7a65G43Ov", "vxa8MbSG53", "vd5kYkL3vQ", "vBUPWE8y7X", "tx2lU8B4Vn", "rOHRVSSpgl", "oyopEb8oE9", "ote0e6ipRQ", "oG5inyBc4V", "lOl9SBoHMh", "jx7xL8iHzh", "iy5iCqeUqE", "iAUjpjkFzv", "duclUCOIh7", "dE7NzeU0uF", "YLjd3wOgQd", "UIX1iH8bL4", "PEhuQwi9NM", "OQXeLzzfEP", "N9N5pxsyWU", "KCaT4etrlh", "HMsFoVPZAt", "H6WAhewe6W", "GX3LrFscjX", "EYd32KWsYI", "EQfCieBKlL", "DVh2bDqjh0", "8zAu7r5p2A", "8Omr2kYsu8", "6SIbqgqtjP", "5OdjnBgXA5", "4IX57ihzEW", "34zXIAuXtO", "2rgsOKX1hI", "0ujNBdBfgl" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732763245770, 1732184390994, 1730746676442, 1732193409011, 1733071589069, 1732192667076, 1732192166275, 1734577075032, 1732763107526, 1732459445930, 1732185857412, 1732185326185, 1732763289067, 1732460202646, 1730537189234, 1737523539022, 1732193226118, 1732191708205, 1732461277020, 1732366009730, 1733137903831, 1733199057135, 1732188331399, 1732528327462, 1733204455106, 1732191235805, 1733115934812, 1732770805333, 1732190660852, 1732528230353, 1732191929512, 1732192420778, 1733124658307, 1730439859769, 1733137958754, 1730597151923 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_Rucb" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Area_Chair_JiYn" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_Rucb" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_bjTD" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_bjTD" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_89BR" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_pH3K" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_pH3K" ], [ "ICLR.cc/2025/Conference/Submission2890/Authors" ], [ "ICLR.cc/2025/Conference/Submission2890/Reviewer_89BR" ] ], "structured_content_str": [ "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer bjTD,\\n\\nWe sincerely thank you for your valuable and constructive feedback! Since the Discussion Period Extension provides us with additional time, we are eager to address any further concerns you may have. If our current response satisfactorily resolves your main concerns, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper and/or our rebuttal, please let us know, and we will be more than happy to engage in further discussion and improve the paper.\\n\\nThank you so much for devoting time to improving our paper!\\n\\nBest regards,\\n\\nthe Authors\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Reviewers,\\n\\nThanks for your time and valuable feedbacks. We acknowledge three reviewers (Reviewer Rucb, Reviewer 89BR, and Reviewer pH3K) comment that the work is novel or nice. We acknowledge the positive comments such as \\\"a new approach\\\" (Reviewer Rucb), \\\"superior performance\\\" (Reviewer Rucb), \\\"an intelligent design choice\\\" (Reviewer Rucb), \\\"many useful tricks\\\" (Reviewer Rucb), \\\"benchmarks are extensive\\\" (Reviewer Rucb), \\\"the theoretical framework is helpful\\\" (Reviewer Rucb), \\\"well-written and well-presented\\\", \\\"the idea is nice\\\" (Reviewer 89BR), \\\"a good number of benchmarks\\\" (Reviewer 89BR), \\\"crucial for real-world applications\\\" (Reviewer bjTD), \\\"extensive experimental results\\\" (Reviewer bjTD), \\\"an interesting idea\\\" (Reviewer pH3K), \\\"important topics\\\" (Reviewer pH3K), \\\"well-written and has a detailed presentation\\\" (Reviewer pH3K). We have also responded to your concerns in the following.\\n\\nPlease let us know if you have any additional questions or concerns. We will try our best to address them.\\n\\nBest regards,\\n\\nthe Authors\"}", "{\"summary\": \"The paper proposes a new approach to spatiotemporal surrogate modeling. Their approach aims to target some of the limitations of data-driven models as they pertain to distribution shift. The framework, SPARK, combines physics-guided data augmentation and compression to enhance generalization. Key architectural innovations include a discrete memory bank for storing previous physical samples, physical prior and BC incorporation with graph neural nets, and a curriculum learning strategy to incorporate augmented data progressively. SPARK's superior performance is then evaluated on a relatively large suite of benchmark datasets.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The incorporation of a data bank to progressively generate augmented samples is an intelligent design choice, enabling the model to store physical information for use in OOD prediction. Combined with the work on storing physical parameters and boundary conditions, the authors have presented many useful tricks for physical surrogate modeling.\", \"Extensive tests across numerous datasets and benchmarks conclusively demonstrate the superior performance of this training strategy, particularly in OOD scenarios. 
The benchmarks are extensive as well, and helpful in framing the work.\", \"The theoretical framework is helpful, providing solid support for the model's architecture and approach.\", \"The paper is well-written and well-presented.\"], \"weaknesses\": [\"Many new architectural design choices are proposed (handling of physical parameters, boundary conditions, data banks, curriculum learning, etc.). However, it is unclear how much each strategy contributes to the success of the model, and some ablation studies would be useful.\"], \"questions\": [\"Can the data bank be used for direct retrieval-augmentation?\", \"How well does the model perform in the very low-data regime (just a few samples for transfer learning)?\", \"Have the authors explore generalization to 1-D or 3-D data at all?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pH3K (Part 3/3)\", \"comment\": \"> **Q5**. How do you compute PSNR and SSIM for scientific data? Image data has a fixed range of \\\\[0,255\\\\] but scientific data doesn\\u2019t.\\n\\n**A5**. Thank you for your comment. We adapt the calculation of PSNR and SSIM for scientific data, which does not have a fixed range like image data. By normalizing the data based on its dynamic range in each experiment, we ensure the calculations align with traditional definitions and remain comparable across different datasets. This method preserves the physical meaning of the values and provides accurate quantitative assessments of prediction quality.\\n\\n> **Q6**. Energy Spectrum is a common metric for fluid dynamics. Is it also commonly used for reaction-diffusion equations? How does this paper compute the energy spectrum?\\n\\n**A6**. Thank you for your insightful feedback. Energy spectrum analysis, which is widely used in fluid dynamics to characterize energy distribution across spatial scales, is equally applicable to 3D Reaction-Diffusion Equations. The 3D Reaction-Diffusion Equations model diffusion and reaction processes in space using partial differential equations. To compute the energy spectrum, we apply Fourier transforms to decompose spatial variables into wave number components in the frequency domain. Specifically, we calculate the energy spectrum using the formula:\\n\\n$E(k) = \\\\sum\\\\_{|\\\\mathbf{k}| = k} \\\\frac{1}{2} |\\\\hat{u}(\\\\mathbf{k})|^2, \\\\quad \\\\hat{u}(\\\\mathbf{k}) = \\\\int u(\\\\mathbf{x}) e^{-i \\\\mathbf{k} \\\\cdot \\\\mathbf{x}} \\\\, d\\\\mathbf{x},$\\n\\nwhere $\\\\mathbf{k}$ denotes the wave vector, and $|\\\\mathbf{k}|$ corresponds to the wave number. This approach ensures a robust quantitative analysis of spatial energy distributions.\\n\\n> **Q7**. Some minor typos: On Page 2, \\u201ceffectively long-term prediction\\u201d should be \\u201ceffective \\u2026\\u201d.\\n\\n**A7**. Thank you for your comment. We have thoroughly reviewed the entire manuscript, and have corrected the mentioned or other minor errors to enhance the paper's clarity and precision.\\n\\n---\\nThanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.\\n\\nBest,\\n\\nthe Authors\"}", "{\"title\": \"Respectful Inquiry Before Discussion Deadline\", \"comment\": \"Dear reviewer bjTD,\\n\\nThank you for taking the time and effort to provide a valuable review of our work. 
As we are approaching the end of the discussion, we hope that you have had the chance to review our previous response. If our response has addressed your concerns, we thank you for reconsidering the score, and we are more than willing to engage in further discussion if needed.\\n\\nYours sincerely, \\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer pH3K (Part 1/3)\", \"comment\": \"Dear Reviewer pH3K,\\n\\nWe sincerely appreciate the time you\u2019ve dedicated to reviewing our paper, as well as your valuable insights and support. Your positive feedback is highly motivating for us. Below, we address your primary concern and offer further clarification.\\n\\n---\\n\\n> **Q1**. The motivation for using each component in SPARK can be further clarified. The paper will benefit from discussing the interconnection between each network component.\\n\\n**A1**. Thank you for your comment. In this work, we design a unified plugin, SPARK, with three basic modules:\\n\\n- **Physics-incorporated data compression**. We integrate physical parameters, boundary information and input observations into the latent space through position encoding and channel attention. \\n\\n- **Memory bank construction**. We then pre-train a discrete memory bank through GNN-based reconstruction and a discrete quantization mechanism. \\n\\n- **Downstream augmentation and prediction**. We freeze the memory bank's weights and use it to realize augmentation, which introduces diversity into the training set. Further, we design a Fourier-enhanced graph ODE to accurately predict complex dynamics.\\n\\nThese three designs contribute jointly to the high accuracy and strong generalization ability in various environments. We will add discussions about the components' interconnection in the revised manuscript.\\n\\n> **Q2**. It would also be good to have ablation studies on incorporated physics. The authors may consider reducing physical information (i.e., boundary information and physical parameters) for pre-training. Then, we can see the contribution of each physical component.\\n\\n**A2**. Thank you for your valuable feedback. We add ablation experiments on incorporated physics by reducing different physical information. The experiments are conducted on OOD scenarios, and the results are shown below. The results demonstrate the effectiveness of each physical component. We will include it in our revised manuscript.\\n\\n| | Ours | w/o parameter | w/o boundary | w/o parameter&boundary |\\n| ------------- | ---------- | ------ | ------ | ------ |\\n| PROMETHEUS | **0.0301** | 0.0357 | 0.0324 | 0.0397 |\\n| NAVIER\u2013STOKES | **0.0725** | 0.0833 | 0.0764 | 0.0902 |\"}", "{\"title\": \"Response to Reviewer bjTD (Part 3/4)\", \"comment\": \"> **Q3**. The proposed methodology may be complex to implement in practice. The paper could provide more guidance or examples on how to effectively apply SPARK in different contexts.\\n\\n**A3**. Thank you for your comment. Our framework uses a two-stage design. We demonstrate its application with experiments on the Navier\u2013Stokes equations.\\n\\n- **In the pretraining stage**, the model takes physical parameters and boundary information as inputs. Once training converges, we save the memory bank for retrieval and augmentation in downstream tasks. \\n\\n- **In the downstream stage**, we use different backbones for spatiotemporal prediction. 
Our experiments include FNO, CNO, and SimVP as backbones, and the results are shown in the table.\\n\\n| Backbone | MSE | SSIM | \\n|---------------|----------|----------|\\n| FNO | 0.1556 | 0.923 | \\n| FNO + SPARK | 0.1257 | 0.936 | \\n| CNO | 0.1473 | 0.938 | \\n| CNO + SPARK | 0.1341 | 0.945 | \\n| SimVP | 0.1262 | 0.957 | \\n| SimVP + SPARK | 0.1105 | 0.962 |\\n\\n- **Flexible plugin.** The method supports transfer learning. For example, the memory bank pretrained on the Navier\\u2013Stokes equations transfers directly to the Spherical-SWE equations for retrieval and augmentation. The results are shown in the table below.\\n\\n\\n| Task | Memory Bank Source | MSE | SSIM |\\n| - | - | - | - |\\n| Spherical-SWE | Navier\\u2013Stokes Equations | 0.0027 | 0.948 |\\n\\n\\nIn summary, the method is lightweight and works as a plugin that integrates seamlessly with any baseline prediction model. To enhance your understanding, we summarize the detailed processing steps in the table below.\\n\\n| Stage | Description | Inputs | Target |\\n|-------------|-----------------------------------------------------|-------------------------------|-------------------|\\n| Pretraining | Train model with physical parameters and boundaries | Physical parameters, Boundary information | Memory Bank |\\n| Downstream | Spatiotemporal prediction backbone | Memory Bank, Input data | Prediction Results|\\n| Transfer Learning | Transfer Memory Bank to new task for augmentation and prediction | Pretrained Memory Bank, New task data | Enhanced Predictions|\\n\\n> **Q4**. In line 163, it needs references for those methods which simply concatenate boundary information with node features.\\n\\n**A4**. Thank you for your comment. We have carefully reviewed the relevant literature and have included appropriate references[1,2] in the revised manuscript to support this. \\n\\n---\\n[1] Wang H, et al. \\\"BENO: Boundary-embedded Neural Operators for Elliptic PDEs.\\\" ICLR2024.\\n\\n[2] L\\u00f6tzsch W, et al. \\\"Learning the solution operator of boundary value problems using graph neural networks.\\\" ICML2022.\"}", "{\"metareview\": \"This paper introduces SPARK, a physics-guided quantized augmentation plugin for dynamical system modeling. While traditional methods are computationally expensive and sensitive to initial conditions, and current deep learning models depend on large datasets and stable distributions, SPARK overcomes these issues. It uses a reconstruction autoencoder to build a physics-rich memory bank and applies attention mechanisms and Fourier-enhanced graph ODEs for efficient long-term predictions. Extensive experiments show that SPARK outperforms baseline methods in handling data scarcity and distribution shifts.\", \"strengths\": \"The incorporation of a data bank to progressively generate augmented samples is an insightful design choice.\\n\\nThe paper presents extensive experimental results across a variety of benchmark datasets.\", \"weaknesses\": \"The degree of originality and novelty is relatively low when compared to methods like DGODE and BeamVQ.\\n\\nThe motivation behind the approach and the algorithm itself are not clearly articulated.\\n\\nIt is unclear how much each individual strategy contributes to the overall success of the model. Some ablation studies would provide valuable insights here.\\n\\n\\nWhile the proposed approach is interesting and the experimental results cover a broad range of benchmarks, the novelty of the methodology is only marginally significant. 
Long-term prediction is a common metric in dynamical system modeling, yet the experiments in this paper are limited to 10-50 steps. For PDE modeling, it would be beneficial to evaluate rollout errors for more than 100 and even 1000 steps [1]. Given these concerns, I suggest a borderline reject but encourage the authors to address the reviewers' feedback and resubmit to a top conference.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors addressed the following points:\\n\\nAblations: Reviewers Rucb, bjTD, and pH3K requested additional ablation studies. The authors provided comprehensive experiments and effectively addressed these concerns.\\n\\nNovelty: Reviewers 89BR and bjTD questioned the novelty of the approach. The authors clarified their methodology with additional explanations and experimental results that differentiate their work from the methods cited by the reviewers. After reviewing both the authors' responses and the paper, I tend to agree that while the overall architecture is novel, the technical originality of individual components appears incremental.\\n\\nClarity of presentation: Reviewers 89BR, bjTD, and pH3K initially had reservations about the clarity of the presentation. The authors provided additional details and improved the overall clarity of the paper.\\n\\nPlagiarism concern: Reviewer bjTD raised a potential plagiarism issue, comparing the paper to another ICLR 2025 submission. However, after comparing the tasks, methodologies, and experiments, I believe these are distinct papers.\\n \\nIn summary, the authors have addressed most of the concerns raised by the reviewers. However, reviewer bjTD still holds reservations regarding the novelty. In my opinion, the novelty of the methodology is only marginally significant. More importantly, the significance of the work could be improved by extending the evaluation to long-term predictions, particularly beyond 1000 steps.\\n\\n[1] Encoding physics to learn reaction-diffusion processes. Nature Machine Intelligence, 5(7):765\u2013779, 2023.\"}", "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer 89BR,\\n\\nWe sincerely thank you for your valuable and constructive feedback! Since the Discussion Period Extension provides us with additional time, we are eager to address any further concerns you may have. If our current response satisfactorily resolves your main concerns, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper and/or our rebuttal, please let us know, and we will be more than happy to engage in further discussion and improve the paper.\\n\\nThank you so much for devoting time to improving our paper!\\n\\nBest regards,\\n\\nthe Authors\"}", "{\"title\": \"Kindly Request for Feedback of Reviewer\", \"comment\": \"Dear Reviewer pH3K,\\n\\nAs the rebuttal deadline is coming soon, please let us know if our responses have addressed your main concerns. If so, we kindly ask for your reconsideration of the score. If any aspects require additional elaboration or refinement, we will be more than happy to engage in further improvements and discussion.\\n\\nThanks again for your time.\"}", "{\"title\": \"Response to Reviewer Rucb (Part 2/2)\", \"comment\": \"> **Q3**. How well does the model perform in the very low-data regime (just a few samples for transfer learning)?\\n\\n**A3**. Thanks for your feedback. 
We conduct experiments on the performance of our model in very low-data regime. Specifically, after pre-training on the full ERA5 dataset, we finetune on subsets of the Sevir dataset with varying amounts of data (1\\\\%, 3\\\\%, 5\\\\%, and 10\\\\%). Below is a detailed comparison of baseline models (PredRNN and SimVP) with and without the SPARK plugin.\\n\\n| | 1\\\\% Sevir | 3\\\\% Sevir | 5\\\\% Sevir | 10\\\\% Sevir |\\n|---------------|------------|-----------|-----------|------------|\\n| PredRNN | 3.51\\u21923.38 | 2.57\\u21922.35 | 1.83\\u21921.68 | 1.22\\u21921.16 |\\n| PredRNN+SPARK | 3.37\\u21923.02 | 2.49\\u21922.14 | 1.72\\u21921.45 | 1.14\\u21920.97 |\\n| Simvp | 2.43\\u21922.20 | 1.86\\u21921.55 | 1.29\\u21921.11 | 0.75\\u21920.68 |\\n| Simvp+SPARK | 2.30\\u21921.98 | 1.75\\u21921.23 | 1.21\\u21920.98 | 0.71\\u21920.57 |\\n\\nThe results show that models with SPARK plugin consistently outperform their baseline models even in very low data regime.\\n\\nIn addition, we conduct **zero-shot** experiments on the Navier-Stokes (NS) Equations. Following the setup of Li et al.[1], we train on 64\\u00d764 NS Equations with Reynolds number of 1e-4 and directly tested on 128\\u00d7128 NS Equations. The results below demonstrate that our SPARK plugin possesses well transfer learning capability.\\n\\n| | Zero-shot |\\n| ---------- | ----------- |\\n| FNO | 0.274\\u21920.251 |\\n| FNO+SPARK | 0.256\\u21920.223 |\\n\\n---\\n> **Q4**. Have the authors explore generalization to 1-D or 3-D data at all?\\n\\n**A4**. Thank you for your comment. We have conducted experiments on 3D Reaction-Diffusion Equations in our paper. Here, we add experiments on 1-D data using the Burgers Equations[2]. The results are shown below, which indicate that our method is also applicable to 1-D data.\\n\\n| | U-Net | ResNet | FNO | CNO | NMO | Ours+SPARK |\\n| ---------- | ----- | ------ | ----- | ----- | ----- | ---------- |\\n| w/o OOD | 0.362 | 0.338 | 0.298 | 0.314 | 0.246 | **0.228** |\\n| w/ OOD | 0.397 | 0.351 | 0.325 | 0.338 | 0.273 | **0.243** |\\n\\nAdditionally, we select FNO, CNO, and NMO as baselines to evaluate SPARK's generalization capability across different dimensional data. Specifically, we pre-train on 2-D Navier-Stokes Equations and finetune on 1-D Burgers Equations. The results below validate that model variants with SPARK plugin have better generalization capability than their baseline models. We will include it in our revised version.\\n\\n| | FNO | FNO+SPARK | CNO | CNO+SPARK | NMO | NMO+SPARK | \\n| -------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |\\n| Burgers | 0.317\\u21920.294 | 0.308\\u21920.275 | 0.298\\u21920.275 | 0.280\\u21920.256 | 0.241\\u21920.223 | 0.228\\u21920.204 |\\n\\n---\\n[1] Li, Z, et al. \\\"Fourier neural operator for parametric partial differential equations.\\\" ICLR2021.\\n\\n[2] Takamoto M, et al. \\\"Pdebench: An extensive benchmark for scientific machine learning.\\\" NeurIPS2022.\\n\\n---\\nThanks again for appreciating our work and for your constructive suggestions! Please let us know if you have further questions.\\n\\nBest,\\n\\nthe Authors\"}", "{\"title\": \"Response to Reviewer Rucb (Part 1/2)\", \"comment\": \"Dear Reviewer Rucb,\\n\\nWe sincerely appreciate the time you\\u2019ve dedicated to reviewing our paper, as well as your valuable insights and support. Your positive feedback is highly motivating for us. Below, we address your primary concern and offer further clarification.\\n\\n---\\n> **Q1**. 
It is unclear how much each strategy contributes to the success of the model, and some ablation studies would be useful.\\n\\n**A1**. Thanks for your valuable feedback. To further demonstrate the contribution of each strategy, we conduct ablation experiments with five model variants. The experiments are conducted on Prometheus and Navier\\u2013Stokes datasets with OOD scenarios, and the results are shown below.\\n\\n| | Ours | w/o parameter | w/o boundary | w/o parameter\\\\&boundary | w/o memory bank | w/o curriculum learning |\\n| ------------- | ---------- | ------ | ------ | ------ | ------ | -- |\\n| Prometheus | **0.0301** | 0.0357 | 0.0324 | 0.0397 | 0.0416 | 0.0338 |\\n| Navier\\u2013Stokes | **0.0725** | 0.0833 | 0.0764 | 0.0902 | 0.1058 | 0.0792 |\\n\\nAs observed, removing physical parameters or boundary conditions during pretraining leads to a performance decline, with an even greater drop when the memory bank is not used. This validates the effectiveness of physical compression when addressing OOD problems. We will include these in our revised manuscript. \\n\\n---\\n> **Q2**. Can the data bank be used for direct retrieval-augmentation?\\n\\n**A2**. Thank you for your comment. We acknowledge that the memory bank in our SPARK framework can be used for direct retrieval augmentation. After pre-training, the memory bank\\u2019s parameters are frozen, allowing new samples to retrieve physics-rich embeddings in the bank for augmentation. For convenient understanding, we have illustrated this with diagram in Appendix H.9.\"}", "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer pH3K,\\n\\nWe sincerely thank you for your valuable and constructive feedback! Since the Discussion Period Extension provides us with additional time, we are eager to address any further concerns you may have. If our current response satisfactorily resolves your main concerns, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper and/or our rebuttal, please let us know, and we will be more than happy to engage in further discussion and improve the paper.\\n\\nThank you so much for devoting time to improving our paper!\\n\\nBest regards,\\n\\nthe Authors\"}", "{\"comment\": \"The authors have done a good job of addressing my concerns. I am particularly happy to see the improvement in performance in the low data and zero-shot regime. I will keep my score at an 8 and recommend this paper be accepted.\"}", "{\"summary\": \"The paper introduces SPARK, a physics-guided augmentation framework for modeling dynamical systems that overcomes the limitations of traditional numerical and data-driven methods. By incorporating a unique compression and augmentation plugin, along with an attention mechanism and Fourier-enhanced graph ODE, SPARK improves model generalization and robustness, especially in data-scarce situations and distribution shifts. Experimental results highlight SPARK's strong performance in accurately predicting complex spatiotemporal dynamics, particularly in challenging cases like sea ice evolution, effectively capturing intricate physical phenomena.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. By incorporating boundary information and physical parameters, SPARK enhances the model's ability to generalize across different physical scenarios, which is crucial for real-world applications.\\n2. 
The paper provides extensive experimental results across various benchmark datasets, demonstrating SPARK's superior performance compared to existing models, particularly in handling out-of-distribution scenarios.\", \"weaknesses\": \"1. The symbols and formulas appear to be somewhat disorganized, which makes it difficult for readers to understand the meaning. Clear definitions and a more structured presentation of the equations would greatly enhance the paper's accessibility and overall readability.\\n2. The lack of novelty. This paper claims to be the first to use physics-guided compression and augmentation. But there has already been a paper [1] that does this. The techniques of the two papers are very similar, including: (1) using VQ-VAE to compress information, and (2) augmenting the training set with the top-K discrete embeddings.\\n3. The proposed methodology may be complex to implement in practice. The paper could provide more guidance or examples on how to effectively apply SPARK in different contexts.\\n\\n[1] Wu, Hao, et al. \\\"BeamVQ: Aligning Space-Time Forecasting Model via Self-training on Physics-aware Metrics.\\\" arXiv preprint arXiv:2405.17051 (2024).\", \"questions\": \"1. In line 163, it needs references for those methods which simply concatenate boundary information with node features.\\n2. What does boundary information refer to? Please give some examples.\\n3. In the abstract, what's the meaning of \\\"stable data distribution\\\"? Please explain what it means and why it causes ineffectiveness against data scarcity and distribution shifts.\", \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer pH3K (Part 2/3)\", \"comment\": \"> **Q3**. On Page 6, for RQ2, could you be more specific on what challenging tasks?\\n\\n**A3**. Thank you for your valuable feedback. We first define challenging tasks in dynamical system modeling as problems that arise due to the inherent complexities of capturing high-dimensional, nonlinear, and chaotic systems [1,2]. These tasks often require models to adapt across real-world scenarios, like extreme events or long-term prediction.\\n\\nIn Section 4.3, we focus on the prediction of sea ice evolution. This is challenging due to the complex, nonlinear interactions governing its Lagrangian motion [3], coupled with the spatiotemporal variability of environmental forcing factors. \\n\\nTo dispel your concerns, referencing [4], we add two challenging experiments, namely long-term prediction and extreme event prediction. We choose the Prometheus and Sevir datasets to conduct the two experiments, respectively. \\n\\n- **For long-term prediction**, we use Prometheus with ten steps as input and supervise the prediction of the next ten steps during training. During inference, we predict the next 10, 30, and 50 steps in an autoregressive manner. The table below demonstrates that our model outperforms other baselines in long-term prediction performance.\\n\\n| Time step | U-Net | ViT | FNO | NMO | Ours |\\n| ----------- | ------ | ------ | ------ | ------ | ---------- |\\n| 10 | 0.0931 | 0.0674 | 0.0447 | 0.0397 | **0.0294** |\\n| 30 | 0.1374 | 0.1038 | 0.0815 | 0.0726 | **0.0537** |\\n| 50 | 0.2238 | 0.1842 | 0.1374 | 0.1154 | **0.0921** |\\n\\n- **For extreme event prediction**, we use the Sevir dataset, which contains data related to severe weather phenomena. 
To better evaluate the prediction performance of extreme events, we used the Critical Success Index (CSI) as in[2], in addition to MSE. For simplicity, we used only the thresholds {16, 133, 181, 219} and the mean CSI-M. The table below shows that our model consistently outperform these baselines in extreme event prediction. We will include these in our revised version.\\n\\n| Model | CSI-M $\\\\uparrow$ | CSI-219 $\\\\uparrow$ | CSI-181 $\\\\uparrow$ | CSI-133 $\\\\uparrow$ | CSI-16 $\\\\uparrow$ | MSE($10^{-3}$)$\\\\downarrow$ |\\n| ------- | ------ | ------ | ------ | ------ | ------ | ------ |\\n| U-Net | 0.3593 | 0.0577 | 0.1580 | 0.3274 | 0.7441 | 4.1119 |\\n| ViT | 0.3692 | 0.0965 | 0.1892 | 0.3465 | 0.7326 | 4.1661 |\\n| PredRNN | 0.4028 | 0.1274 | 0.2324 | 0.3858 | 0.7507 | 3.9014 |\\n| SimVP | 0.4275 | 0.1492 | 0.2538 | 0.4084 | 0.7566 | 3.8182 |\\n| Ours | **0.4683** | **0.1721** | **0.2734** | **0.4375** | **0.7792** | **3.6537** |\\n\\n\\n> **Q4**. What is the setup for OOD experiments?\\n\\n**A4**. Thank you for your comment. We propose that training and testing in the in-domain parameters is called w/o OOD experiments, while training in the in-domain parameters and testing in the out-domain parameters is called w/ OOD experiments. Here we present the in-domain and out-domain parameters for different benchmarks in the table below. We will include it in our revised manuscript.\\n\\n| Benchmarks | In-Domain Parameters | Out-Domain Parameters |\\n|------------|------------------------|-------------------------|\\n| PROMETHEUS | $(a_1, a_2, \\\\ldots, a_{25})$, $(b_1, b_2, \\\\ldots, b_{20})$ | $(a_{26}, a_{27}, \\\\ldots, a_{30})$, $(b_{21}, b_{22}, \\\\ldots, b_{25}\\\\)$ |\\n| 2D Navier-Stokes Equation | $\\u03bd = (1e^{-1}, 1e^{-2}, \\\\ldots, 1e^{-7}, 1e^{-8}) $ | $\\u03bd = (1e^{-9}, 1e^{-10})$ |\\n| Spherical Shallow Water Equation | $\\u03bd = (1e^{-1}, 1e^{-2}, \\\\ldots, 1e^{-7}, 1e^{-8}) $ | $\\u03bd = (1e^{-9}, 1e^{-10}) $ |\\n| 3D Reaction-Diffusion Equations | $D = (2.1 \\u00d7 10^{-5}, 1.6 \\u00d7 10^{-5}, 6.1 \\u00d7 10^{-5})$ | $D = (2.03 \\u00d7 10^{-9}, 1.96 \\u00d7 10^{-9}) $ |\\n| ERA5 | $V = ({Sp, SST, SSH, T2m})$ | $V = ({SSR, SSS})$ |\\n\\n\\n---\\n[1] Wu H, et al. \\\"Solving high-dimensional pdes with latent spectral models.\\\" ICML2023.\\n\\n[2] Gao Z, et al. \\\"Earthformer: Exploring space-time transformers for earth system forecasting.\\\" NeurIPS2022. \\n\\n[3] Notz D. \\\"Challenges in simulating sea ice in Earth System Models.\\\" Wiley Interdisciplinary Reviews: Climate Change, 2012.\\n\\n[4] Wang K, et al. \\\"NuwaDynamics: Discovering and Updating in Causal Spatio-Temporal Modeling.\\\" ICLR2024.\"}", "{\"title\": \"Response to Reviewer bjTD (Part 1/4)\", \"comment\": \"Dear Reviewer bjTD,\\n\\nWe sincerely appreciate the time you\\u2019ve dedicated to reviewing our paper, as well as your valuable insights and support. Below, we address your primary concern and offer further clarification.\\n\\n---\\n> **Q1**. The symbols and formulas appear to be somewhat disorganized, which makes it difficult for readers to understand the meaning. Clear definitions and a more structured presentation of the equations would greatly enhance the paper's accessibility and overall readability.\\n\\n**A1**. Thank you for your valuable feedback. To better facilitate the understanding of our paper, we have made the following modifications: \\n\\n- **Problem definition.** We have refined the problem definition and ensured consistency throughout the manuscript. 
As follows: \\n\\n\\\"Given a dynamical system governed by physical laws such as PDEs, we aim to enhance prediction using autoencoder reconstruction and discrete quantization. We have $N$ observation points in the domain $\\\\Omega$, located at $\\\\mathbf{s} = \\\\{\\\\mathbf{s}\\\\_1, \\\\cdots, \\\\mathbf{s}\\\\_N\\\\}$, where $\\\\mathbf{s}\\\\_i \\\\in \\\\mathbb{R}^{d\\\\_s}$. At time step $t$, the observations are $\\\\mathcal{X}^t = \\\\{\\\\mathcal{X}\\\\_1^t, \\\\cdots, \\\\mathcal{X}\\\\_N^t\\\\}$, where $\\\\mathcal{X}\\\\_i^t \\\\in \\\\mathbb{R}^{d}$ and $d$ represents the number of observation channels. Boundary information and physical parameters affect the dynamical system, leading to different conditions and distribution shifts. We first employ reconstruction model and construct a discrete memory bank to compress and store physical prior information. Then, given historical observation sequences {$\\\\{\\\\mathcal{X}\\\\_i^{-T\\\\_0+1:0}\\\\}$}$\\\\_{i=1}^N$, our goal is to use the pre-trained memory bank for data augmentation and predict future observations {$\\\\{\\\\mathcal{Y}\\\\_i^{1:T}\\\\}$}$\\\\_{i=1}^N$ at each observation point.\\\"\\n\\n- **Symbols and formulas.** We thoroughly review all symbols and formulas in the manuscript to ensure their meanings are precise and clear. For instance, we make the following modifications:\\n\\n$\\\\boldsymbol{u}_i = \\\\text{Proj} \\\\left( \\\\mathcal{X}_i , \\\\boldsymbol{p}^{rel}_i \\\\right) \\\\quad \\\\text{with} \\\\quad\\n \\\\boldsymbol{p}^{rel}_i = \\\\phi\\\\left( \\\\mathbf{s}_i, \\\\boldsymbol{p}^{boun}_i \\\\right), \\\\quad (1)$\\n\\n$\\\\mathcal{L}\\\\_{pre}=\\\\frac{1}{T N} \\\\sum\\\\_{t=1}^T \\\\sum\\\\_{i=1}^N\\\\left(\\\\hat{\\\\mathcal{X}}\\\\_{i}^{t}-\\\\mathcal{X}\\\\_{i}^{t}\\\\right)^2+ \\\\frac{1}{T N} \\\\sum\\\\_{t=1}^T \\\\sum\\\\_{i=1}^N\\\\left (\\\\mu \\\\left\\\\|\\\\boldsymbol{h}\\\\_{i}^{t}-\\\\mathbf{s g}[\\\\boldsymbol{e}]\\\\right\\\\|\\\\_2^2+\\\\gamma\\\\left\\\\|\\\\mathbf{s g}\\\\left[\\\\boldsymbol{h}\\\\_{i}^{t}\\\\right]-\\\\boldsymbol{e}\\\\right\\\\|\\\\_2^2\\\\right ), \\\\quad (6)$\\n\\n$\\\\boldsymbol{q}\\\\_{i}=\\\\frac1{T\\\\_0}\\\\sum\\\\_{t=1}^{T\\\\_0}\\\\delta(\\\\alpha\\\\_{i}^{t} \\n\\\\cdot \\\\boldsymbol{v}\\\\_{i}^{t}),\\\\quad \\\\alpha\\\\_{i}^{t}= \\\\left(\\\\boldsymbol{v}\\\\_{i}^{t} \\\\right )^{T} \\\\cdot \\\\mathrm{tanh}\\\\left(\\\\left(\\\\frac{1}{T\\\\_0} \\\\sum\\\\_{t=1}^{T\\\\_0} \\\\boldsymbol{v}\\\\_{i}^{t}\\\\right)W\\\\_{\\\\alpha}\\\\right), \\\\quad (8)$\\n\\n$\\\\mathcal{L}\\\\_{\\\\text{dyn}} = \\\\frac{1}{TN} \\\\sum\\\\_{i=1}^T \\\\sum\\\\_{i=1}^N \\\\|\\\\hat{\\\\mathcal{Y}}\\\\_{i}^{t} - \\\\mathcal{Y}\\\\_{i}^{t}\\\\|\\\\_2^2 + \\\\lambda\\\\_{\\\\text{reg}} \\\\mathcal{R}(\\\\theta). \\\\quad (11)$\"}", "{\"title\": \"Thanks for your recognition!\", \"comment\": \"Dear Reviewer Rucb,\\n\\nWe sincerely appreciate your valuable feedback and recognition! We are pleased to know that your concerns have been addressed! We will definitely incorporate your suggestions into our revised version. Please kindly let us know if you have any questions further!\\n\\nBest regards,\\n\\nthe Authors\"}", "{\"title\": \"Summary of Response\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your thorough and insightful reviews. We sincerely appreciate your feedback, which has significantly enhanced our paper! 
Below, we summarize the key concerns raised and our corresponding responses:\\n\\n- **Implementation details of SPARK** (Reviewer 89BR, bjTD, pH3K)\\n\\n We have presented our model details in tabular form and used the Navier-Stokes equation as an example to show dimensional changes and related experiments.\\n\\n- **Problem about novelty** (Reviewer 89BR, bjTD)\\n\\n We have clarified the differences between our SPARK and the mentioned models. Furthermore, we have conducted comprehensive experiments to verify this.\\n\\n- **Contribution of each component or strategy** (Reviewer Rucb, pH3K)\\n\\n We have added ablation experiments to confirm the effectiveness of each component or strategy. \\n\\n- **Details of boundary information and OOD experimental setup** (Reviewer 89BR, bjTD, pH3K)\\n\\n We have included a more detailed description of the boundary information, and an intuitive figure is shown in Appendix I. For OOD experimental setup, we have provided a clear table to present.\\n\\nOnce again, we are truly grateful for your valuable feedback and are happy to address any further concerns or questions!\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"[ Only 1 Day Remaining ] A Gentle Reminder of Feedbacks\", \"comment\": \"Dear Reviewer bjTD,\\n\\nWe sincerely apologize for reaching out again and fully understand that your time is extremely valuable. With the discussion deadline so close, we are eager to know if our responses have alleviated your concerns. \\n\\nWe are pleased to see that Reviewer 89BR has increased the score, and we are glad to have addressed all his concerns. We appreciate the recognition of our work by other three reviewers, and their positive feedbacks like \\\"an intelligent design choice\\\" (Reviewer Rucb), \\\"well-written and well-presented\\\" (Reviewer 89BR), \\\"an interesting idea\\\" (Reviewer pH3K).\\n\\nIn our previous response, we have provided detailed answers to your concerns, including: (1) an explanation of the differences between our paper and the one you mentioned, along with corresponding comparative experiments; (2) a clear definition of boundary information; and (3) an explanation of what constitutes a stable data distribution.\\n\\nTo facilitate understanding, we would like to clarify the contributions of our paper once again. We propose a reconstruction-based vector quantization technique to compress rich boundary information and physical parameters, which we then leverage for physics-guided data augmentation. In downstream task, we incorporate an attention mechanism to model historical observations and design a Fourier-enhanced graph ODE for precise and efficient forecasting. Our work aims to contribute to addressing the challenges of data scarcity and out-of-distribution generalization in the modeling of dynamical systems.\\n\\nWe have carefully refined the manuscript following your insightful feedbacks. Lastly, we would be most grateful if you could kindly reconsider your rating!\\n\\nThank you again for your invaluable guidance and thoughtful review.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for the rebuttal. I will take these into consideration.\"}", "{\"title\": \"Response to Reviewer 89BR (Part 1/3)\", \"comment\": \"Dear Reviewer 89BR,\\n\\nThank you for your valuable feedback on our manuscript! We have taken your comments seriously and have made the necessary revisions and additions to address the concerns raised.\\n\\n---\\n> **Q1**. Problem about originality.\\n\\n**A1**. Thank you for your comment. 
We think our SPARK differs significantly from DGODE[1]. \\n\\n- **Difference with discrete coding.** DGODE is an end-to-end framework. It utilizes discrete \\\"codebank\\\" for **disentangling** environment features and minimizing their impact on node representations. While, SPARK is a upstream-downstream paradigm. In upstream phase, except observations, we incorporate physical parameters and boundary information to train a discrete, physics-rich memory bank. In downstream phase, we forze memory bank's weights and use it to enable targeted augmentation.\\n\\n- **Difference with graph ODE.** The use of graph ODEs is motivated by their ability to handle irregular data and efficiently perform multi-step temporal extrapolation. However, SPARK and DGODE differ significantly in their details: (1) DGODE uses an RNN architecture to compress historical states, while we use an attention mechanism to compress them into an initial state; (2) we incorporate Fourier blocks into the Graph ODE to enhance the capture of global spectral features, improving spatiotemporal generalization.\\n\\nTo further address your concerns, we run DGODE's open-source code and conduct comparative experiments in both non-OOD (ID) and OOD scenarios. The results shown below indicate that SPARK performs better. We will include these in our revised version.\\n\\n| Dataset | Prometheus (ID) | Prometheus (OOD) | ERA5 (ID) | ERA5 (OOD) | Spherical-SWE (ID) | Spherical-SWE (OOD) |\\n|-------------|---------|--------|---------|---------|---------|---------|\\n| DGODE | 0.0344 | 0.0359 | **0.0387** | 0.0435 | 0.0024 | 0.0029 |\\n| Ours | **0.0323** | **0.0328** | 0.0398 | **0.0401** | **0.0022** | **0.0024** |\\n\\n---\\n> **Q2**. The algorithm is not clearly presented.\\n\\n**A2**. Thank you for your detailed comments. We have taken steps to address these issues in the revised manuscript. \\n\\n- **About boundary representation.** In our paper, the \\\"real boundary\\\" ($\\\\boldsymbol{p}^{boun}_i$) refers to the relative positional relationship between node $i$ and the nearest boundary point. We combine this with the node $i$'s coordinate information $\\\\mathbf{s}_i$ to obtain the position encoding $\\\\boldsymbol{p}^{rel}_i$. We have made detailed corrections to Equation (1), as shown below. For a more intuitive understanding, we use the ERA5 dataset as an example and visualize the boundary information in Appendix I.\\n\\n$\\\\boldsymbol{u}_i = \\\\text{Proj} \\\\left( \\\\mathcal{X}_i , \\\\boldsymbol{p}^{rel}_i \\\\right) \\\\quad \\\\text{with} \\\\quad\\n \\\\boldsymbol{p}^{rel}_i = \\\\phi\\\\left( \\\\mathbf{s}_i, \\\\boldsymbol{p}^{boun}_i \\\\right).$\\n\\n- **Time index.** We consistently follow the notation in the Problem Definition. The length of history observations is $T_0$, and the length of future predictions is $T$. We have revised Equations (8) and (11) accordingly, as shown below.\\n\\n$\\\\alpha_{i}^{t}= \\\\left(\\\\mathcal{v}\\\\_{i}^{t} \\\\right )^{T} \\\\cdot \\\\mathrm{tanh}\\\\left(\\\\left(\\\\frac{1}{T\\\\_0} \\\\sum\\\\{t=1}^{T\\\\_0} \\\\mathcal{v}\\\\_{i}^{t}\\\\right)W\\\\_{\\\\alpha}\\\\right), \\\\quad (8)$\\n\\n$\\\\mathcal{L}\\\\_{\\\\text{dyn}} = \\\\frac{1}{TN} \\\\sum\\\\_{i=1}^T \\\\sum\\\\_{i=1}^N \\\\|\\\\hat{\\\\mathcal{Y}}\\\\_{i}^{t} - \\\\mathcal{Y}\\\\_{i}^{t}\\\\|\\\\_2^2 + \\\\lambda\\\\_{\\\\text{reg}} \\\\mathcal{R}(\\\\theta). 
\\\\quad (11)$\\n\\n- **Index notation.** We have revised the index notation in the pretraining loss equation (6) to ensure it is accurate and meaningful, which is shown below.\\n\\n$\\\\mathcal{L}\\\\_{pre}=\\\\frac{1}{T N} \\\\sum\\\\_{t=1}^T \\\\sum\\\\_{i=1}^N\\\\left(\\\\hat{\\\\mathcal{X}}\\\\_{i}^{t}-\\\\mathcal{X}\\\\_{i}^{t}\\\\right)^2+ \\\\frac{1}{T N} \\\\sum\\\\_{t=1}^T \\\\sum\\\\_{i=1}^N\\\\left (\\\\mu \\\\left\\\\|\\\\mathcal{h}\\\\_{i}^{t}-\\\\mathbf{s g}[\\\\mathcal{e}]\\\\right\\\\|\\\\_2^2+\\\\gamma\\\\left\\\\|\\\\mathbf{s g}\\\\left[\\\\mathcal{h}\\\\_{i}^{t}\\\\right]-\\\\mathcal{e}\\\\right\\\\|\\\\_2^2\\\\right ). \\\\quad (6)$\\n\\nFurther, we have thoroughly reviewed the entire manuscript and standardized the writing to ensure consistency in symbols and formulas.\\n\\n---\\n[1] Wu H, et al. \\\"Prometheus: Out-of-distribution Fluid Dynamics Modeling with Disentangled Graph ODE.\\\" ICML2024.\"}", "{\"title\": \"Thank you & Looking forward to further discussion\", \"comment\": \"Dear Reviewer 89BR,\\n\\nWe would like to extend heartfelt thanks to you for your time and efforts in the engagement of author-reviewer discussion. To facilitate better understanding of our rebuttal and revision, we hereby summarize your key concerns and our responses as follows:\\n\\n- **About the novelty of SPARK.**\\nWe have clearly explained the differences from the paper you mentioned and provided corresponding comparison experiments.\\n\\n- **About the presentation of algorithm and symbols.**\\nWe have thoroughly reviewed the entire manuscript and standardized the writing to ensure consistency in symbols and formulas.\\n\\n- **About the construction of the discrete memory bank.**\\nWe have provided details about construction of the discrete memory bank and the embeddings within it.\\n\\n- **About the details of OOD experimental setup, training cost, and training details.**\\nWe have included details and additional experiments about what you mentioned. Also, we have added relevant contents in the revised appendix.\\n\\nFor other issues not mentioned here, please refer to our detalied rebuttal response. We sincerely hope this addresses your concerns! We respectfully look forward to further discussion with you.\\n\\nThank you again for your valuable guidance and thoughtful review.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer bjTD,\\n\\nThank you once again for your response. Your feedback is incredibly valuable to us. We sincerely request you to reconsider your evaluation and extend our heartfelt gratitude for your time and effort.\\n\\nThank you again for your invaluable guidance and thoughtful review.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer 89BR (Part 3/3)\", \"comment\": \"> **Q5**. Given the big accuracy difference, what is the training cost comparison?\\n\\n**A5**. Thank you for your comment. We add experiments of computational costs below. To be fair, we conduct the experiments on a single NVIDIA 40GB A100 GPU. From the results, we can observe that our method has a competitive computation cost. We will include it in our revised version. \\n| Method | UNet | ResNet | VIT | SwinT | FNO | UNO | CNO | NMO | Ours |\\n|--------------|------|--------|-------|-------|------|------|------|------|------|\\n| Training time (h) | 11.2 | 9.76 | 14.5 | 12.3 | 6.9 | 7.8 | 13.4 | 6.3 | 6.7 |\\n| Inference time (s) | 1.34 | 0.93 | 1.32 | 1.13 | 0.54 | 0.67 | 0.12 | 0.52 | 0.55 | \\n\\n> **Q6**. 
What are the training details, e.g., model architecture, optimizer, training devices? Is boundary information injected at two places, i.e., through node features and directly through the boundary latent vector B?\\n\\n**Q6**. Thank you for your comment. Training details are as follows:\\n- **Model architecture.** To help you better understand our model architecture, we have provided detailed structural information using the Navier-Stokes equations as an example, as shown in the table below.\\n\\n| Upstream | | | Downstream | | |\\n| - | - | - | - | - | - |\\n| Procedure | Layer | Dimention | Procedure | Layer | Dimention |\\n|Boundary information injection| Boundary Fusion (Concat + Linear) | (4096, 128) | Augmentation | GNN Encoder | ($T_0$, 4096, 128) |\\n| |Boundary Encoding (Linear + LayerNorm) | (4096, 128) | | Memory bank retrival | ($T_0$, 4096, 128) |\\n| Physical parameters injection|Channel attention | (2, 128) | Historical observations encoding | Attention score of time steps | (, $T_0$) |\\n| |Aggregation | (4096, 128) | | Initial state encoding | (1, 4096, 128)\\n|GNN reconstruction |Graph Encoder (GNN Layer \\u00d7 L)| (4096, 128) | Fourier-enhanced graph ODE | Fourier transform | (1, 4096, 128) |\\n| | BatchNorm + ReLU | (4096, 128) | | Linear transform | (1, 4096, 128) |\\n| Memroy bank | Construction | (M, 128) | | Inverse Fourier transform | (1, 4096, 128) |\\n| | Linear + LayerNorm | (4096, 128) | | ODE solver | (T, 4096, 128) |\\n\\n- **Optimizer and training devices.** We use the **Adam** optimizer for training. Training is conducted on **8 NVIDIA 40GB A100 GPUs**, and inference is performed on **a single NVIDIA 40GB A100 GPU**.\\n\\n- **About boundary information.** We acknowledge that boundary information is injected at two places. On one hand, we integrate boundary location information $\\\\boldsymbol{p}^{boun}_i$ with node features. On the other hand, we inject the latent vector $\\\\mathcal{B}$ encoded from boundary information into the message-passing layers of the GNN. Details are in Equantion (1) and (4). To facilitate understanding, we provide a schematic of the boundary information in Appendix I. \\n\\n---\\nThanks again for your valuable feedback! Please let us know if you have further questions. \\n\\nBest,\\n\\nthe Authors\"}", "{\"comment\": \"Thank you for addressing my comments. While I still have doubts about the method's novelty and suggest polishing the paper with clear variable definitions and method descriptions, I appreciate the extensive experiments conducted by the authors and will raise my score to 6.\"}", "{\"comment\": \"Thanks for your rebuttal. My concerns have been addressed. I will maintain my score of 6.\"}", "{\"title\": \"Response to Reviewer 89BR (Part 2/3)\", \"comment\": \"> **Q3**. It's unclear how the discrete memory bank is built. What are the e_i in E and how are they constructed?\\n\\n**A3**. Thank you for your feedback. SPARK builds its discrete memory bank using a training strategy inspired by VQ-VAE[1]. Each $e_i$ in the memory bank $E =${$ \\\\{e_1, e_2, \\\\dots, e_M\\\\} $}$ \\\\in \\\\mathbb{R}^{M \\\\times D}$ represents an embedding vector, where $M$ is the fixed size of the memory bank, manually set as a hyperparameter. These embeddings are initialized randomly at the start of training.\\n\\n> **Q4**. How are the with and without OOD datasets constructed in the experiments? Is there an explanation for why SPARK achieved better performance on OOD cases even than other models did on non-OOD cases?\\n\\n**A4**. 
Thank you for your comment. \\n- **Experimental settings.** For dataset setting, we propose that training and testing in the in-domain parameters is called w/o OOD experiments, while training in the in-domain parameters and testing in the out-domain parameters is called w/ OOD experiments. Here we present the in-domain and out-domain parameters for different benchmarks in the table below. \\n\\n| Benchmarks | In-Domain Parameters | Out-Domain Parameters |\\n|------------|------------------------|-------------------------|\\n| PROMETHEUS | $(a_1, a_2, \\\\ldots, a_{25})$, $(b_1, b_2, \\\\ldots, b_{20})$ | $(a_{26}, a_{27}, \\\\ldots, a_{30})$, $(b_{21}, b_{22}, \\\\ldots, b_{25}\\\\)$ |\\n| 2D Navier-Stokes Equation | $\\u03bd = (1e^{-1}, 1e^{-2}, \\\\ldots, 1e^{-7}, 1e^{-8}) $ | $\\u03bd = (1e^{-9}, 1e^{-10})$ |\\n| Spherical Shallow Water Equation | $\\u03bd = (1e^{-1}, 1e^{-2}, \\\\ldots, 1e^{-7}, 1e^{-8}) $ | $\\u03bd = (1e^{-9}, 1e^{-10}) $ |\\n| 3D Reaction-Diffusion Equations | $D = (2.1 \\u00d7 10^{-5}, 1.6 \\u00d7 10^{-5}, 6.1 \\u00d7 10^{-5})$ | $D = (2.03 \\u00d7 10^{-9}, 1.96 \\u00d7 10^{-9}) $ |\\n| ERA5 | $V = ({Sp, SST, SSH, T2m})$ | $V = ({SSR, SSS})$ |\\n\\n\\n- **Explanation for SPARK's better performance.** Our SPARK is specifically designed for OOD problem. SPARK's physics-guided enhancement improves generalization by leveraging physical priors. This reduces sensitivity to distribution shifts. Additionally, the fourier-enhanced graph ODE module provides a robust mechanism for prediction, outperforming other baselines. To address your concern, we use three models specialized in OOD dynamical system modeling (LEADS[2], CODA[3], NUWA[4]), along with FNO, for comparison. The results shown below indicate that OOD-specific models outperform FNO in both OOD and non-OOD scenarios, with SPARK achieving the best performance. We will include these in our revised version.\\n\\n| Dataset | Prometheus (ID) | Prometheus (OOD) | ERA5 (ID) | ERA5 (OOD) | Spherical-SWE (ID) | Spherical-SWE (OOD) |\\n|-------------|----------------|----------------|---------|----------|---------|----------|\\n| FNO | 0.0547 | 0.0606 | 0.7233 | 0.9821 | 0.0061 | 0.0084 |\\n| LEADS | 0.0374 | 0.0403 | 0.2367 | 0.4233 | 0.0038 | 0.0047 |\\n| CODA | 0.0353 | 0.0372 | 0.1233 | 0.2367 | 0.0034 | 0.0043 |\\n| NUWA | 0.0359 | 0.0398 | 0.0645 | 0.0987 | 0.0032 | 0.0039 |\\n| Ours | **0.0323** | **0.0328** | **0.0398** | **0.0401** | **0.0022** | **0.0024** |\\n\\n---\\n[1] Van Den Oord A, et al. \\\"Neural discrete representation learning.\\\" NeurIPS2017.\\n\\n[2] Kirchmeyer M, et al. \\\"Generalizing to new physical systems via context-informed dynamics model.\\\" ICML 2022. \\n\\n[3] Yin Y, et al. \\\"LEADS: Learning dynamical systems that generalize across environments.\\\" NeurIPS2021.\\n\\n[4] Wang K, et al. \\\"NuwaDynamics: Discovering and Updating in Causal Spatio-Temporal Modeling.\\\" ICLR2024.\"}", "{\"title\": \"Thank you & Looking forward to further discussion\", \"comment\": \"Dear Reviewer pH3K,\\n\\nWe deeply appreciate your dedication to engaging in author-reviewer discussions. 
To facilitate better understanding of our rebuttal and revision, we have outlined your key concerns and our responses to enhance communication:\\n\\n- **About the motivation for using each component in SPARK.**\\nWe have included discussion about the interconnection between each network component and conducted ablation experiments to demonstrate the contribution of each physical component.\\n\\n- **About the challenging tasks.**\\nWe have redefined challenging tasks in dynamical system modeling and explained why the prediction of sea ice is challenging. Further, we choose two specific challenging tasks (long-term prediction and extreme event prediction) and conduct experiments to demonstrate the effectiveness of SPARK.\\n\\n- **About OOD experimental setup.**\\nWe have provided more detailed OOD experimental setup descriptions and the corresponding table.\\n\\nFor other issues not mentioned here, please refer to our detailed rebuttal response. We sincerely hope this addresses your concerns! We humbly look forward to further discussion with you.\\n\\nThank you again for your valuable guidance and thoughtful review.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer bjTD (Part 2/4)\", \"comment\": \"> **Q2**. Problem about novelty.\\n\\n**A2**. Thank you for your valuable comments. To the best of our knowledge, SPARK should be the first physics-guided compression and augmentation framework. There are fundamental differences between SPARK and BeamVQ[1]:\\n\\n**(1) Difference in compressing data with VQ-VAE.** \\n\\n- > **Different input.** SPARK's input includes physical prior information (boundary information and physical parameters) along with the input data, while BeamVQ's input consists only of the observational data.\\n\\n- > **Different workflows and SPARK is more lightweight.** While both SPARK and BeamVQ are plugin-like frameworks, upon carefully reviewing the original paper, we find that BeamVQ actually functions as an end-to-end framework. In BeamVQ, the encoder acts as the backbone model. This design introduces a large number of parameters during training, which aligns with the original authors' claim of requiring 16 NVIDIA A100-PCIE-40GB GPUs. Essentially, BeamVQ can be considered a combination of a backbone and an improved VQVAE, making it not strictly a plugin. This likely affects the original backbone. In contrast, SPARK is a two-stage framework where the memory bank, once trained, is frozen and directly used for downstream tasks. This is quite lightweight. We have included a schematic comparison of SPARK and BeamVQ in the appendix to clearly illustrate their differences.\\n\\n- > **Experiment comparison.** To address your concerns, we contact the authors of the BeamVQ paper and obtain partial access to their codes. The core codes of BeamVQ is provided in Appendix H.9. We then conduct experiments on the Navier\\u2013Stokes, Spherical-SWE, Prometheus, and 3D Reaction\\u2013Diff dataset. Here, we use FNO and SimVP as backbones. Further, we select parameter count, training time, and inference time to compare the effiency of the two models on Navier\\u2013Stokes. The table below shows that SPARK is much more lightweight and performs better, supporting our claims. Notably, the SimVP+BeamVQ model variant crashes on 3D Reaction-Diff due to memory overflow, as its parameter complexity is unsuitable for 3D scenarios. 
We will include these in our revised version.\\n\\n| |Navier\\u2013Stokes|Spherical-SWE|Prometheus| 3D Reaction\\u2013Diff |\\n| ------------ | ------ | ------ | ------ | ------ |\\n| FNO | 0.1556 | 0.0038 | 0.0447 | 0.0132 |\\n| FNO+BeamVQ | 0.1342 | 0.0032 | 0.0356 | 0.0104 |\\n| FNO+SPARK | 0.1257 | 0.0029 | 0.0338 | 0.0095 |\\n| SimVP | 0.1262 | 0.0031 | 0.0394 | 0.0108 |\\n| SimVP+BeamVQ | 0.1173 | 0.0027 | 0.0375 | - |\\n| SimVP+SPARK | 0.1105 | 0.0024 | 0.0360 | 0.0087 |\\n\\n| |MSE|Param| Training time | Inference time |\\n| ------------ | ------ | ------ | ------ | ------ |\\n| FNO+BeamVQ | 0.1342 | 214.25 MB | 26.11 h | 3.25 s |\\n| FNO+SPARK | 0.1257 | 35.67 MB | 4.2 h | 0.58 s |\\n\\n**(2) Difference in augmenting data by the top-K discrete embeddings.**\\n\\n- > **Different usage of top-K.** BeamVQ relies on high-quality, non-differentiable physical metrics for filtering, which are not available in all scenarios and require domain-specific expertise. In contrast, SPARK's top-k approach aims to expand the search space, and we use the fusion of input with the top-k embeddings. \\n\\n- > **Experiments on hyperparameter $k$.** To address your concerns, we add experiments on the value of $k$ on the Navier-Stokes\\uff0cPrometheus, 3D Reaction\\u2013Diff, and ERA5 datasets. The candidate values are \\\\{1,3,5,7,9,11\\\\}, and the results are shown in the table below.\\n\\n| |Navier\\u2013Stokes|Spherical-SWE|Prometheus| 3D Reaction\\u2013Diff |\\n| ------------ | ------ | ------ | ------ | ------ |\\n| k=1 | 0.0752 | 0.0022 | 0.0315 | 0.0116 |\\n| k=3 | 0.0726 | **0.0018** | **0.0296** | 0.0108 |\\n| k=5 | **0.0715** | 0.0021 | 0.0303 | **0.0104** |\\n| k=7 | 0.0731 | 0.0024 | 0.0311 | 0.0110 |\\n| k=9 | 0.0764 | 0.0025 | 0.0320 | 0.0121 |\\n| k=11 | 0.0780 | 0.0029 | 0.0327 | 0.0128 |\\n\\nAs $k$ increases, the model's performance first improves and then declines, with optimal performance generally achieved when $k$ is between 3 and 5. \\n\\nIn summary, our method is a genuine two-stage framework. Compared to BeamVQ, SPARK is more lightweight and achieves better performance. The complete results will be included in the revised version.\\n\\n---\\n[1] Wu H, et al. \\\"BeamVQ: Aligning Space-Time Forecasting Model via Self-training on Physics-aware Metrics.\\\" arXiv.\"}", "{\"title\": \"Response to Reviewer bjTD (Part 4/4)\", \"comment\": \"> **Q5**. What does boundary information refer to? Give some examples please.\\n\\n**A5**. Thank you for your comment. In dynamical systems and natural sciences, boundary information is mathematical term used to describe the behavior of physical systems at their boundaries. \\n\\nIn our paper, boundary information is divided into geometric boundary position information and intrinsic features at the corresponding boundary. The geometric boundary position refers to the **relative distance** between the **current node** and **the nearest boundary point**. And the intrinsic features at the boundary vary with the system. For example, in the ERA5 dataset, these features include velocity, pressure, temperature, and humidity. To facilitate understanding, we use ERA5 dataset as an example and visualize the boundary information in Appendix I.\\n\\n\\n> **Q6**. In abstract, what's the meaning of \\\"stable data distribution\\\"? Provide explanations about it and why does it can cause ineffectiveness of data scarcity and distribution shifts.\\n\\n**A6**. Thank you for your insightful comment. 
By \\\"stable data distribution\\\", we refer to the assumption that the training and testing samples are drawn from the same probability distribution, i.e., $P\\\\_{\\\\text{train}}(X, Y) = P\\\\_{\\\\text{test}}(X, Y).$ \\n\\nUnder this assumption, minimizing the empirical risk on the training set leads to good generalization on the test set:\\n\\n$R_{\\\\text{emp}}(f) = \\\\frac{1}{n} \\\\sum\\\\_{i=1}^{n} L(f(X\\\\_i), Y\\\\_i),$\\n$R(f) = \\\\mathbb{E}\\\\_{(X, Y) \\\\sim P\\\\_{\\\\text{test}}(X, Y)} [L(f(X), Y)],$\\n\\nwhere $L(\\\\cdot, \\\\cdot)$ is the loss function and $f$ is the learned model. However, **data scarcity** makes it difficult to accurately estimate $P\\\\_{\\\\text{train}}(X, Y)$ because the sample size $n$ is too small. This increases the discrepancy between the empirical distribution $\\\\hat{P}\\\\_{\\\\text{train}}(X, Y)$ and the true distribution $P\\\\_{\\\\text{train}}(X, Y)$, leading to higher generalization error.\\n\\nFurthermore, **distribution shift** directly cause, i.e., $P\\\\_{\\\\text{train}}(X, Y) \\\\neq P\\\\_{\\\\text{test}}(X, Y).$ As a result, the model optimized on the training data performs poorly on the testing data.\\n\\n\\n---\\nThanks again for your constructive suggestions! Please let us know if you have further questions.\\n\\nBest,\\n\\nthe Authors\"}", "{\"title\": \"Thank You for Your Feedback and Support\", \"comment\": \"Dear Reviewer 89BR,\\n\\nThank you for your valuable feedback and we will polish the paper based on your suggestions. We greatly appreciate your recognition of our extensive experiments and your support in raising the score!\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes SPARK to address the challenges of data scarcity and distribution shifts in dynamical system modeling. SPARK integrates boundary information and physical parameters by using an autoencoder, and then a pre-trained memory bank is obtained. It further combines Fourier-enhanced graph ODE to efficiently predict long-term dynamical systems. The experimental results have demonstrated the superiority of the proposed method against the baseline models across many dynamical systems under distribution shifts and limited data conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper presents an interesting idea for handling data scarcity and OOD, which are important topics in scientific machine learning.\", \"This paper is well-written and has a detailed presentation of methods, experimental setup, and results discussion.\", \"This paper has tested multiple challenging datasets, such as ERA5 and 3D systems.\"], \"weaknesses\": [\"The motivation for using each component in SPARK can be further clarified. The paper will benefit from discussing the interconnection between each network component.\", \"It would also be good to have ablation studies on incorporated physics. The authors may consider reducing physical information (i.e., boundary information and physical parameters) for pre-training. Then, we can see the contribution of each physical component.\"], \"questions\": [\"On Page 6, for RQ2, could you be more specific on what challenging tasks?\", \"What is the setup for OOD experiments?\", \"How do you compute PSNR and SSIM for scientific data? Image data has a fixed range of [0,255] but scientific data doesn\\u2019t.\", \"Energy Spectrum is a common metric for fluid dynamics. Is it also commonly used for reaction-diffusion equations? 
How does this paper compute the energy spectrum?\", \"Some minor typos:\", \"On Page 2, \\u201ceffectively long-term prediction\\u201d should be \\u201ceffective \\u2026\\u201d.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank You for Your Feedback and Support\", \"comment\": \"Dear Reviewer pH3K,\\n\\nThank you for your insightful review and dedication to the review process. We are glad to see that all your concerns have been addressed! We remain eager to address any further questions or concerns you may have.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"summary\": \"Data-driven methods for dynamical systems often face distribution shift challenges. To tackle this, this paper proposes SPARK, a physics-guided plugin to address both environmental distribution shift (due to changes in boundary conditions and physical parameters) and temporal distribution shift.\\nSPARK achieves this by incorporating the boundary information and physical parameters into a discrete memory bank constructed through solution reconstructions.\\nBy embedding these physical priors, the memory bank can then be used to augment data samples in downstream tasks, thereby increasing model generalizability.\\nTo handle the temporal distribution shift, SPARK encodes historical information into initial states through attention and uses Fourier-enhanced graph ODE for long-term prediction.\\nIn the end, the paper evaluates SPARK on several benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The idea of increasing model generalizability by augmenting data in downstream tasks with a pre-trained memory bank that contains boundary information and physical parameters is nice.\\n2. The paper evaluates the method on a good number of benchmarks.\", \"weaknesses\": \"1. The degree of originality is not high. It shares quite some similarities with the DGODE model in (Prometheus by Wu et al. 2024, cited by the paper), which proposed the idea of \\\"codebank\\\" to include the environmental factor for OOD and graph ODE for future predictions.\\n2. The algorithm is not clearly presented. The paper presented reasonable ideas but without enough technical details to tell a clear story. \\nMathematical notation is not clearly defined, which makes it hard to follow the method. For example, how is the \\\"real boundary\\\" (\\\"p^{boun}\\\")represented? Is it a list of spatial coordinates of discrete boundary nodes? Time index does not make sense in section 3.3. For example, T in equation 8 is the length of history observations but then it is also used in loss function in equation 11 to represent the number of future predictive steps, which is confusing. Index notation does not make sense in the pretraining loss equation (6).\\n3. It's unclear how the discrete memory bank is built. What are the e_i in E and how are they constructed ?\", \"questions\": \"1. How are the with and without OOD datasets constructed in the experiments? Is there an explanation for why SPARK achieved better performance on OOD cases even than other models did on non-OOD cases?\\n2. Given the big accuracy difference, what is the training cost comparison?\\n3. What are the training details, e.g., model architecture, optimizer, training devices? 
Is boundary information injected at two places, i.e., through node features and directly through the boundary latent vector B?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
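The SPARK rebuttals in the record above repeatedly refer to a VQ-VAE-style discrete memory bank: encoder outputs are matched to their nearest codebook embedding e_i and trained with the two stop-gradient terms of Eq. (6) quoted in the responses. A minimal sketch of that quantization step only (the function and variable names, the loss weights, and the use of PyTorch are assumptions made for illustration, not the authors' code):

```python
import torch

def quantize(h, codebook, mu=0.25, gamma=1.0):
    """h: (N, D) encoder outputs; codebook: (M, D) learnable embeddings e_i."""
    # Squared Euclidean distance between every latent vector and every codebook entry.
    d = (h.pow(2).sum(1, keepdim=True)
         - 2.0 * h @ codebook.t()
         + codebook.pow(2).sum(1))
    idx = d.argmin(dim=1)           # index of the nearest e_i for each latent
    z_q = codebook[idx]             # quantized latents retrieved from the memory bank
    # Straight-through estimator: the forward pass uses z_q, gradients flow back to h.
    z_st = h + (z_q - h).detach()
    # Commitment / codebook terms mirroring the structure of Eq. (6) in the responses.
    loss = mu * ((z_q.detach() - h) ** 2).mean() + gamma * ((z_q - h.detach()) ** 2).mean()
    return z_st, idx, loss
```

Retrieving the top-k nearest entries instead of the single nearest one, as in the authors' augmentation discussion, would replace the `argmin` with `d.topk(k, dim=1, largest=False).indices`.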
BYwdia04ZA
Measuring similarity between embedding spaces using induced neighborhood graphs
[ "Tiago Fernandes Tavares", "Fábio José Ayres", "Paris Smaragdis" ]
Deep Learning techniques have excelled at generating embedding spaces that capture semantic similarities between items. Often these representations are paired, enabling experiments with analogies (pairs within the same domain) and cross-modality (pairs across domains). These experiments are based on specific assumptions about the geometry of embedding spaces, which allow paired items to be found by extrapolating the positional relationships between embedding pairs in the training dataset, enabling tasks such as finding new analogies and multimodal zero-shot classification. In this work, we propose a metric to evaluate the similarity between paired item representations. Our proposal is built from the structural similarity between the nearest-neighbors induced graphs of each representation, and can be configured to compare spaces based on different distance metrics and on different neighborhood sizes. We demonstrate that our proposal can be used to identify similar structures at different scales, which is hard to achieve with kernel methods such as Centered Kernel Alignment (CKA). We further illustrate our method with two case studies: an analogy task using GloVe embeddings, and zero-shot classification using CLIP and BLIP-2 embeddings. Our results show that accuracy in both analogy and zero-shot classification tasks correlates with the embedding similarity. These findings can help explain performance differences in these tasks, and may lead to improved design of paired-embedding models in the future.
[ "Embedding Space Geometry", "Paired Representation Similarity", "Graph-Based Embedding Comparison" ]
Reject
https://openreview.net/pdf?id=BYwdia04ZA
https://openreview.net/forum?id=BYwdia04ZA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u5emRHHi1W", "hu6NxwLbuf", "ZTThMpGyG2", "Yh8m9a82Bm", "WxRUA2W9cj", "TAaSeSEvLN", "CYl5aN2AAM", "ASfZAFWxYF", "0hOW5uiWo0" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "meta_review", "official_comment", "official_review" ], "note_created": [ 1732618181466, 1733139182028, 1732617997599, 1730672234180, 1730556687833, 1737524144623, 1734700271499, 1732618136640, 1730601551120 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11760/Authors" ], [ "ICLR.cc/2025/Conference/Submission11760/Reviewer_cKTW" ], [ "ICLR.cc/2025/Conference/Submission11760/Authors" ], [ "ICLR.cc/2025/Conference/Submission11760/Reviewer_cKTW" ], [ "ICLR.cc/2025/Conference/Submission11760/Reviewer_e5Rj" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11760/Area_Chair_ge8k" ], [ "ICLR.cc/2025/Conference/Submission11760/Authors" ], [ "ICLR.cc/2025/Conference/Submission11760/Reviewer_A643" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your review!\", \"comment\": \"Dear reviewer e5Rj\\n\\nThanks for the effort put into reviewing our manuscript. Regarding your concerns:\\n\\n1. We have added experiments using BLIP-2 embeddings, the ImageNet dataset, and GULP similarities. These further corroborate with our current findings.\\n2. We removed the text from the scatter plots. Thanks for this suggestion; the figures are much clearer now.\\n3. We added GULP as another baseline for evaluation.\\n\\nWe hope to have addressed all your concerns, and kindly ask you for an increase in the review grade.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": [\"I thank the authors for their thorough rebuttal, which has addressed several points I raised in my review:\", \"Positioning NNGS much more as an additional metric tool alongside CKS or GULP as opposed to a replacement,\", \"inclusion of GULP, ImageNet and BLIP-2 as additional metric, dataset and embedder for comparison, respectively,\", \"and discussion of better interpretability of NNGS.\", \"The authors did a great job incorporating a much more comprehensive experimental study, and as a result, I have raised my score from 3 to 5. Unfortunately, I am still slightly advocating for rejection, because:\", \"While it is really great to see ImageNet added as an additional benchmark dataset, this only leaves two small-scale studies (Glove & Cifar100) and ImageNet. While older works such as CKA only study Cifar10/100, CKA has been validated and applied to many more datasets over the past years. Incorporating more datasets is in my eyes therefore essential, particularly as the\", \"my review listed other metrics (which was not at all meant as a comprehensive list) such as SVCCA, PWCCA, Brain-Score or RSA. While I understand why ContraSim is not a suitable comparison to make, what about comparison other metrics such as SVCCA or PWCCA? And if no comparison are made, following the discussion of ContraSim in the rebuttal, it's crucial to highlight why that is the case.\"]}", "{\"title\": \"Thanks for your comments!\", \"comment\": \"Dear reviewer cKTW,\\n\\nWe immensely thank you for the effort put into the reviewing process. We have considered all your concerns, and acted on them as follows:\\n\\n> Testing a metric on two small-scale case studies is simply insufficient.\\n> Moreover, the application of CLIP on just CIFAR-100 is insufficient.\\n\\nThanks for this kind suggestion. 
We have added experiments using:\n* GULP as an additional baseline\n* ImageNet as an additional dataset\n* BLIP-2 as an additional method to generate multimodal embeddings.\n\nWe increased the experimental report so that it now contains the visualizations for all tested approaches (NNGS, CKA, and GULP). Each experiment is now reported with visualizations accompanying the numeric results.\n\n> The authors insufficiently compare and contrast against other similarity measures (of which there are many)\n\nWe thank you for this review. We addressed this concern by adding GULP to our benchmark. We also considered ContraSim, but, as it is a learned measure, the number of necessary configurations to compare performance would be too high - in fact, any function could theoretically be approximated by ContraSim with adequate data and with large enough encoders.\n\n> In turn, this makes it unclear why NNGS should be preferred over other distance-based similarity measures.\n\nAs we have discussed above, we do not wish to claim to have an overall \"better\" metric. Rather, we show that NNGS is more effective and easier to interpret in the specific use cases we work with. There is no general consensus in the literature on the definition of \"similarity\", and each measure is built upon different assumptions about it. Hence, we now clarify in our conclusion that \"different similarity metrics can work harmoniously to provide different viewpoints about embedding spaces\".\n\n> It is not entirely clear to me why NNGS should, again, be preferred?\n\nThanks for this note.\n\nIn Table 1, we show that sigma can be adjusted in RBF-CKA, but it only relates to proximity if clusters have a more or less uniform composition. In our experiments, we were unable to tune RBF-CKA to differentiate noise within data clusters from shuffling of the data clusters themselves. However, finding the value of k only requires looking at the dataset and choosing the neighborhood size that is most interesting for the specific use case; that is, k is directly related to the data composition and easier to interpret.\n\nAs for the question of why NNGS should be preferred, the discussion is more profound. NNGS, CKA, GULP, and every other similarity measure account for different aspects of what could be defined as similarity in embeddings. In particular, we note that NNGS comes from defining similar embeddings as those in which neighborhoods are preserved, while GULP defines similar embeddings as those that lead to a similar error in a linear prediction. These underlying definitions are so different that it makes little sense to state that one should be preferred over the other a priori. However, if we know that what we wish to measure is the neighborhood similarity between paired representations of items, then NNGS should be preferred; likewise, if the problem at hand requires analyzing the prediction error of embeddings, then GULP should be preferred.\n\n> L328\n\nAs we have clarified, k is a parameter that depends only on the number of elements in each data cluster, whereas sigma depends on the in-cluster variance and the between-cluster distances. Hence, k can be immediately set and changed to tune NNGS to bring information on more local or more global scales. \n\n> L329:\n\nThe proposed kernels in the original paper are linear and RBF. It is likely that, theoretically, we could find a specific kernel for each dataset. 
However, this falls out of the scope of this paper, as our work is about using the \n\n> Novelty\n\nThanks for the reference. We note that our work was first submitted (and rejected) to NeurIPS 2024 on May 14th, as will soon be available on OpenReview, which precedes the first version of Sobal et al.'s preprint; however, we have added Sobal et al. as a reference in the present work.\n\nTheir work uses a soft neighborhood approach to train a multimodal machine similar to CLIP and finds a minor improvement in a downstream supervised classification process. We have provided a test in the same context (contrastive multimodal learning) that shows NNGS correlates with zero-shot classification performance, which was not approached by Sobal et al.'s work. Also, we note that we present a similarity measure to compare embeddings, not a loss function for contrastive learning, and it can be used in other contexts. Additionally, we note that the fact that a similar idea led to higher accuracy in a downstream task is further evidence that NNGS can give useful insight regarding embedding spaces.\n\nWe thank you for the notes that have helped increase the quality of our work. We hope to have addressed all your concerns, and kindly ask you for an increase in the review grade.\"}", "{\"summary\": \"The paper introduces a new metric, named \\u201cNearest Neighbor Graph Similarity (NNGS)\\u201d, designed to evaluate the similarity between embedding spaces by examining the structural similarity of induced neighborhood graphs. The authors use Jaccard similarity to assess overlap between nearest neighbors in paired embeddings. In doing so, NNGS enables comparisons across domains or modalities (e.g., text and image). NNGS is validated on two experimental case studies using analogies and GloVe embeddings, as well as CLIP embeddings on CIFAR100.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Generally, the paper is structured well, which as a result makes it fairly straightforward to follow the train of thought.\nMoreover, I believe that the authors did a good job providing some crucial theoretical fundamentals in support of NNGS across section 3, which offers some interesting insights on respective measure bounds, e.g. for two independent point clouds and variations thereof.\", \"weaknesses\": \"Unfortunately, I am currently strongly advocating for rejection, as the paper has several large issues alongside a lack of clarity in several important parts. I have ordered these based on their importance to me.\n\nOne major problem with this work is the lack of meaningful experiments, both with respect to their setup, as well as their breadth and depth.\n\n* Testing a metric on two small-scale case studies is simply insufficient. To understand if NNGS holds any relevant benefits over CKA, a much larger array of tests should be conducted. What happens when the metric is applied e.g. on CLIP, but with a more out-of-distribution dataset? How do insights hold when moving to larger, higher-dimensional variants? How does it transfer to other language-based similarity tasks? And for the one application to GloVe embeddings, the authors report a single correlation value, without any additional visualization, explanation, or discussion of differences.\n\n* Moreover, the application of CLIP on just CIFAR-100 is insufficient. For one, CIFAR-100 operates in much lower resolution than the training data used for CLIP. While still applicable, insights do not necessarily transfer. 
At the same time, the experimental design is problematic; relying on CLIPs sensitivity to template changes can fall victim to the known bag-of-words nature in CLIP (c.f. e.g. Yuksekgonul et al. 2022, https://arxiv.org/abs/2210.01936). At the same time, the differences in correlation to CKA, particularly given that experiments were conducted on just one dataset, are insignificant.\\n\\n* The authors insufficiently compare and contrast against other similarity measures (of which there are many) - both in their discussion of related works, and more importantly, their experimental case studies. Why simply focus on CKA, when SVCCA, PWCCA (Morcos et al. 2018, https://arxiv.org/pdf/1806.05759), ContraSim (Rahamim et al. 2023, https://arxiv.org/pdf/2303.16992), Brain-Score, RSA or metrics like GULP (https://arxiv.org/pdf/2210.06545) or a simple mean cosine similarity between clusters are all possible similarity measures to relate against performance changes. \\n\\nIn turn, this makes it unclear why NNGS should be preferred over other distance-based similarity measures. This also holds when looking at the theoretical motivation: there is no free lunch. By avoiding the explicit reliance of point distances, NNGS in return disregards relative relations between neighbouring points in a k-Neighbourhood, which in turn raises the question on whether this is a desired property or not.\\n\\n* The authors for example note that \\\"NNGS has two additional parameters when compared to CKA: the neighborhood size k, and the distance metric used to induce the neighborhood graph. In datasets with more than one cluster, these parameters can be manipulated to find similarities at different scales and in different situations.\\\" But in CKA, one can simply adjust the utilized kernel to adapt to different situations. Moreover, isn't the dependence on different scales and situations used as the motivating factor for NNGS over CKA, as e.g. noted in L108-111? It is not entirely clear to me why NNGS should, again, be preferred?\\n\\nSimilarly, there are several other elements of the provided motivation that remain unclear to me:\\n\\n* L328: \\\"Although theoretically the value of \\u03c3 in CKA with an RBF kernel plays the same role, it is easier to find suitable values for k than to \\u03c3. >>> But why? This is very handwavy, arbitrary reasoning. Sigma is simply a hyperparameter to tune like k.\\n\\n* L329: \\\"In special, we observe that even very small values of \\u03c3 were innefective to find local changes in the point clouds.\\\" >>> But in these cases, CKA does allow for simple changes in the underlying kernel to much better account for particular point cloud structures, no?\\n\\nFinally, the actual novelty is also limited: The Jaccard similarity is exactly defined as a distance over vertices' graph neighbourhoods. The singular novelty in this work is thus the application to a graph induces by some distance function; with works such as Sobal et al. 2024 (https://arxiv.org/pdf/2407.18134) having already investigated the use of distance-induced graphs for e.g. contrasive learning.\\n\\nAlso a smaller, nitpicky issue: The paper uses in parts weird formulation throughout, such as \\\"datablob\\\"; which I assume refers to clusters?\", \"questions\": \"See weaknesses. 
I am currently strongly advocating for rejection; but would be willing to raise my score if the authors significantly extend their experimental study, and provided convincing arguments regarding the novelty of the proposed NNGS.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new metric, named NNGS, to evaluate the similarity between paired item representations. NNGS is based on the structural similarity between the nearest-neighbors of induced graphs. Two case studies in analogy and zero-shot classification tasks demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a new metric that assesses the similarity between paired item representations by examining the structural similarity within the neighborhoods of induced graphs.\\n2. The effectiveness of the proposed method is demonstrated in two scenarios: analogy tasks and zero-shot classification tasks.\", \"weaknesses\": \"1. The methods discussed in the related work section are insufficient. For instance, only GloVe and CLIP are mentioned for semantic word embeddings and cross-modal embeddings, respectively. A broader range of relevant methods should be included, such as BLIP and Flamingo, to provide a more comprehensive review.\\n2. The quality of the figures does not meet the standards expected at top conferences like ICLR. For example, Figures 5 and 6 contain overlapping text, which significantly impacts readability and clarity.\\n3. The comparison is limited to CKA, a method proposed in 2019. To strengthen the evaluation, more recent methods should be included for comparison.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This submission proposes Nearest Neighbor Graph Similarity (NNGS) as a metric for evaluating similarity between embedding spaces by examining structural similarities within neighborhoods of induced graphs. While revisions during rebuttal addressed some concerns, significant issues remain, particularly in terms of novelty, experimental validation, and breadth of comparisons. Despite the added experiments, the scope remains limited, particularly for a metric that aims to generalize across embeddings and modalities.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers are still negative after the rebuttal and discussion.\\n\\nReviewer cKTW (Score: 5): While acknowledging improvements, raised ongoing concerns about experimental scope and lack of diverse comparisons, advocating for rejection.\\n\\nReviewer A643 (Score: 5): Criticized parameter sensitivity, computational inefficiency, and limited experimental breadth, maintaining a score below acceptance.\\n\\nReviewer e5Rj (Score: 5): Highlighted insufficient related work and comparisons, agreeing that the experimental section does not adequately validate the proposed metric.\"}", "{\"title\": \"Thanks for your review!\", \"comment\": \"Dear reviewer A643\\n\\nThanks for the effort put into reviewing our work. We have addressed your questions as follows:\\n\\n1. This discrepancy was found because the datasets were created differently. 
As discussed in Section 3.3 (and Appendix D), data for each row in Table 1 consists of data grouped within blobs (clusters) with a particular added noise, whereas the data for Figure 1 consists of a unique dataset with all data in a single cluster. The processes used to create the datasets are discussed in Section 3.2.2 (for Figure 1) and Section 3.3 (for Table 1).\\n\\n2. The parameter c is evidence that larger datasets could use proportionally larger neighborhood sizes to generate the same parameters. Different values of c relate to different values of k and should be chosen according to the desired neighborhood size for analysis, as we clarify in Section 3.3.\\n\\n3. A one-to-one correspondence is found in paired multimodal datasets used to train contrastive learning machines such as CLIP. We have added this as a footnote for quicker reference.\\n\\nAlso, we have corrected the reference to Table 1. Regarding weakness 5, we clarify that this is because Figure 1 uses a single-cluster dataset where the only distortion is additive noise, whereas the results in Table 1 regard datasets with two clusters, hence there are distortions caused by imbalance, noise in cluster positioning, and noise within each cluster. This discussion was added to the manuscript.\\n\\nWe hope to have addressed all your concerns, and kindly ask you to consider raising your score.\"}", "{\"summary\": \"This paper proposes a novel method, namely \\u201cNearest Neighbor Graph Similarity (NNGS)\\u201d, to evaluate the similarity between paired embedding spaces. Through case studies of the GloVe and CLIP models, this paper validates the effectiveness of NNGS, showing a correlation between similarity and task-specific accuracy such as analogy calculation and cross-modal zero-shot classification. These findings can help to understand the reasons behind performance differences in these tasks and provide directions for improving the design of paired embedding models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tIntroduces a novel metric, NNGS, for evaluating similarity between paired embedding spaces.\\n2.\\tNNGS excels in identifying similar structures across different scales, which is challenging for traditional kernel methods like CKA.\", \"weaknesses\": \"1.\\tNNGS is based on Jaccard similarity; it is therefore non-differentiable and cannot be used for end-to-end learning of structurally similar representations, limiting its use to measuring the similarity between paired embeddings of items.\\n2.\\tThis paper uses the k-nearest-neighbor algorithm, which has high computational complexity on large point sets and therefore requires substantial computing resources.\\n3.\\tNNGS is constrained by KNN because KNN is sensitive to outliers in the data, which may affect its performance.\\n4.\\tThe choice of k is crucial, and in high-dimensional spaces the performance of KNN may also deteriorate. Changing the value of k implies a modification of the locality of the similarity measure. 
Although NNGS demonstrates good performance, there is a lack of detailed discussion on how to select the value of k and other parameters.\\n5.\\tFigure 1 illustrates the relationship between NNGS and the number of selected k-nearest neighbors, indicating that NNGS increases as k increases; this contradicts the comparison with the last two columns of Table 1.\\n6.\\tIn Line 290, the reference to Table 1 is incorrectly labeled as Table 3.3.\", \"questions\": \"1.\\tIn Figure 1, the relationship expressed on the same curve represents the variation of NNGS with the number of selected k nearest neighbors, indicating that as the number of k nearest neighbors increases, NNGS also increases. This contradicts the data in the last two columns of the first row in Table 1. Could you please explain this discrepancy?\\n \\n2.\\tFrom Equation 7, it can be seen that c = k/(n - 1), where k is controlled by c. So, how should c be chosen?\\n \\n3.\\tIn Section 3.1, how is the existence of a one-to-one correspondence proven in real-world scenarios? Can the authors provide relevant theoretical or empirical support for this assumption?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BYoN2c0o6M
M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality Image Generation
[ "Sucheng Ren", "Yaodong Yu", "Nataniel Ruiz", "Feng Wang", "Alan Yuille", "Cihang Xie" ]
There exists recent work in computer vision, named VAR, that proposes a new autoregressive paradigm for image generation. Diverging from the vanilla next-token prediction, VAR structurally reformulates image generation into a coarse-to-fine next-scale prediction. In this paper, we show that this scale-wise autoregressive framework can be effectively decoupled into \textit{intra-scale modeling}, which captures local spatial dependencies within each scale, and \textit{inter-scale modeling}, which models cross-scale relationships progressively from coarse-to-fine scales. This decoupled structure allows us to rebuild VAR in a more computationally efficient manner. Specifically, for intra-scale modeling --- crucial for generating high-fidelity images --- we retain the original bidirectional self-attention design to ensure comprehensive modeling; for inter-scale modeling, which semantically connects different scales but is computationally intensive, we apply linear-complexity mechanisms like Mamba to substantially reduce computational overhead. We term this new framework M-VAR. Extensive experiments demonstrate that our method outperforms existing models in both image quality and generation speed. For example, our 1.5B model, with fewer parameters and faster inference speed, outperforms the largest VAR-d32-2B. Moreover, our largest model M-VAR-d32 impressively registers 1.78 FID on ImageNet 256$\times$256 and outperforms the prior-art autoregressive models LlamaGen/VAR by 0.4/0.19 and popular diffusion models LDM/DiT by 1.82/0.49, respectively.
[ "Scale-wise Autoregressive Model" ]
https://openreview.net/pdf?id=BYoN2c0o6M
https://openreview.net/forum?id=BYoN2c0o6M
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tsOY5cdDdq", "iaB6ZrDIYh", "hnaqIPRjFo", "bq4yTbLhNu", "HW1vQJ4us3" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731453318230, 1730710426234, 1730700122377, 1730701899034, 1730660619658 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1959/Authors" ], [ "ICLR.cc/2025/Conference/Submission1959/Reviewer_sAvT" ], [ "ICLR.cc/2025/Conference/Submission1959/Reviewer_aiUi" ], [ "ICLR.cc/2025/Conference/Submission1959/Reviewer_duhF" ], [ "ICLR.cc/2025/Conference/Submission1959/Reviewer_qR12" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a new autoregressive image generation framework, M-VAR, which leverages bidirectional self-attention for intra-scale modeling and the Mamba mechanism for inter-scale modeling.\\nThe proposed method seems improved both the computational efficiency and the quality of generated images compared to VAR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The statistics of attention score and computation cost of the attention in VAR are interesting and inspiring.\\n\\n(2) The combination of intra-scale self-attention and inter-scale linear modeling seems a reasonable solution to improve the computational efficiency of VAR.\\n\\n(3) The largest model of M-VAR achieves SOTA FIDs on ImageNet dataset.\", \"weaknesses\": \"(1) The decoupling of scale-wise autoregressive modeling seems reasonable, but why we must adopt Mamba? Other efficient self-attention variants should also be considered.\\n\\n(2) In Table 2, M-VAR-dX usually has more parameters than VAR-dX. Are these additional parameters help M-VAR for better performance? \\n\\n(3) The computational FLOPS are not discussed in this article, since the number of paramters is not the only factor affecting computational efficiency. \\n\\n(4) The curve shown in Figure 5 might seem counter-intuitive, why global attention performances worse despite its global modeling capacity? \\n\\n(5) Some typos, e.g. L480 'As shown in Table 5', it should be 'Figure 5'?\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a hybrid framework combining Mamba and Attention mechanisms for scale-wise autoregressive image generation. While the approach appears standard, the claim made by the authors is that the decoupling of intra-scale and inter-scale modeling improves computational efficiency and image quality.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is generally well-written and easy to follow.\\n2. It includes numerous objective metrics that contribute to the evaluation.\\n3. Observing Table 1, it is evident that reducing intra-scale attention operations is necessary due to the computational cost highlighted.\", \"weaknesses\": \"1. The presentation could be improved as some figures, such as Figures 2, 3, and 5, are overly large and impact readability.\\n2. Despite the inclusion of many metrics, several tables exhibit issues:\\n * In Table 2, under the section \\\"Generative model comparison,\\\" the comparison between Scale-wise Autoregressive models (M-VAR and VAR) seems unfair. 
For example, the last two rows show that M-VAR (depth 32) with 3B parameters outperforms VAR (depth 30) with 2B parameters, but the parameter count for M-VAR is 50% higher.\\n * Additionally, inference time increases from 0.7s to 1s (a 43% increase) despite only slightly better FID and IS scores.\\n3. Table 6 appears to lack significant information and could be made more concise for clarity.\\n4. It is suggested that the data in Table 1 be illustrated as a figure to better highlight this critical motivation behind the work.\", \"questions\": \"Was the VAR-d36 model trained by the authors since it has not been released? (in Table 4)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work builds upon the prior work VAR [1] model for autoregressive multiscale image generation. The work shows that inter-scale dependencies have higher computational cost compared to intra-scale dependencies and extends the inter-scale attention mechanism with Mamba-like attention. Experiments on ImageNet 256 and class-conditional 512 show that model performs better than VAR in terms on the FID and IS scores.\\n\\n[1] Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The proposed approach provides statistics on the computational overhead of the intra-scale and inter-scale attention modules for autoregressive multiscale image generation. The statistics are used to design a new mamba based attention module of modeling inter-scale dependencies\", \"Adequate experiments and ablations are performed that show better FiD and IS compared to prior work.\"], \"weaknesses\": [\"The paper is very difficult to read. In eq. 2, the parametrization \\\\theta is not defined. ll. 201-202 are not correct. There are a lot of broken sentences and grammatical ill-constructed sentences. For example, ll. 213 \\\"The sequence S of multiple scales is much longer than each scale (s1, ..., sn)\\\" is not clear. ll. 229 -231 are broken.\", \"What is meant by the attention score, reported in Table 1. how is this score computed is not defined or explained.\", \"Images have a local dependency structure. Therefore intra-scale dependencies are easier to model. It will be good to provide an evidence with the pixel correlations on the dataset considered as a function of inter-pixel distance.\", \"In table 4, how is the inference time of M-VAR lower compared to the VAR model while in table 5 its slightly higher or comparable. The paper mentions that the reduction is quadratic in computational efficiency. How do these results demonstrate the effect?\", \"The number of parameters for the proposed model are much higher than the baseline VAR model. The work claims to improve the computational cost of the baseline. How do these results justify the claim.\", \"Prior work [a,b,c,d], also performs multi-scale image generation. How does this approach compare to the prior work? A line work exists on multi-scale image generation with autoregressive models. The related work does not discuss the prior work for multi-scale image generation.\", \"[a] Mahajan, Shweta and Roth, Stefan. PixelPyramids: Exact Inference Models from Lossless Image Pyramids. In ICCV, 2021.\", \"[b] Xuezhe Ma, Xiang Kong, Shanghang Zhang, and Eduard H.Hovy. MaCow: Masked convolutional generative flow. 
In NeurIPS, 2019.\", \"[c] Jacob Menick and Nal Kalchbrenner. Generating high fidelity images with subscale pixel networks and multidimensional upscaling. In ICLR, 2019.\", \"[d] Scott E. Reed, A\\u00e4ron van den Oord, Nal Kalchbrenner, Sergio Gomez Colmenarejo, Ziyu Wang, Yutian Chen, Dan\", \"Belov, and Nando de Freitas. Parallel multiscale autoregressive density estimation. In ICML, 2017.\"], \"questions\": [\"What is attention score and how is it computed? Are these results on the test images? If so, how many images are considered for the statistics?\", \"How does the increased number of parameters correlated with the claimed computational efficiency of the model.\", \"Also see weaknesses above, for additional reviewer questions and concerns.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces M-VAR, a new autoregressive image model based on VAR. The core idea is to decouple VAR into intra-scale modeling and inter-scale modeling. For intra-scale modeling, softmax attention is used, while mamba is used for inter-scale modeling. On ImageNet 256x256 and ImageNet 512x512, M-VAR achieves better efficiency/FID than VAR.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-written. The motivation is clear and reasonable. The proposed method is also presented clearly.\\n2. On ImageNet 256x256 and 512x512, M-VAR demonstrates better results than VAR.\\n3. Ablation study shows intra-scale attention + mamba works better than global attention on VAR.\", \"weaknesses\": \"1. Technical contribution is limited. Replacing global attention with hybrid model architectures has been extensively explored in the AI community. A big concern of such designs is that they may not preserve advantages after scaling up and applying them to real-world use cases (e.g., text-to-image generation). Given that this work only has ImageNet results, the value of the current manuscript is limited for the community.\\n2. It is unclear why M-VAR can deliver better FID than VAR. From the model capacity perspective, global attention should have a stronger/similar capacity than intra-scale attention and mamba. \\n3. Current design choices seem quite random, lacking detailed ablation studies. For example, there are many different choices for intra-scale modeling and inter-scale modeling (RWKV, linear attention, etc). Is there any insight on why choosing the current design?\\n4. According to the ImageNet experiments, the improvements look a bit incremental.\", \"questions\": \"1. What's the setting for speed comparison (hardware, inference engine, batch size, etc)? In addition to relative speedup ratios, adding measured latency/throughput in the tables will be better.\\n2. Why M-VAR can deliver better FID than VAR? I can see that M-VAR has advantages over VAR from the efficiency perspective. But, from the model capacity perspective, I do not see clear advantages.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
BYWVwmbqwK
Unpaired Single-Cell Dataset Alignment with Wavelet Optimal Transport
[ "Peter Pao-Huang", "Kyra Thrush-Evensen", "Daniel Montemayor", "Cyril Lagger", "Morgan Levine" ]
Aligning single-cell samples across different datasets and modalities is an important task with the rise of high-throughput single-cell technologies. Currently, collecting multi-modality datasets with paired samples is difficult, expensive, and impossible in some cases, motivating methods to align unpaired samples from distinct uni-modality datasets. While dataset alignment problems have been addressed in various domains, single-cell data introduce additional complexity including high levels of noise, dropout, and non-isometry between data spaces. In response to these unique challenges, we propose Wavelet Optimal Transport (WOT), a multi-resolution optimal transport method that aligns samples by minimizing the spectral graph wavelet discrepancies across datasets. Filters are incorporated into the optimization process to eliminate non-essential scales and wavelets, enhancing the quality of correspondences. We demonstrate the capacity of WOT in highly noisy and non-isometric conditions, outperforming previous state-of-the-art methods by significant margins, especially on real single-cell datasets.
[ "single cell", "optimal transport", "unpaired dataset alignment", "spectral graph wavelets", "gromov wasserstein" ]
Reject
https://openreview.net/pdf?id=BYWVwmbqwK
https://openreview.net/forum?id=BYWVwmbqwK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w9iY4Dfh3b", "rnT0LGwU1b", "qDNuj8reaB", "p2ZE0siicW", "oxhaZQKVtO", "oXytpIunep", "oSrKgjQktq", "nhD5TuHKQv", "kOiUaQfnUH", "iXJ5UAoEee", "huFzodeP0l", "gfRj5Xlcng", "eK9b66Dome", "MxjPf86EMZ", "KE0hAixcw0", "I76YmU84uN", "CK4FUdCgSg", "8jbHrL1QCt", "7nPh4UQR6F", "2J0vuejRTX" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730156384657, 1731954147034, 1732230918279, 1731953746991, 1732920449481, 1731953176351, 1730635583928, 1732885853083, 1731954242718, 1731953378635, 1732312177392, 1734431999900, 1732312058424, 1737523862032, 1730541810840, 1732217896519, 1731131352520, 1731954482752, 1731952592305, 1731953925687 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_Txuv" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_1Hb6" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_oyEN" ], [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_tWjN" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Area_Chair_wRpQ" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_tWjN" ], [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_1Hb6" ], [ "ICLR.cc/2025/Conference/Submission7769/Reviewer_1Hb6" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ], [ "ICLR.cc/2025/Conference/Submission7769/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Wavelet Optimal Transport (WOT) for aligning unpaired single-cell datasets across different modalities. The core contribution lies in using spectral graph wavelets to decompose dataset signals into multiple scales and using filters during optimal transport. The authors propose two variants: E-WOT (entropy-based filtering) and L-WOT (learned filtering). This framework is claimed to generalize Gromov-Wasserstein and shows better performance on noisy data alignment tasks. Various experiments are performed on synthetic data, shape correspondence tasks, and real single-cell multi-omics datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1: The paper addresses a well-motivated problem (dataset alignment in single cell technologies). The proposed method outperforms many baselines in several setups.\", \"s2\": \"Multi-scale aggregation is a valid approach which has connections to Gromov-Wasserstein under specific conditions (Remark 1). The framework itself is general enough to allow a broad design space with many things to tune for improved performance (kernels, filter, scale aggregation, graph construction...)\", \"weaknesses\": \"W1: From a high-level perspective, the method combines established concepts (SGW and Gromov Wasserstein with filtering approaches). 
While the combination is novel, there appears to be limited theoretical development to help understand why and when the approach works well. Additionally, there are multiple moving parts that would benefit from a more detailed ablation study to shed light on different design choices. For instance,\\n\\na) Comparing at multiple scales is the primary motivation for this approach. I think it would be helpful to show (e.g., in experiments) which scales are most relevant and how much they contribute to performance. This is particularly important given the non-negligible performance variability between different WOT variants. In some experiments, E-WOT and L-WOT show varying performance (sometime underperforming some baselines). Without understanding this variability, it is challenging to predict when WOT will outperform existing methods or which variant to use.\\n\\nb) This may be a minor point, but there are limited discussions of when to use which aggregation method (sum, max, potentially other weighting schemes) and how this choice affects results. \\n\\nThe overall impression is that there are many tunable components (scale selection, filtering approach, aggregation method, kernel choice) without sufficient guidance or understanding of their interactions.\", \"w2\": \"a) L-WOT may require some discussion on when or whether it converges at all. b) Is entropy always a good heuristic to emphasize informative scales? Using KDE with tunable bandwidth seems to introduce further complexity (in the Appendix, the authors report fixing Gaussian KDE bw at 0.4; have the authors considered adaptive bandwidth like Scott's or Silverman's?)\", \"w3\": \"It is not clear whether the experiments exclusively use the Chebyshev polynomial approximation. The paper would be strengthened if there is some analysis/comparison between the Chebyshev approximation vs the exact computation to provide some understanding of the trade-off between speed and performance.\", \"w4\": \"I think that runtime/memory benchmarking for all methods should be discussed more thoroughly (beside what is provided in Appendix E). What are the runtime for each method in each experiment? Moreover, constructing large, fully-connected graphs is expensive. It helps to understand the computational/memory cost when evaluating different methods for practical applications.\", \"questions\": \"Q1: Why is this work not positioned as a general framework for handling noisy, non-isometric spaces (many of which the authors leave as future directions)? Are there inherent limitations to it when applied to other domains? It is true that single-cell data provides an important application, but if the method is more general then I think it may be a good idea to broaden the experiment settings to truly evaluate its effectiveness against other baselines.\", \"q2\": \"The authors state they cannot conduct hyperparameter tuning due to the unpaired setting (with some related details in Appendix C), is this a standard practice to pretend we don't have ground truth for tuning in these domains? Since there are many things to tune, how practical is the approach compared to the baselines? If we would like to avoid falling back to using heuristics, then how common is prior scale knowledge in practical settings? (In the paper, the authors cite Deutsch et al. 
(2016) but it seems generic and is not clear what scale is considered noise since there are no relevant experiments)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer\\u2019s concerns and questions below:\\n***\\n**Comparing at multiple scales is the primary motivation for this approach. I think it would be helpful to show (e.g., in experiments) which scales are most relevant and how much they contribute to performance.**\\n\\nPlease view our response to this question in the \\u201cResponse to Common Questions & Concerns\\u201d comment.\\n***\\n**There are limited discussions of when to use which aggregation method (sum, max, potentially other weighting schemes) and how this choice affects results. The overall impression is that there are many tunable components (scale selection, filtering approach, aggregation method, kernel choice) without sufficient guidance or understanding of their interactions.**\\n\\nPlease view our response to this question in the \\u201cResponse to Common Questions & Concerns\\u201d comment.\\n***\\n**L-WOT may require some discussion on when or whether it converges at all.**\\n\\nSince L-WOT is a bilevel optimization problem where both the inner and outer loops are nonconvex, it is difficult to make any rigorous statements about the convergence of L-WOT. However, in practice, we find that limiting the number of outer loops to 100 is sufficient to obtain good performance in experiments. \\n***\\n**Is entropy always a good heuristic to emphasize informative scales?**\\n\\nWe **added a new section (Appendix 5.2.2)**, which explores the different wavelet scales and the informative ones. Specifically, in subsection \\u201cFilter Scales\\u201d of Appendix 5.2.2, we find that entropy closely matches the ideal filter for both single-cell datasets, demonstrating that is a good heuristic at least for scGEM and SNARE-seq. \\n\\nHowever, heuristics by definition can not always be good in all scenarios; the entropy heuristic is no exception. For instance, if one scale is uniformly distributed at random, the entropy heuristic would emphasize this scale even though it is complete noise. In practice, as shown in the experiments, we still obtain good results with this heuristic even if there are potential cases in which it may fail.\\n***\\n**Using KDE with tunable bandwidth seems to introduce further complexity (in the Appendix, the authors report fixing Gaussian KDE bw at 0.4; have the authors considered adaptive bandwidth like Scott's or Silverman's?)**\\n\\nWe did not try adaptive bandwidths for the Gaussian KDE since each dataset is unit normalized. \\n***\\n**It is not clear whether the experiments exclusively use the Chebyshev polynomial approximation. The paper would be strengthened if there is some analysis/comparison between the Chebyshev approximation vs the exact computation to provide some understanding of the trade-off between speed and performance.**\\n\\nOur implementation exclusively uses the Chebyshev polynomial approximation, as stated in Section 3.1. This approximation is well-established in spectral graph wavelets literature [1] providing extensive theoretical guarantees and empirical validation. 
Since our work builds upon this foundation, we believe that re-validating the Chebyshev approximation would not substantially strengthen our contribution or provide new insights beyond what is already established in the literature. Our experimental results across multiple settings demonstrate that using the Chebyshev approximation effectively serves our method objective in handling noise and non-isometry in single-cell dataset alignment.\\n***\\n**I think that runtime/memory benchmarking for all methods should be discussed more thoroughly (beside what is provided in Appendix E). What are the runtime for each method in each experiment? Moreover, constructing large, fully-connected graphs is expensive. It helps to understand the computational/memory cost when evaluating different methods for practical applications.**\\n\\nWe believe the runtime/memory benchmarking is comprehensively covered in Appendix E where we provide\\n1. A detailed comparative table showing how each method scales with respect to feature dimensionality, number of samples, number of scales, and choice of wavelet kernels\\n2. Explicit timing benchmarks comparing our methods against GW-OT across different dataset sizes (from n=100 to n=10,000)\\n\\nRegarding fully connected graphs, this is a preprocessing step for computing intra-dataset distances and is not part of the core optimal transport algorithm - both our method and baselines require this distance computation.\\n\\nHowever, if there are specific aspects of the computational analysis you would like us to elaborate on, we would be happy to clarify those sections.\\n***\\n**Why is this work not positioned as a general framework for handling noisy, non-isometric spaces (many of which the authors leave as future directions)? Are there inherent limitations to it when applied to other domains?**\\n\\nPlease view our response to this question in the \\u201cResponse to Common Questions & Concerns\\u201d comment.\\n***\"}", "{\"title\": \"Minor comment on Question 5\", \"comment\": \"In my opinion, a more informative experiment to demonstrate the value of wavelets would have been to compare this approach against GW with another spectral / graph-based distance that doesn't involve wavelets. Euclidean distance is pretty much guaranteed to not be great for high-dimensional applications like this, even if dimensionality reduction is used because in single-cell datasets, people typically use more than a couple PCs, for example, in order to capture majority of the variability. So in practice, I believe people use heat kernels or shortest path distances in nearest neighbor graphs in GW. Having said that, I have personally been more focused on the non-simulated data applications.\"}", "{\"comment\": \"We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer\\u2019s concerns and questions below:\\n***\\n**Few misspellings in the main text (e.g., in line 012, 'throughput' should be 'throughout').**\\n\\nThis is not a misspelling. High **throughput** technologies are instruments that can analyze and create data for a large batch of samples.\\n***\\n**Figure 1 lacks clarity in illustrating both the task and framework.**\\n\\nCan you be more specific about what part of the figure lacks clarity? 
\\n\\nThe task is visualized through two graphs (X and Y) with nodes that need to be matched across datasets, specifically highlighting how nodes C and D in X correspond to nodes 3 and 7 in Y.\", \"the_framework_is_explicitly_shown_through\": [\"Multiple layers representing different scales (S0, S1, S2...)\", \"The spectral graph wavelet coefficients (\\u03c8) that define relationships between nodes\", \"A visual representation of how these multi-scale views contribute to the total cost C\", \"The filter F that removes uninformative scales and wavelets\"], \"the_figure_directly_communicates_our_core_concept\": \"WOT finds matches between points by considering their relationships at multiple scales, illustrated through the layered representation and the mathematical formulation above.\\n\\nIf you have more specific points for improvement, we would be happy to include them. \\n***\\n**Lack of comparisons with more recent single-cell paired alignment methods such as Harmony and scDML.**\\n\\nIn unpaired dataset alignment, the methods referenced in Table 2 are current state-of-the-art methods.\\n\\nNote that while many newer works may fall into the category of data alignment, they often have very strong assumptions between the data spaces and incorporate those assumptions into their method. For example, Harmony [1] assumes knowledge of cell type labeling or batch information. Other examples include assuming a prior knowledge graph between features [3], utilizing weakly linked features [2], and more [4,5]. In short, these methods are not truly \\\"unpaired alignment\\\" methods and thus would not be a fair and meaningful baseline. Additionally, these methods cannot operate on modalities such as brightfield imaging/scRNA-seq, etc., or other modalities outside of single-cell biology where these assumptions may not hold.\\n\\nFurthermore, scDML is a data clustering technique, not a cross dataset alignment technique. Specifically, it corrects for batch effect for datasets in the same space (i.e. given two RNA-seq datasets, cluster the two). This is a different task altogether than ours because we\\u2019re trying to align datasets **not** in the same space (i.e. given an RNA-seq dataset and an ATAC-seq dataset, align the two). \\n***\\n**The description of the weakness of related works in Section 2.1 remains for more detailed analysis about these methods.**\\n\\nOur paper provides an analysis of related works' limitations through both general discussion and experiments:\\n- Section 2.1 introduces existing methods and their general limitations\\n- Section 4.1 explicitly demonstrates how baseline methods like GW fail in high-noise scenarios\\n- Section 4.2 shows baseline limitations in handling non-isometric relationships\\n- Real single-cell experiments in Section 4.3 validate these limitations in practice\\n\\nRather than just stating weaknesses, we systematically demonstrate them through carefully designed experiments that isolate failure modes. The progression from controlled experiments to real data provides clear, empirical evidence of where and why existing methods fall short (noise and non-isometry). If the reviewer has specific aspects of the baseline methods they feel warrant deeper analysis, we welcome more detailed feedback.\\n***\\n**The study lacks a broader range of experimental scenarios involving real single-cell multi-omic datasets. 
Results would be more convincing with diverse single-cell multi-omic datasets from different tissues and sequencing technologies, to demonstrate the method's effectiveness across various empirical conditions.**\\n\\nWe agree that aligning samples across different tissues would be useful. However, our current choice of the scGEM and SNARE-seq datasets was deliberate and comprehensive for a couple of reasons:\\n1. These datasets represent distinct biological scenarios and technical challenges:\\n * scGEM captures a dynamic process (cell reprogramming) with gradual state transitions\\n * SNARE-seq represents discrete cell types with clear cluster boundaries\\n * They use different measurement technologies (gene expression/DNA methylation vs RNA/chromatin accessibility)\\n2. These datasets are well-established benchmarks in the field, enabling direct comparison with multiple baseline methods using standardized evaluation metrics.\\n\\nWhile additional datasets could provide further validation, our current experiments already demonstrate WOT's performance across significantly different biological contexts and technical conditions.\\n***\"}", "{\"comment\": \"Thank you for the reviewer's continued discussion of our work. We respectfully disagree with the assessment of practical applicability and would like to clarify some important points:\\n\\n(1) The fundamental issue is that paired samples or prior labeling information (like cell-type labeling) are challenging or even impossible to obtain in many real-world scenarios. In limited cases where this information is available, it is obvious that one should leverage this prior information for better alignment. However, _in most practical cases_, it is not a matter of deciding between paired or unpaired methods, but rather, one can **only** use unpaired methods. \\n\\nFor example, in modalities like spatial transcriptomics, single-cell metabolomics, and single-cell glycomics, obtaining definitive cell type labels is nearly impossible due to technical limitations and the complex nature of these measurements. Even in more established modalities like scRNA-seq and scATAC-seq, obtaining cell-type labeling (used as input for scJoint) requires additional complex computational or experimental procedures such as manual expert annotation, automated tools like SingleR, or extensive marker gene analysis. As a result, not all scRNA-seq or scATAC-seq datasets have cell type labeling information. In these common cases where prior information is not provided, one can still use WOT but cannot use works like scJoint. Therefore, in practice, our method is much more applicable to real problem settings.\\n\\n(2) Beyond the issue of requiring often unattainable prior information, existing methods are typically restricted to specific modality pairs. For instance, scJoint can only align between scRNA-seq and scATAC-seq, making it unsuitable for other experimental datasets like scGEM (scRNA-seq and DNA methylation). As demonstrated in our single-cell experiments, WOT can align multiple modalities and is theoretically designed to align any modality combination. While we acknowledge that there are inherent tradeoffs between generality and specificity in any method, we have intentionally developed WOT as a general-purpose solution that can serve as a \\\"swiss-army knife\\\" for any type of modality alignment task (as described in our introduction).\\n\\nFurthermore, we must correct a misunderstanding about Harmony's capabilities. 
Harmony cannot align between different modalities, including scRNA-seq and scATAC-seq. As explicitly stated in its methods section and \\\"Assumptions about Input Data,\\\" Harmony was designed specifically for aligning scRNA-seq data. Consequently, it cannot be applied to our single-cell datasets, which require alignment between different modalities. \\n\\nIn summary, we want to emphasize that the framing of choosing between paired and unpaired methods is misleading. While paired information should certainly be leveraged when available, the reality is that most real-world situations are inherently _unpaired_. In these cases, there is no decision to make between paired and unpaired methods - unpaired methods are the only viable option. As such, we believe that WOT is a practically useful method that can align between any pair of single-cell modalities in a completely unpaired manner.\"}", "{\"comment\": \"We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer\\u2019s concerns and questions below:\\n***\\n**I read it through multiple times and it was still quite confusing and not sure who the audience is. This is clearly not going to be the computational biology practitioners as the paper is too technical without providing connections and insights sufficiently to the single-cell applications.**\\n\\nWe respectfully disagree that the paper lacks clear motivation and connection to single-cell biology. Our method was specifically designed to address two critical challenges unique to single-cell data alignment: high technical noise and non-isometric relationships between modalities. These are not arbitrary technical innovations looking for applications - they directly respond to fundamental challenges in the single-cell field as detailed in our introduction.\\n\\nUnlike traditional GW methods that were made for isometric matching, WOT was intentionally made to handle the complex realities of single-cell data. The progression of our experiments deliberately and consistently demonstrates this connection back to single-cell: we first (1) isolate and validate WOT's ability to handle noise (bifurcation experiment) and non-isometry (shape correspondence experiment) before (2) showing its effectiveness in real single-cell datasets.\\n\\nWe also respectfully disagree that the paper's technical depth makes it inaccessible to computational biology practitioners. Computational biology, particularly single-cell analysis, regularly uses technically challenging mathematical methods from other fields - from manifold learning in trajectory inference to probabilistic modeling. Our target audience is precisely these practitioners.\\n\\nAdditionally, much like how GW was initially developed for object matching but has now been applied to even the field of single-cell [1], our method is initially developed for single-cell but can also be applied in future works to other domains; this general applicability is a benefit rather than a drawback. \\n***\\n**It was not clear what alignment of \\\"unpaired\\\" single-cell dataset would actually mean. 
What would be the use case for this, how would biologists benefit from such alignment for scientific research?**\\n\\nAs stated throughout the introduction and mathematically explicit in lines 158-161, unpaired alignment between single cell datasets means that we want to define the mappings between cells of one modality (like scRNA-seq) to cells of another modality (like ATAC-seq) without any apriori knowledge on the joint distribution of these modalities.\\n\\nSince each modality provides a different measurement (i.e. view) of the cell state, biologists often want to have data from multiple modalities from the same cell. However, this case is often impossible since many measurements are destructive, meaning you cannot conduct multiple measurements on the same cell. Hence, having unpaired single-cell alignment methods like WOT is important because it effectively provides data from multiple modalities on the \\u201csame\\u201d cell even when it is biologically impossible to measure these modalities on the same cell. \\n\\n***\\n**The authors need to keep referring back & connecting to the single-cell example/application throughout the presentation of the algorithm. Section 3 is almost entirely separated from the single-cell application, and it is unclear how these two are related. For instance, the \\\"scale\\\" is discussed a lot in the algorithm, but what does scale actually mean in single-cells?**\\n\\nWe agree that some terms like \\u201cscale\\u201d in the methods section may be ambiguous in how they relate to single-cell. However, since Section 3 is our Methods section, we deliberately structured the paper to first establish the mathematical foundation of WOT before demonstrating its practical relevance to single-cell biology in the Experiment section. This is standard practice for methods papers, where a clean technical presentation enables readers to fully understand the approach before seeing its application.\\n\\nTo clarify the meaning of scale, it intuitively represents the high or low frequency patterns of the single cells dataset. While there is not a direct a one-to-one correspondence with a specific biological variation, one could imagine the low-frequency scales (higher valued scales) corresponding to global, slow-moving patterns like cell type while high-frequency scales (lower valued scales) represent local, fast-moving patterns like noise. Each scale represents a different level of pattern resolution from the more global scale information to local scale information. The goal of WOT is to separate the various scales of each single-cell dataset and align the most common scales, which directly reduces noise and non-isometry between different datasets. \\n\\nTo further address these concerns, we **added a new section (Appendix D.2.2)** to visualize and attempt to explain the scales of wavelets for single-cell datasets. \\n***\"}", "{\"summary\": \"The authors present Wavelet Optimal Transport (WOT) based on spectral graph wavelets to align different single-cell datasets, which can be challenging due to noise, dropout, and batch effects. 
By introducing two versions of WOT, namely L-WOT and E-WOT, the authors demonstrate that these methods are collectively better the the state-of-the-art methods on two simulation and two real single-cell datasets.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors address important problem of aligning unpaired single-cell datasets, which seems to not have been addressed in the literature previously. While many unpaired methods exist in different applications, such as image-to-image translation as the authors rightly point out, single-cell context requires additional careful considerations. The mathematical details are thorough and the algorithm itself is familiar to the literature, as it relies on projected gradient descent and/or alternating minimization schemes. I think when the paper is put in right format and much more user-friendly writing style, it will be a great contribution to the community\", \"weaknesses\": [\"The biggest weakness of this paper is the **presentation**. I read it through multiple times and it was still quite confusing and not sure who the audience is. This is clearly not going to be the computational biology practitioners as the paper is too technical without providing connections and insights sufficiently to the single-cell applications. It felt as though the authors had the idea of WOT on spectral graph wavelets first and then tried to find for some relevant applications afterwards, which led to awkward connections. The experiments felt a bit rushed in that the main highlights (Section 4.3) are only dedicated half a page, without clear and thorough investigations on why the WOT algorithms might be performing better (I also didn't like the practice of \\\"taking\\\" the numbers from other papers).\", \"It was not clear what alignment of \\\"unpaired\\\" single-cell dataset would actually mean. What would be the use case for this, how would biologists benefit from such alignment for scientific research? For image-to-image translation, yes there are clear reasons, but for alignment of single-cell dataset, it was not so clear from the paper\", \"The authors need to keep referring back & connecting to the single-cell example/application throughout the presentation of the algorithm. Section 3 is almost entirely separated from the single-cell application, and it is unclear how these two are related. For instance, the \\\"scale\\\" is discussed a lot in the algorithm, but what does scale actually mean in single-cells? The wavelets can provide a basis for decomposition in spectral & temporal domain, but what does it actually mean for single-cell images?\", \"I think the experimental results are not presented in reader-friendly manner. The evaluation metrics keep changing (geodesic, label transfer accuracy), which the practitioner not familiar with them would have hard time understanding - What do each of these metrics really tell you? Wouldn't it be also important to discuss what filters are used and learned for L- and E-WOT in these real-world examples? What do each of baseline in real-world example do (SCOT, UNIONCOM, etc..) and how/why WOT methods are better as we see in the table?\", \"\\\"L-WOT performs much better than E-WOT and existing methods on the scGEM while the inverse is seen in SNARE-seq. 
A potential reason for this difference is that scGEM profiles cells in dedifferentiation, so the boundaries of cell types are not as clear as those of SNARE-seq ~\\\" => This feels like scratching the surface and not really getting at the reason why one might perform better than the other\", \"Some illustrative (quantiative and qualitative) figure on the experiment results for 4.3 would be informative.\", \"All in all, the authors should focus much less on the mathematical underpinnings but focus more on biological aspect for this to be a valid contribution to the community.\"], \"questions\": [\"The simulation dataset seems to have dimension of from 1,000 to 2,000, but the real-world single-cell datasets remain at 10~30. Could you explain if this is common combination and how this might affect the algorithm?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the author's detailed response and careful revision of the manuscript. After carefully reading the revised manuscript and other reviewers' opinions, some of my confusions were answered, but there are still some concerns that I think the author needs to further clarify. The following are the most important ones (note, not all questions, just those that I think need more response):\\n\\n1. Regarding the author's view on Harmony, I am skeptical. First, although they are paired methods, they are the most commonly used alignment method in practice. Indeed, Harmony was proposed for alignment between scRNA-seq datasets, but this does not affect its use for alignment between scRNA-seq and scATAC-seq datasets. In addition, I listed these two methods mainly as examples. Even the methods suitable for alignment between scRNA-seq and scATAC-seq are still too numerous to mention. For example, scJoint (https://www.nature.com/articles/s41587-021-01161-6) can do this. \\n\\n2. Experimental Fairness. It does seem that the method proposed by the author has a performance advantage over the unpaired method. The authors believe that it is unfair to compare paired and unpaired methods. In terms of methodology, I agree with this view. But in practice, I completely disagree. Because if our goal is to align scRNA-seq and scATAC-seq to the same space for downstream bioinformatics analysis, we don\\u2019t care whether to use the unpaired method or not. We only care about the quality of the alignment results. In particular, if unpaired methods perform much worse than paired methods, then we will have less motivation to use these methods, even if they have technical innovations. What I mean is that I don\\u2019t understand how this technical innovation has any performance advantage over the existing paired methods. And the existing methods may be able to be extended to align three or four datasets at the same time, while this method is limited.\\n\\nOverall, this paper is technically novel and interesting. But I still wonder what advantages it has in practice. And the methods for aligning across sequencing data seem to be very \\\"crowded\\\", which requires the authors to emphasize the differences between their proposed method and other methods, and not just limited to technical differences.\\n\\nIn addition, I strongly agree with reviewer Txuv's opinion. While the paper appears to be technically very novel, it appears to lack breadth and depth. 
In terms of content, I think the author needs to continue to work hard to improve these shortcomings or answer my concerns. Therefore, I will maintain my original score for now. Looking forward to further response from the author.\"}", "{\"comment\": \"(continued)\\n\\n**The authors state they cannot conduct hyperparameter tuning due to the unpaired setting (with some related details in Appendix C), is this a standard practice to pretend we don't have ground truth for tuning in these domains? Since there are many things to tune, how practical is the approach compared to the baselines? If we would like to avoid falling back to using heuristics, then how common is prior scale knowledge in practical settings?**\\n\\nFor completely unpaired settings like single-cell dataset alignment, not having a validation set is indeed standard practice and reflects real-world scenarios where paired data is unavailable or impossible to obtain. This is not about \\\"pretending\\\" we do not have ground truth, but rather it reflects the genuine constraints of these biological applications.\\n\\nRegarding practicality, our method remains competitive with baselines even without extensive tuning. Most hyperparameters are fixed to default values (as detailed in Appendix C), and the few variable ones (like entropic $\\\\epsilon$ regularization) are selected through an unsupervised procedure that we have now made explicit in Algorithm 2. In fact, our method achieves better results compared to baselines that face similar hyperparameter complexity.\\n\\nWhile prior scale knowledge is not commonly available in practice for single-cell data, our **newly added empirical analysis in Appendix D.2.2** shows that smaller-scale wavelets typically better reveal samples that should be aligned, while larger scales tend to muddle the data points. This pattern was consistent across both scGEM and SNARE-seq datasets, suggesting that even without prior knowledge, there are patterns in how different scales contribute to alignment quality, which guides interpreting informative scales even without prior knowledge. \\n\\nLastly, we have improved the clarity of our unsupervised hyperparameter tuning by replacing the text description of our hyperparameter selection with Algorithm 2.\\n***\\n[1] Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 2011.\"}", "{\"comment\": \"(continued)\\n\\n**I think the experimental results are not presented in reader-friendly manner. The evaluation metrics keep changing (geodesic, label transfer accuracy), which the practitioner not familiar with them would have hard time understanding - What do each of these metrics really tell you?**\\n\\nWe agree and have **added Appendix E** which summarizes each metric. However, please note that different evaluation metrics are necessary since each experiment is different and necessitates a different evaluation for meaningful insights. \\n***\\n**Wouldn't it be also important to discuss what filters are used and learned for L- and E-WOT in these real-world examples?**\\n\\nSection 3.3 and 3.4 outline how WOT derives the filters in E-WOT and L-WOT. Additionally, we have **added a new section (Appendix D.2.2)** that visualizes and analyzes the filters from L-WOT and E-WOT to address this concern.\\n***\\n**What do each of baseline in real-world example do (SCOT, UNIONCOM, etc..) 
and how/why WOT methods are better as we see in the table?**\", \"each_baseline_method_is_a_dataset_alignment_method\": \"they find a mapping from samples in one dataset to samples in another dataset. As discussed in our method and experiment section, WOT performs better because we explicitly model the various scales of each dataset and filter out the noisy and non-isometry components. This robustness in the presence of noise and non-isometry (as demonstrated in our experiments) is likely why WOT performs better in various experiments.\\n***\\n**\\\"L-WOT performs much better than E-WOT and existing methods on the scGEM while the inverse is seen in SNARE-seq. A potential reason for this difference is that scGEM profiles cells in dedifferentiation, so the boundaries of cell types are not as clear as those of SNARE-seq ~\\\" => This feels like scratching the surface and not really getting at the reason why one might perform better than the other**\\n\\nWe conducted an **additional experiment** to visualize the filter weights and wavelet scales for the single-cell experiments in **Appendix D.2.2** which may provide some intuition on why we see this differing performance. However, please understand that a rigorous conclusion to answer why one heuristic performs better than another on a specific dataset is incredibly non-trivial in machine learning. A similar analogy is trying to understand why a learning rate of 1e-2 is better than 1e-3 for one experiment but worse for another experiment. \\n***\\n**Some illustrative (quantiative and qualitative) figure on the experiment results for 4.3 would be informative.**\\n\\nWe added a **new section (Appendix D.2.2)** which provides more illustrations for Experiment 4.3. Please also view the beginning of Appendix D.2 for visualizations of single-cell dataset alignments. If you have any specific suggestions, we would be happy to include them. \\n***\\n**The authors should focus much less on the mathematical underpinnings but focus more on biological aspect for this to be a valid contribution to the community**\\n\\nWe respectfully disagree. As a submission to ICLR, the paper needs to balance mathematical novelty with biological applications. The technical depth is not excessive - it is essential for demonstrating and explaining the method's contributions to the machine-learning community. Through experiments and direct motivation by problems in single-cell, we also balance this technical depth with substantive relevance and contribution to the field of single-cell. \\n***\\n**The simulation dataset seems to have dimension of from 1,000 to 2,000, but the real-world single-cell datasets remain at 10~30. Could you explain if this is common combination and how this might affect the algorithm?**\\n\\nReal single-cell datasets like scRNA-seq are commonly in the thousands to tens of thousands of dimensions. Datasets of 10-30 dimensions are only after dimensionality reduction. WOT operates on the spectral graph wavelets (SGWs) derived from the pairwise distance matrices of the datasets, rather than directly on the high-dimensional data points themselves. The construction of the pairwise distance matrices, which serve as the input to the SGW transform, is indeed affected by the dimensionality of the data. As the number of dimensions increases, the computation of pairwise distances becomes more challenging due to the curse of dimensionality. High-dimensional spaces tend to be sparse, and the notion of distance becomes less informative as the dimensionality grows. 
\\n\\nHowever, once the pairwise distance matrices are computed as a preprocessing step, the subsequent steps of constructing the SGWs and applying the WOT algorithm are not explicitly affected by the dimensionality of the original data points. The SGWs are derived from the eigendecomposition of the graph Laplacian, which is constructed purely based on the pairwise distance matrices.\\n***\\n[1] SCOT: single-cell multi-omics alignment with optimal transport. Journal of computational biology, 2022.\"}", "{\"comment\": \"(continued)\\n***\\n**Question 4**\", \"we_should_clarify_two_important_points_about_our_implementation\": \"1. The \\u03b5 updating scheme in our implementation uses a geometric progression (0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0) rather than the arithmetic progression described in the question. This significantly reduces the number of iterations needed to explore the meaningful range of \\u03b5 values.\\n2. The runtimes reported are indeed for a single hyperparameter combination, as our primary objective was to demonstrate the computational efficiency of the core WOT algorithm itself. We acknowledge that we should have been more explicit about this in our presentation.\\n\\nWhile hyperparameter selection is an important practical consideration, we focused our computational analysis on the WOT algorithm since it represents our main theoretical contribution. The current hyperparameter selection procedure is admittedly heuristic and was not optimized for computational efficiency, as our primary goal was to handle invalid transport plans. We recognize that for practical applications, more efficient hyperparameter selection strategies could be developed.\\n\\nWe have added a note in the paper to clarify these points and to acknowledge that the total computational cost including hyperparameter selection would be higher than the reported single-run times.\\n***\\n**Minor Comment on Question 5**\", \"we_want_to_clarify_our_experimental_design_rationale\": \"we specifically chose Euclidean distance as our baseline to isolate and demonstrate the noise-reduction capabilities of wavelets themselves. While we agree that alternatives like heat kernels or shortest path distances in nearest neighbor graphs often perform better in practice for single-cell applications, these methods already incorporate their own noise-reduction properties.\\n\\nOur bifurcation experiment was deliberately designed to show that wavelets provide meaningful improvements even when using simple Euclidean distances as the affinity matrix. This helps demonstrate the fundamental value of the wavelet approach independent of other noise-reduction techniques. Importantly, we view our wavelet-based method as complementary to, rather than competing with, distance metrics like shortest path distances. In fact, for all later experiments, we use the shortest paths to construct the affinity matrix for WOT (and for baselines). \\n***\\n[1] Scotv2: Single-cell multiomic alignment with disproportionate cell-type representation. Journal of Computational Biology, 2022.\"}", "{\"metareview\": \"The paper proposes Wavelet Optimal Transport (WOT) for aligning unpaired single-cell datasets by leveraging spectral graph wavelets to address challenges like noise, dropout, and non-isometry. While the method offers a novel approach with multi-resolution alignment, the reviewers highlighted multiple weaknesses. 
These include unclear presentation, limited comparisons to key baselines like Harmony and scDML, and insufficient experimental validation on diverse real-world datasets. Reviewers emphasized that, while the work is non-trivial and technically valid, it lacks broader validation and deeper theoretical insights, reducing its appeal to the broader community. Furthermore, concerns were raised regarding the fixed hyperparameter settings, computational costs, and the lack of evaluation for multiple dataset alignments. Despite the authors\\u2019 efforts in addressing some of these points in the rebuttal, significant gaps remain in experimental breadth and theoretical depth, weakening the overall contribution. Given these limitations, the paper does not yet meet the standards for ICLR.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the discussion focused on the lack of experimental diversity, hyperparameter sensitivity, and unclear practical applications. The authors responded with clarifications and incremental updates, such as analyzing filter scales and providing hyperparameter tuning details. However, these changes were not sufficient to address the broader concerns. The reviewers\\u2019 consensus weighed heavily on the need for broader applicability and deeper insights, ultimately leading to the decision to reject the paper.\"}", "{\"comment\": \"Thank you for your continued engagement in our work. We answer your questions below:\\n***\\n**Questions 1 & 2**\\n\\nHaving real-world separately sequenced datasets is indeed an important consideration. While time constraints of the rebuttal period prevent us from conducting the full suggested experiment, we instead performed a focused analysis to address the core concern about performance in non-1-to-1 mapping scenarios.\\n\\nSpecifically, we tested our method on simulated cell-type imbalance and missing cell types in the SNARE-seq and scGEM datasets. This simplified experiment allowed us to evaluate how our self-tuning procedure performs when the underlying correspondence is not strictly 1-to-1. We followed the experimental setup described in section 3.1.2 of Demetci et al. [1], where they simulated unbalanced single-cell datasets. We matched all hyperparameters (including $\\\\rho$) used in [1], except for the epsilon regularization term, which is determined by Algorithm 2.\\n\\nThe results of this new experiment are presented in the table below (note that all baseline results are taken directly from Table 1 in [1]):\\n\\n| Label Transfer Accuracy | SNARE (missing cell-type) | SNARE (subsam. cell-type) | scGEM (missing cell-type) | scGEM (subsam. 
cell-type) |\\n|-------------------------|----------------------------|----------------------------|----------------------------|----------------------------|\\n| E-WOT (heat kernel) | 0.722 | 0.806 | **0.670** | 0.563 |\\n| E-WOT (simple tight) | 0.684 | 0.803 | 0.661 | 0.569 |\\n| L-WOT (heat kernel) | **0.924** | 0.775 | 0.596 | 0.503 |\\n| L-WOT (simple tight) | 0.691 | **0.842** | 0.624 | **0.642** |\\n| SCOTy2 | 0.653 | 0.751 | 0.521 | 0.415 |\\n| SCOT | 0.572 | 0.588 | 0.323 | 0.314 |\\n| Pamona | 0.423 | 0.419 | 0.414 | 0.308 |\\n| MMD-MA | 0.407 | 0.431 | 0.296 | 0.287 |\\n| UnionCom | 0.406 | 0.422 | 0.315 | 0.276 |\\n| bindSC | 0.584 | 0.475 | 0.254 | 0.262 |\\n| Seurat | 0.477 | 0.428 | 0.377 | 0.329 |\\n\\nThe results demonstrate that WOT is not only capable of handling unbalanced datasets but also surpasses the performance of current state-of-the-art methods for aligning unbalanced single-cell datasets.\\n\\nHowever, we want to clarify that the primary focus and novelty of this paper is in the underlying framework of WOT where the balanced formulation is most explored. While we have shown the capability of WOT to handle unbalanced datasets, we do not claim to have thoroughly explored all the edge cases of the unbalanced formulation in this work. We believe that a comprehensive analysis and evaluation of the unbalanced version of WOT on unbalanced datasets is beyond the technical scope of this paper. We acknowledge that further research is needed to fully investigate the performance and behavior of the unbalanced formulation.\\n\\n***\\n**Question 3**\\n\\nYou are correct that the threshold \\u03b7 should ideally scale with the size of the T matrix to account for varying sample sizes. In our current experiments, we used a fixed threshold independent of matrix size, which is a limitation of our current implementation.\\n***\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces the Wavelet Optimal Transport (WOT) method, a framework for aligning unpaired single-cell datasets from different modalities. WOT leverages spectral graph wavelet coefficients to capture multi-scale relationships within data and improve alignment robustness against noise, dropout, and non-isometric between data spaces. The authors present two implementations: E-WOT, using entropy heuristics to filter out irrelevant scales, and L-WOT, which dynamically learns the filters to improve alignment adaptivity. Experiments further demonstrate the WOT's superiority in both simulated and real scenarios.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Empirical Effectiveness. WOT consistently outperforms SOTA methods in high-noise scenarios, non-isometric conditions, and dropout cases across synthetic and real single-cell multi-omic datasets.\\n2. Theoretical Novelty. Using spectral graph wavelets for multi-resolution dataset alignment offers a novel view, and further theoretical rigor by generalizing GW-OT and ensuring robustness.\\n3. Flexibility. WOT allows different filters and optimization strategies to enhance performance based on dataset-specific characteristics.\", \"weaknesses\": \"1. Few misspellings in the main text (e.g., in line 012, 'throughput' should be 'throughout').\\n2. Figure 1 lacks clarity in illustrating both the task and framework.\\n3. Lack of comparisons with more recent single-cell paired alignment methods such as Harmony and scDML. 
They are the most common methods for single-cell alignment and deserve to be discussed. Additionally, the description of the weakness of related works in Section 2.1 remains for more detailed analysis about these methods.\\n4. The study lacks a broader range of experimental scenarios involving real single-cell multi-omic datasets. Results would be more convincing with diverse single-cell multi-omic datasets from different tissues and sequencing technologies, to demonstrate the method's effectiveness across various empirical conditions.\\n5. The proposed method is designed for two datasets' alignment, which restricts its applicability when aligning multiple datasets simultaneously. In fact, alignment of multiple samples may be a more common scenario.\\n6. The fixed hyperparameter setting is limited in applicability across real scenarios. Although the authors provide some advice in hyperparameter selection, utilizing an adaptive strategy is encouraged. Furthermore, even if it is difficult to give an adaptive strategy, sensitivity analysis of hyperparameters should be provided to show that this method can achieve good performance in most cases.\", \"questions\": \"1. In Sections 4.1 \\\\& 4.2, task-specific baseline methods should be included.\\n2. How does your framework handle high-dimensional single-cell datasets? Common single-cell dataset dimensions are above 20K. Even if feature selection is performed, the most common input is 1-3K genes.\\n3. In Figure. 8, I can see that this method can keep the cell types separate. But I am more confused whether the SNARE-seq and ATAC-seq data have been mixed together successfully (need to color based on the dataset).\\n4. It seems to me that this method is generally applicable, and I wonder why it should be limited to the alignment of single cells? In fact, I think that if this method cannot effectively solve the problems unique to single-cell alignment, such as alignment of high dimensions and multiple datasets, it would be better to use public datasets for comparison and emphasize the general applicability of this method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Few questions on the self-tuning procedure\", \"comment\": \"**Edit:** I previously posted this under the wrong comment, deleted it and moved it here.\\n\\nDear authors,\\n\\nThank you very much for responding to my questions and updating the document. I do believe single-cell multi-omic integration in a fully unpaired setting, as described in the paper, is an important task for scientific applications, so I think the details of self-tuning procedure are important. My initial worry was that the hyperparameters may have been chosen with some validation data (since the descriptions on this were missing), leading to unfair benchmarking, but it appears this is not the case, so I will incline towards raising my score. However, I have some additional questions and concerns regarding the self-tuning procedure in Algorithm 2 in Supplementary Section C. My understanding is that you are doing the following:\\n\\n1. Start out by a very small $\\\\epsilon$ value, $10^{-4}$, and use sum for aggregation function and RBF for norm. If this already gives a valid coupling, just use these hyperparameters.\\n2. But small $\\\\epsilon$ values could give degenerate couplings, e.g. due to NaNs. 
In that case you gradually increase $\\\\epsilon$ until you get a valid coupling (no NaNs) thats sufficiently different than a uniform coupling, \\\"sufficiently different\\\" described by $\\\\eta$.\\n3. If you've raised $\\\\epsilon$ up to $\\\\epsilon$ $\\\\geq 1$ and still don't have a valid coupling, you cycle through this procedure trying out different aggregation functions first, and then different norm values.\\n\\n**Based on this here are my questions:** \\n**1.** Tending to small $\\\\epsilon$s (i.e. stopping hyperparameter tuning at the first point of sufficiently high enough $\\\\epsilon$ to get a valid coupling) makes sense for aligning datasets that are already paired (e.g. SNARE-seq, scGEM experiments from the paper). These datasets have underlying 1-to-1 mapping of cells since they were jointly measured so the ground-truth coupling is super sparse (I am aware you aren't using this info when solving the coupling, just using it for benchmarking). However, this likely won't be the case for real-world separately sequenced datasets since the underlying biological manifold won't have 1-to-1 matches between cells. In 1-to-many settings, higher $\\\\epsilon$s would likely be more favorable, so this self-tuning procedure may not be as effective. I don't know if this is practically possible in the next 5 days of the rebuttal phase but: Could you test the method out on a couple of real-world datasets, where we may have some confidence in cell-type annotations so they could be used for benchmarking (e.g. ideally through FACS sorting but that could be hard to find, expert annotated data from a bio publication could be fine too) ? If possible, It would be additionally nice to see how large the discrepancy is between the quality of alignment with the self-tuned procedure vs a small grid of varying hyperparameters to see where in the range the self-tuned hyperparameters are performance-wise.\\n\\n**2. More of a concern than a question:** It appears that you don't consider $\\\\rho_1$ and $\\\\rho_2$ in the self-tuning procedure and fix them to 1.0. This works well in your practice because all the dataset you showcase your algorithm on are essentially balanced dataset; they have the same number of cells and same proportion of cell types because they came from experiments where measurements were jointly taken. This will not be the case for real-world applications and when datasets are unbalanced, $\\\\rho$ values really matter based on my practical experience with unbalanced OT. I think testing your algorithm on some real-world separately sequenced datasets will likely show this.\\n\\n**3.** Shouldn't the threshold value eta be dependent on size of T matrix (number of samples)? \\n\\n**4.** Are the runtimes reported in the table on page 22 for a single combination of hyperparameter? I ask because $\\\\epsilon$ updates in Algorithm 2 are going to be super small initially. For a starting value of $\\\\epsilon=10^{-4}$, the next 10 $\\\\epsilon$ values would be: 0.00010000005, 0.0001000001, 0.00010000015, 0.0001000002, 0.00010000025, 0.0001000003, 0.00010000035, 0.0001000004, 0.00010000045, 0.0001000005, 0.00010000055. These are tiny increments and for each update, the whole spectral wavelet GWOT is run again. This seems like quite a computationally heavy procedure and could prohibit the adoption of the method in real-world applications. 
If you run your algorithm on a couple of real-world datasets, could you also report how long the self-tuning procedure takes for these?\"}", "{\"summary\": \"The authors propose a new method, \\\"Spectral Graph Wavelet Optimal Transport\\\" (WOT), to align unpaired single-cell multi-omic datasets. To do so, they perform Gromov-Wasserstein OT alignment between two domains (-omic measurement types) but unlike existing methods, intra-domain distances are computed based on spectral graph wavelets in order to capture multi-scale structural information on the graphs constructed on each modality. They include scale-varying filters in their framework in order to filter out uninformative signals (or noise). There are two proposed strategies for choosing filters: (1) heuristically choosing them based on the entropy of the wavelets in each scale, as estimated based on kernel density estimation and (2) learned via an alternating optimization procedure, where the transport plan and the filters are alternatingly optimized via Sinkhorn iterations and stochastic gradient ascent, respectively.\\n\\nOverall, the approach is well-motivated and the proposed method is novel. While computational complexity is a challenge, experiments demonstrate the benefits in non-isometric and noisy settings. However, I have several remaining questions after reading the paper, especially around how these experiments and baselines are set up. Most importantly, the lack of sufficient information around the hyperparameter selection worries me about the fairness and quality of benchmarking experiments. My current score of 3 is mostly based on this. I am happy to update my score and recommend acceptance if the authors satisfactorily address this concern in the discussion period.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written (other than missing details) and easy to follow and the problem is well-motivated. Experimental section includes a number of simulated and real-world datasets that cover a mix of scenarios and the results are compared to relevant baselines.\", \"weaknesses\": \"**Updated Review** After the rebuttal process, seeing the description of the unsupervised hyperparameter tuning procedure, I am updating my score from 3 to 5, as it partially addresses my concern from #1 below. Having said so, it appears that the hyperparameter tuning procedure does not account of hyperparameters like \\\\rho, which will matter in real-world unpaired dataset integration, as these datasets tend to contain disproportionate cell type representation. Overall, while I think the authors make a contribution towards improving OT-based single-cell multi-modal data alignment (a challenge with scientific impact), I believe additional work needs to be done on **real-world unpaired datasets** before wrapping up this work.\\nI keep my original review below for reference.\\n\\n**1.** The largest concern I have with this submission is the lack of information around hyperparameter selection in benchmarking experiments. 
The proposed method has a high number of hyperparameters that require tuning: \\n- (1) choice of k for the initial kNN graph, \\n- (2) the bandwith \\\\sigma for the RBF kernel that forms the weighted adjacency matrices, \\n- (3) scale parameters [0...S], \\n- (4) \\\\epsilon for the entropic regularization of OT, \\n- (5) choice of aggregation function {min, max, sum}, \\n- (6) choice of wavelet generating function g {low-pass heat kernel, tight-frame Meyer kernel, ...}, \\n- (7) the hyperparameters associated with the chosen g function,\\n- (8) threshold variable \\\\delta for L-WOT, \\n- (9) hyperparameters associated with KDE for E-WOT (bandwidth). \\n\\nThe authors emphasize that the experiments are conducted in \\\"fully unpaired scenario, where **we don't have validation data to conduct hyperparameter tuning**\\\". Then, in Appendix C, they report different sets of hyperparameters for each dataset. If the experiments weren't conducted with default parameters (i.e. default changes based on dataset) but neither was any validation data was used, then how were different sets of hyperparameters chosen for each dataset? Is there a heuristic / self-tuning procedure to adopt hyperparameters to each dataset without tuning with validation data? This information is lacking. I looked at Appendix C1 on \\\"Guidance on Choosing Hyperparameters\\\" but it does not explain the experiments: it describes keeping aggregation function fixed as summation while reported hyperparameters use a range of aggregation function and secondly, the only heuristic described for self-tuning is from Demetci (2022b), which would only account for k and \\\\epsilon, leaving many other hyperparameter to tune. If there was a self-tuning procedure used, this would be crucial to describe in the paper. The results they compare against uses default parameters for UnionCom, Pamona, MMD-MA, Seurat, bindSC and a self-tuning heuristic for SCOT, SCOTv2 and cross-modal AE. If hyperparameters of E-WOT and L-WOT were, in fact, tuned using data from the experiments, then the correct numbers to compare against would be Fig 2 results from Demetci (2022a):\\n\\n| |SCOTv2 | SCOT | UnionCom | Pamona | MMD-MA | Cross AE | BindSC | Seurat | \\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \\n**SNARE-seq** | 0.927 | 0.982 | 0.423 | 0.686 | 0.942 | 0.689 | 0.734 | 0.684 | \\n**scGEM** | 0.643 | 0.576 | 0.582 | 0.651 | 0.588 | 0.523 | 0.449 | 0.423 | \\n\\n**2.** (Minor point) I think the discussion of previous methods not explicitly reducing noise is not entirely accurate: \\\"[previous methods] do not explicitly reduce dataset-intrinsic noise or signal\\\". E.g. UnionCom, Pamona, SCOT, SCOTv2 all use kNN graphs (using only connectivity or Euclidean distance between PCs) on dimensionality-reduced data, which would reduce noise, albeit in a much more naive way than the proposed method here. Cross-modal autoencoders align datasets in a smaller latent space learned by autoencoders, which would be expected to have lower noise level, as well. \\n\\nI have other questions and comments below, but the major questions I have are in point #1 above.\", \"questions\": \"**1.** How were the hyperparameters of SCOT, SCOTv2, UnionCom and Pamona chosen for the SHREC20 experiments? What were these hyperparameters?\\n**2.** What does \\\"Multiple\\\" mean for \\\"Wavelet\\\" parameter in Appendix C? How do you choose the set of scales?\\n**3.** When computing the kNN graph, what representation of data is used for single-cell datasets? 
Is it the raw gene expression data, normalized data, or some dimensionality-reduced version of the data? If it's dimensionality reduced, is it based on PCA, LDA, tSNE etc? Do you consider Euclidean distances or correlation or a different \\n**4.** What discrepancy measure do you use for L in Equation 4?\\n**5.** For the GW baseline used in bifurcation matching experiments, how are the intra-domain distances computed (i.e. what is the choice of d_A and d_B used from Eq 1 for this experiment)? Why are single-cell alignment baselines excluded from this experiment?\\n**6.** Have you attempted to compare methods in settings with validation data? Is the advantage from having a better self-tuning procedure or do you also outperform existing methods when validation data is available?\\n\\nThese are less major questions than the ones in weaknesses section but answers to 1-5 would be good to include in the paper for completeness and replicability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Common Questions & Concerns\", \"comment\": \"We thank the reviewers for their time, effort, and constructive feedback. We address some common questions and concerns brought up by multiple reviewers:\\n***\\n**Analyzing Filters and Wavelets Scales of E-WOT and L-WOT (Reviewer oyEN, Txuv)**\\n\\nWe have **added a new section (Appendix D.2.2)** that analyzes the wavelet scales of the single-cell datasets and filters of E-WOT and L-WOT. This section provides further insight into which wavelet scales are informative and why filters based on L-WOT or E-WOT may perform better in different single-cell datasets. \\n***\\n**Hyperparameter Selection and Many Tunable Hyperparameters (Reviewer 1Hb6, Txuv)**\\n\\nWhile we acknowledge that the range of flexibility with our framework can be initially daunting, we want to make clear that most hyperparameters can be fixed to default values as specified in Table 3 of Appendix C. There are only three hyperparameters that are variable between each experiment: (1) the $\\\\epsilon$ regularization, which is a hyperparameter common to any entropic OT method, (2) the aggregation operation, and (3) the weight normalization of the graph edges.\\n\\nWe **added Algorithm 2 to Appendix C**, which summarizes the unsupervised hyperparameter procedure originally described at the beginning of Appendix C in our initial submission. \\n\\nIt is worth noting that the challenge of understanding hyperparameter interactions is inherent to most modern machine learning methods. For instance, neural networks require tuning of learning rates, batch sizes, learning rate schedulers, weight initialization schemes, layer widths, and gradient clipping, among others. Similarly, different optimal transport variants (entropic, unbalanced, etc.) each introduce their own sets of hyperparameters. While we aim to provide clear guidance where possible, WOT's parameter space is comparable to many standard machine learning approaches.\\n***\\n**General Applicability of WOT (Reviewer tWjN, Txuv)**\\n\\nIndeed, WOT can be applied to general unpaired dataset alignment problems, which we view as a strength rather than a limitation. 
Many successful methods in machine learning, such as Gromov-Wasserstein optimal transport, were initially designed for specific applications (e.g., object matching) but found wide applicability across various domains.\\n\\nWhile WOT is a general solution, it was specifically inspired by and tailored to address challenges in single-cell data analysis:\\n\\n* High dimensionality: Single-cell data often has thousands of features, requiring methods that can efficiently handle high-dimensional spaces. WOT's approach is well-suited for this task, unlike many shape-matching methods that can only operate on 3 dimensions [2, 3].\\n* Non-isometry: Unlike many natural language processing or computer vision [4, 5] tasks that often assume isometry, single-cell data from different modalities (e.g., RNA-seq vs. ATAC-seq) can have fundamentally different structures. WOT's multi-scale approach elucidates the similarities while filtering the dissimilarities between single-cell modalities.\\n* Noise and dropout: Single-cell data is notoriously noisy and sparse due to technical limitations in data acquisition. WOT's filtering mechanisms (both entropy-based and learned) are designed to mitigate these issues.\\n\\nWhile these characteristics are not unique to single-cell data, their combination and severity in this domain motivated our approach. Our extensive experiments on single-cell datasets (SNARE-seq and scGEM) demonstrate WOT's effectiveness in this specific biological context.\\n\\nThat said, we agree that WOT's applicability extends beyond single-cell biology. This broader applicability is, in fact, a significant strength of our method.\\n***\\n[1] SCOT: single-cell multi-omics alignment with optimal transport. Journal of computational biology, 2022.\\n\\n[2] NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go. CVPR, 2021.\\n\\n[3] NCP: Neural correspondence prior for effective unsupervised shape matching. NeurIPS, 2022.\\n\\n[4] Gromov Wasserstein Alignment of Word Embedding Spaces. EMNLP, 2018.\\n\\n[5] Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing. NAACL, 2019.\"}", "{\"comment\": \"We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer\\u2019s concerns and questions below:\\n***\\n**Weakness 1**\\n\\nWhile we acknowledge that the range of flexibility with our framework can be initially daunting, we want to make clear that most hyperparameters can be fixed to default values as specified in Table 3 of Appendix C. There are only three hyperparameters that are variable between each experiment: (1) the entropic $\\\\epsilon$ regularization, which is a hyperparameter common to any entropic OT method, (2) the aggregation operation, and (3) the weight normalization of the graph edges.\\n\\nWe **added Algorithm 2 to Appendix C**, which summarizes the unsupervised hyperparameter procedure originally described at the beginning of Appendix C in our initial submission. We do not use result metrics from experiments to guide hyperparameter selection. \\n\\nThis unsupervised procedure explains the different parameter values observed across datasets. Importantly, we default to sum aggregation and RBF normalization in most cases, only deviating when necessary according to Algorithm 2. 
The $\\\\epsilon$ parameter shows the most variation, which aligns with previous work showing that fixed defaults often yield uninformative transport plans [1]\\n\\nFurther, to address your concerns about how we initially selected the defaults for the hyperparameters, we **added Table 4**, describing where each value came from. \\n***\\n**Question 1**\\n\\nWe have written a **new section (Appendix C.2)** with details on hyperparameter selection for baselines.\\n***\\n\\u200b\\u200b**Question 2**\\n\\nMultiple refers to evaluating multiple kernels in the experiment. For instance, in Experiment 3, we evaluated both a simple tight kernel and the heat kernel. We have added this clarification in Table 5. The specific scales are determined by the wavelet kernel, which we default to the values in [3]\\n***\\n**Question 3**\\n\\nWe directly use the reduced-dimension single-cell datasets provided by [1] and [5]. For completeness, we summarize their procedure:\\n\\n- scGEM: both datasets are reduced to their corresponding dimensions through normalized PCA\\n- SNARE-seq: RNA-seq was reduced through PCA and ATAC-seq was reduced through the topic modeling framework cisTopic [6] \\n***\\n**\\u200b\\u200bQuestion 4**\\n\\nThank you for pointing this out. We use the quadratic loss $L(a,b) := \\\\frac{1}{2}(a - b)^2$. We have revised the beginning of the experiment section to include this important detail. \\n***\\n**\\u200b\\u200bQuestion 5**\\n\\nFor the bifurcation experiment, we use the Euclidean distance as the intra-domain distances for both GW and our method. We focused on comparing WOT with only GW-OT in the bifurcation experiment since GW-OT forms the theoretical foundation of WOT (as shown in Remark 1). This controlled comparison allowed us to specifically demonstrate the benefits of incorporating wavelets into the optimal transport framework, particularly in high-noise scenarios.\\n***\\n**Question 6**\\n\\nOur work specifically focuses on the unpaired setting since it reflects the reality of single-cell data collection - paired samples are often impossible or prohibitively expensive to obtain. While we haven't evaluated performance with validation data, this was a deliberate choice aligned with our goal of developing methods that work effectively \\\"out of the box\\\" in real-world scenarios where paired samples are unavailable. Evaluating performance with validation data would be an interesting direction for future work, but it would not address the core unpaired challenge. \\n***\\n\\n[1] SCOT: single-cell multi-omics alignment with optimal transport. Journal of computational biology, 2022.\\n\\n[2] Large sample analysis of the median heuristic. 2017.\\n\\n[3] PyGSP: Graph Signal Processing in Python. 2014.\\n\\n[4] Unsupervised topological alignment for single-cell multi-omics integration. Bioinformatics, 2020.\\n\\n[5] MATCHER: manifold alignment reveals correspondence between single cell transcriptome and epigenome dynamics. Genome Biology, 2017.\\n\\n[6] cisTopic: cis-regulatory topic modeling on single-cell ATAC-seq data. Nature Methods, 2019.\"}", "{\"comment\": \"(Continued)\\n\\n**The proposed method is designed for two datasets' alignment, which restricts its applicability when aligning multiple datasets simultaneously. In fact, alignment of multiple samples may be a more common scenario.**\\n\\nWe acknowledge this limitation, but it's important to note that this is not unique to WOT - it is an inherent challenge for all optimal transport-based alignment methods. 
The pairwise nature of optimal transport formulations makes multi-sample alignment fundamentally challenging.\\n\\nWhile extending WOT to handle multiple datasets simultaneously would be valuable future work, our current focus was on improving the quality of pairwise alignment, particularly in handling the noise and non-isometry challenges specific to single-cell data. \\n***\\n**The fixed hyperparameter setting is limited in applicability across real scenarios. Although the authors provide some advice in hyperparameter selection, utilizing an adaptive strategy is encouraged. Furthermore, even if it is difficult to give an adaptive strategy, sensitivity analysis of hyperparameters should be provided to show that this method can achieve good performance in most cases.**\\n\\nPlease view our response to this question in the \\u201cResponse to Common Questions & Concerns\\u201d comment.\\n***\\n**In Sections 4.1 & 4.2, task-specific baseline methods should be included.**\\n\\nSection 4.1 is a toy dataset specifically designed to isolate and evaluate noise handling capabilities. There are no task-specific baselines because this is a controlled experiment to demonstrate that WOT can maintain accurate alignment even in high noise regimes.\\n\\nRegarding Section 4.2 (shape correspondence), yes, there have been shape-matching specific methods that benchmark on the SHREC20 dataset. However, these methods can only operate on 3D shapes and cannot be scaled to higher dimension datasets like single-cell datasets - directly comparing WOT with these method would not be a fair and meaningful baseline. As such, we have chosen only to compare with other methods (OT-based like GW and non-OT based like UnionCom) that can scale to higher dimensions.\\n***\\n**How does your framework handle high-dimensional single-cell datasets? Common single-cell dataset dimensions are above 20K. Even if feature selection is performed, the most common input is 1-3K genes.**\\n\\nWOT operates on the spectral graph wavelets (SGWs) derived from the pairwise distance matrices of the datasets, rather than directly on the high-dimensional data points themselves (Section 3.1). Hence, the dimensionality is primarily related to the computation of pairwise distances rather than the SGWs or the WOT algorithm itself. \\n\\nHowever, to directly address your concern, pairwise distance subroutines have been highly optimized in popular libraries like SciPy and PyTorch and have not been an issue in our experiments (in Section 4.1, each sample has 2000 dimensions.)\\n***\\n**In Figure. 8, I can see that this method can keep the cell types separate. But I am more confused whether the SNARE-seq and ATAC-seq data have been mixed together successfully (need to color based on the dataset).**\\n\\nWe **updated Figure 8 with a new plot** displaying samples colored by their respective dataset rather than cell type. \\n***\\n**It seems to me that this method is generally applicable, and I wonder why it should be limited to the alignment of single cells? In fact, I think that if this method cannot effectively solve the problems unique to single-cell alignment, such as alignment of high dimensions and multiple datasets, it would be better to use public datasets for comparison and emphasize the general applicability of this method.**\\n\\nPlease view our response to this question in the \\u201cResponse to Common Questions & Concerns\\u201d comment.\\n***\\n[1] Fast, sensitive and accurate integration of single-cell data with Harmony. 
Nature methods, 2019.\\n\\n[2] Integration of spatial and single-cell data across modalities with weakly linked features. Nature Biotechnology, 2023.\\n\\n[3] GLUER: integrative analysis of single-cell omics and imaging data by deep neural network. BioRxiv, 2021.\\n\\n[4] Single-cell multi-omic integration compares and contrasts features of brain cell identity. Cell, 2019.\\n\\n[5] Deep generative modeling for single-cell transcriptomics. Nature methods, 2018.\"}" ] }
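To make the self-tuning loop debated in the exchange above easier to follow, here is a minimal illustrative sketch. It only mirrors the reviewer's three-step reading of Algorithm 2 and the authors' stated geometric epsilon schedule; the `run_wot` callable, the `eta` threshold, and the candidate aggregation/normalization lists are hypothetical placeholders assumed for illustration, not the authors' actual API.

```python
import numpy as np

# Geometric epsilon schedule described by the authors (0.0001, 0.0005, ..., 1.0).
EPS_SCHEDULE = [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1, 1.0]

def is_valid_coupling(T, eta=1e-3):
    """Accept a coupling only if it has no NaNs and is sufficiently different
    from the uniform (independent) coupling, as gauged by the threshold eta."""
    if T is None or np.isnan(T).any():
        return False
    uniform = np.full(T.shape, 1.0 / T.size)
    return np.abs(T - uniform).sum() > eta

def self_tune(run_wot,
              aggregations=("sum", "min", "max"),      # assumed candidate set
              norms=("rbf", "unweighted")):            # assumed candidate set
    """Start from the defaults (sum aggregation, RBF norm); raise epsilon
    geometrically, and only then cycle through other aggregation functions
    and, last, other edge normalizations, as in the reviewer's reading."""
    for norm in norms:
        for agg in aggregations:
            for eps in EPS_SCHEDULE:
                T = run_wot(eps=eps, aggregation=agg, norm=norm)
                if is_valid_coupling(T):
                    return T, {"eps": eps, "aggregation": agg, "norm": norm}
    raise RuntimeError("No hyperparameter combination produced a valid coupling.")
```

Under this reading, the procedure stops at the smallest epsilon that yields a usable coupling, which is exactly the behavior the reviewer worries may favor near 1-to-1 correspondences.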
BXMoS69LLR
Blind Baselines Beat Membership Inference Attacks for Foundation Models
[ "Debeshee Das", "Jie Zhang", "Florian Tramèr" ]
Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model. For foundation models trained on unknown Web data, MI attacks are often used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning. Unfortunately, we find that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions. For 9 published MI evaluation datasets, we show that blind attacks---that distinguish the member and non-member distributions without looking at any trained model---outperform state-of-the-art MI attacks. Existing evaluations thus tell us nothing about membership leakage of a foundation model's training data.
[ "machine learning privacy", "evaluation", "membership inference attacks", "machine learning security", "foundation models" ]
Reject
https://openreview.net/pdf?id=BXMoS69LLR
https://openreview.net/forum?id=BXMoS69LLR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yKGonWi327", "xoyoF5ohnM", "uU5EOMMBiz", "siWIBIKPeO", "nLuP1xEfox", "iV48Ov62SM", "fOLEbbtB6h", "dj8Ydt6XcV", "cJDdMRsjfA", "YjQ577F9UZ", "T78Qbo5yo3", "NQQi64XmZs", "MrUxodSW7I", "KzuRn0zWN0", "ItSD6mOpE1", "GprER4mJp9", "BFFdUJqYGt", "AMXnDUhDTp", "AMF0hUQG43", "1BuoLLCFjt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment" ], "note_created": [ 1731997369034, 1731951358670, 1731951808020, 1730405291848, 1731960240270, 1731990364550, 1730116328870, 1731958115146, 1732475298601, 1731952649435, 1731952412389, 1730663695763, 1731951461244, 1737523801916, 1731963346773, 1732344178762, 1732326314820, 1729978014833, 1734903872784, 1732331128875 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_GGin" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_f6sv" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_zRbN" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_zRbN" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_f6sv" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_o2wH" ], [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_f6sv" ], [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6919/Authors" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_GGin" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_zRbN" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_o2wH" ], [ "ICLR.cc/2025/Conference/Submission6919/Area_Chair_ER7h" ], [ "ICLR.cc/2025/Conference/Submission6919/Reviewer_f6sv" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"We're glad we could address your main comments.\", \"regarding_your_remaining_questions\": \"> However, for vision foundation models, the distribution of training and testing data typically does not exhibit such a significant gap. Therefore, would blind attack not be generalizable to other domains and has limited extension ability?\", \"we_are_confident_that_similar_issues_apply_to_vision_foundation_models_for_the_following_reasons\": \"1. Training sets for modern vision foundation models consist of images *and* text (e.g., LAION, CC, etc). These datasets don't have an official train-test split, and so MI attacks on these models still need to create a set of non-members after the fact. This is exactly what happens in the LAION-MI and Multi-Webdata case-studies in our paper.\\n\\n2. We focused on text-only blind attacks in our paper mainly for simplicity (i.e., we can use simple models like bag of words). But we see no reason why we shouldn't be able to train image models to distinguish between members and non-members for these datasets.\\n\\n> There lacks an exploration of how foundation models handle data from before and after different dates, even if it were just some analytical experiments\\n\\nIt is not clear to us what the point of such an experiment would be. 
Even if *current* MI attacks use (or don't use) the same features as our blind attacks, this says nothing about future attacks. Our worry is precisely that future attacks would be evaluated on the same biased datasets, and that we then have no idea if the attacks actually work or not.\"}", "{\"title\": \"Response [1 / 2]\", \"comment\": \"Thank you for your detailed feedback and for taking the time to review our work. We understand that the significance of our findings may not have been fully apparent, and we appreciate the opportunity to clarify the key contributions of our paper.\\n\\n---\\n\\n>The authors' framing of certain works as \\\"concurrent\\\" appears to minimize substantial overlaps, particularly with [1] and [2] which preceded the ICLR deadline by 4 and 8 months respectively.\\n\\nIt was not our intent to minimize overlaps with these works. Our work was originally released on arxiv two weeks after [1], but this is of course a while ago now.\\n\\nRegarding overlap, we acknowledge that past works have pointed out issues of distribution shifts for MI evaluations. However, these works have all focused on one specific type of shift present in one or two datasets (temporal wiki and arxiv). \\nWe go further and present a systematic analysis of biased members and non-members across 9 published MI evaluation datasets of three different types. We show with experiments that the shift is so severe, that any MI evaluation scores on these datasets cannot be trusted. We also show, more alarmingly, that this issue pervades other kinds of datasets - specifically our other two types of dataset constructions (biases in data replication, distinguishable tails) that are not constructed based on a temporal split.\\n\\nWe will clarify our contributions and relationship to prior work.\\n\\n>The conclusion (L413) takes a problematic stance by seemingly absolving model trainers of accountability. \\n\\nWe do not intend to suggest that model trainers should be absolved of any kind of accountability. It is not clear to us where our paper suggests this. Our point is mainly that performing MI evaluations as they are done today provides no signal.\\nWhat we meant to say in L413 is that if researchers want to show they have a strong MI attack, they cannot use most foundation models to demonstrate this (unless we find a better way to construct member and non-member datasets).\\n\\n>The claim on L464-465 about \\\"no knowledge of the model\\\" requires clarification. \\n\\nA MIA necessarily requires the attacker or auditor to be able to interact with the target model. In contrast, our blind attack directly distinguishes members and non-members based on intrinsic differences in features.\\n\\n>L101: \\\"Many of these\\\" - Please be exact. Consider adding a table that talks about MI attack attempts on foundation models, the benchmarks they use (propose or new), and whether they suffer from the train-test split issues that the authors mention here.\\n\\nPlease refer to Section 4 and Table 2, which provide a detailed report on the most effective MI attacks on foundation models. All the datasets listed in the table encounter issues related to train-test splits, as evidenced by the high scores achieved by the blind attack.\\n\\n>L154: The assumption about future dates is oversimplified and overlooks legitimate cases in fiction, climate research, and policy documents\\n\\nYes, that\\u2019s why we say it is a heuristic. Our goal here is to create a blind MI attack with high TPR at low FPR. 
This heuristic is enough to do this (although it does have some false-positives as the reviewer suggests). It is not clear to us why this is an issue?\\n\\n>Some of the \\\"blind\\\" baselines rely on detecting data references like 2024 etc. As an adversary cognizant of the training cutoff (or even an auditor), a simple solution to fix this shift would be arbitrarily replacing all such data references for non-members\\n\\nThe point of our blind attacks is not to obtain the highest possible performance on membership inference in a robust manner. The point of our blind attacks is to highlight that evaluation datasets that are popularly being used are not suitable for evaluating membership inference attacks.\\nThese edits would indeed make the specific blind baseline we consider weaker, but it would not fix all distribution shifts and so a different blind baseline would likely still work.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your detailed feedback and for taking the time to review our work. We understand that the significance of our findings may not have been fully apparent, and we appreciate the opportunity to clarify the key contributions of our paper.\\n\\n--- \\n\\n>The authors claim that current state-of-the-art MIAs fail to extract meaningful membership information, relying only on biased dataset evaluation results. However, this assertion may be overstated\\n\\nThis is not the claim we make.\\nWe claim that the performance of state-of-the-art MIAs cannot be distinguished (or is even outperformed) by naive baselines without access to the target model.\\nOur baselines indeed use information that an MI attack might have a hard time extracting. But the point here is that an attack could, in principle, use such features which give no information about the actual ability to extract membership signal.\\n\\n>As a path forward, the paper advocates for future MIA evaluations using PILE, DataComp, or DataComp-LM. However, it is unclear whether these datasets also suffer from distribution shift issues.\\n\\nSince the train and test sets of these datasets were selected IID, we are guaranteed that no distribution shift exists.\\nFor completeness, we tested our attack strategies on the train/test split of the PILE, and got an advantage over random guessing that was not statistically significant.\\n\\n\\n> To ensure robustness of the evaluations, it would be beneficial to repeat the experiments with different random dataset splits, recording the mean and variance of attack success rates. \\n\\nThe results reported are averaged over repeated experiments (10-fold cross validation as mentioned on line 197) with random splits - in the cases where the blind attacks need a \\u201ctraining\\u201d phase - such as the bag of words attack and the greedy rare word selection attack. \\n\\nDate detection based blind attack does not require any \\u201ctraining data\\u201d so the entire dataset is used as the test data. Since there is no randomness in this method: neither in the train-test split nor in the actual attack method, this experiment is not repeated and cross-validated. Any repetition will give the same score.\\n\\n>The novelty and technical contributions of this paper appear incremental. Distribution shift issues in evaluation datasets have been previously discussed by Duan et al. and Maini et al.\\n\\nDuan et al. and Maini et al. 
focus on temporal shifts within a single dataset (WikiMIA), whereas we provide a systematic analysis of biased members and non-members across three types of biases in nine published MI evaluation datasets. In contrast to these works, we also don\\u2019t merely show that a distribution shift exists, but that it is large enough to invalidate all MI evaluations conducted on these datasets to date.\\n\\nWe believe this broader perspective deserves greater attention in the community, as we show that issues in MI dataset creation are not a one-off event, but a systematic issue that plagues the entire field. And yet, many of these datasets are still being used for evaluating MI attacks (as a timely example, ref [3] below which was recently awarded an Outstanding Paper Award at EMNLP proposes a new MIA method and evaluates it on WikiMIA and BookMIA\\u2014datasets that we show are severely biased). \\nSo can we truly trust these reported numbers? It is crucial for researchers to approach evaluation with caution to avoid producing and propagating misleading results. \\n\\nWe believe that publishing this work in a venue such as ICLR, which plays a pivotal role in shaping the trajectory of machine learning research, will help ensure these concerns are brought to the forefront of the community's attention.\\n\\n>How do state-of-the-art attacks perform relative to blind attacks on unbiased datasets?\\n\\nOn a truly unbiased dataset, our blind attacks would perform no better than chance (by definition).\\nPrior work (e.g., Duan et al. 2024) show that current MIAs also do essentially random guessing on such datasets, but it is possible that future, stronger attacks could extract meaningful signal from unbiased datasets.\\n\\n[1] Do Membership Inference Attacks Work on Large Language Models? ICLR 2024.\\n\\n[2] LLM Dataset Inference: Did you train on my dataset? Arxiv,2406.06443.\\n\\n[3] Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method. Arxiv:2409.14781.\"}", "{\"summary\": \"This paper examines the datasets used in evaluating membership inference attacks on large language models and text-to-image generation models. The authors argue that current MIA evaluations are unreliable, as it is possible to differentiate members from non-members through blind attacks that do not utilize any information about the target model. Consequently, they suggest that state-of-the-art MIAs may not actually extract membership information effectively. To improve evaluation, the authors recommend using datasets with minimal distribution shifts between members and non-members, such as Pile or DataComp.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper investigates a significant issue in MIA research, highlighting the importance of unbiased evaluation datasets for accurately benchmarking attack effectiveness on text-based large foundation models.\", \"It provides a systematic evaluation of various datasets and baseline attacks, identifying three common distribution shift patterns that influence the success of MIAs.\"], \"weaknesses\": [\"The authors claim that current state-of-the-art MIAs fail to extract meaningful membership information, relying only on biased dataset evaluation results. However, this assertion may be overstated, as blind attacks use dataset-specific prior information (e.g., timestamps), which the proposed state-of-the-art attacks may intentionally avoid as they may aim to propose a general attack. 
These attacks might still capture useful membership signals, albeit weaker than the dataset-specific prior information. To better support this claim, experiments on less biased datasets (like Pile or DataComp, as suggested) are necessary. If state-of-the-art methods perform close to random guessing on such datasets, it would indicate their inability to capture membership information effectively.\", \"As a path forward, the paper advocates for future MIA evaluations using PILE, DataComp, or DataComp-LM. However, it is unclear whether these datasets also suffer from distribution shift issues. A simple approach to evaluate this would be to apply the proposed blind attacks on these datasets; if the success rate is near random guessing, it could indicate that these datasets are indeed less biased by distribution shifts, at least concerning the three identified types of shift.\", \"As highlighted by Dubinski et al. (2024), different splits of training and evaluation sets can yield significantly varied membership inference attack results. To ensure robustness of the evaluations, it would be beneficial to repeat the experiments with different random dataset splits, recording the mean and variance of attack success rates. This approach would provide a more reliable comparison between blind attacks and existing MIAs.\", \"The novelty and technical contributions of this paper appear incremental. Distribution shift issues in evaluation datasets have been previously discussed by Duan et al. and Maini et al., and while I appreciate the systematic evaluations in this paper, it largely provides a measurement study rather than new technical contributions or insights. Thus, the paper might lack the innovation typically expected at top-tier conferences, just my two cents.\"], \"questions\": [\"How do state-of-the-art attacks perform relative to blind attacks on unbiased datasets?\", \"Are the recommended datasets (PILE, DataComp, or DataComp-LM) genuinely unbiased?\", \"What are the effects of repeating experiments using different dataset splits on the evaluation outcomes?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns are involved.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> They are missing because no existing work reports that particular metric or provides a means to reproduce that metric using their attack.\\n\\nMost of these attacks have open-source implementations and corresponding datasets that are available. \\n\\n> We did test our blind baselines on this dataset and confirmed that the results were the same, which is why we chose not to include it.\\n\\nGood- please mention these details and this result somewhere in the paper\"}", "{\"comment\": \"Thank you for addressing my comments. I agree that for text-based foundation models, there will indeed be different lexical combinations and data distributions before and after certain dates. However, for vision foundation models, the distribution of training and testing data typically does not exhibit such a significant gap. Therefore, would blind attack not be generalizable to other domains and has limited extension ability?\\n\\nFurthermore, I think that merely highlighting the shortcomings of existing benchmarks is insufficient. The MIA proposed in this paper merely reveal the distribution differences in the data for foundation models, neglecting the differences that may exist in the model's processing of these data. 
As the authors mentioned in their response, ''we wouldn't know if this is because the attack actually is better at extracting membership signals from the model, or if it is just picking up the biased features that our attacks rely on''. There lacks an exploration of how foundation models handle data from before and after different dates, even if it were just some analytical experiments. Such exploration could provide the MIA community with new insights or observations, and would make the contribution of this paper more sufficient.\"}", "{\"summary\": \"This paper reveals that current evaluations of membership inference (MI) attacks on foundational models are flawed due to the use of different distributions when sampling members and non-members. The authors demonstrate this issue through an analysis of nine published MI evaluation datasets. They show that directly classify the samples in the MI evaluation datasets can outperform existing MI attacks. This finding indicates that current evaluation methods cannot accurately reflect the membership leakage of a foundational model's training data. This paper also proposes simple blind attack techniques, such as date detection and bag-of-words classifiers, which remain effective on datasets designed to eliminate distribution differences between members and non-members. The authors suggest that future MI attack evaluations should be conducted on models with a clear train-test split.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper focuses on the irrationality of MI evaluation datasets is important, especially in an era where foundation models are widely applied.\", \"This paper analyzes 9 published MI evaluation datasets, demonstrating that blind attacks outperform existing MI attacks on these datasets. This reveals the incompleteness of current MI evaluations.\", \"The attack methods proposed in this paper perform exceptionally well, showing significant performance improvements compared to existing MI attacks.\"], \"weaknesses\": [\"The comparison experiment setup is unclear. Were the same data conditions used the experiment section? (see Q1)\", \"The core of this paper is to point out the shortages of existing MI attacks on foundation models. However, in the introduction, the discussion does not revolve around this point but rather focuses on how simple attacks can also achieve good results. It is recommended to revise the structure of the introduction to highlight the main contributions of the paper.\", \"The experimental section is divided into sections based on the datasets, which makes it difficult to correspond with the previously mentioned common reasons for the intrinsic differences. This hinders the reader's understanding of the experiments and the paper's arguments.\"], \"questions\": \"Q1: In Section 3, the authors first extract all dates present in the text and then proceed with date detection attacks. Are samples lacking dates excluded from your inference attacks? Specifically, in the experimental section, are the 'Ours' attack results shown in Table 2 based on a dataset that has filtered out samples without dates? Similarly, are the 'Best Attack' results measured on such a filtered dataset, or are they directly taken from the original papers? This distinction is crucial as it determines the fairness of your comparison experiments.\", \"q2\": \"In your date detection attack, could you elaborate on how the date threshold is selected? 
The relevant section does not detail the method for choosing the threshold. Is it a matter of testing various thresholds and selecting the one that yields the best attack performance?\", \"q3\": \"Regarding your Bag-of-words classification attack, what is the underlying insight or motivation for this approach? Why can different word combinations be used to infer membership properties, particularly without any interaction with the foundational models?\", \"q4\": \"This paper primarily concentrates on the datasets used for MI evaluation, but it does not account for the influence of foundational models. However, the objective of MI attacks is to discern differences between a model's behavior on members and non-members. Even with datasets that are easily distinguishable based on dates or words, existing attacks still fail to achieve satisfactory results after considering the foundational models. Does this suggest that the current use of these datasets remains valid? Furthermore, by focusing on direct distinctions in membership labels and disregarding interactions with foundational models, are the signals you use for differentiation, such as dates and words, being utilized by existing MI attacks that do interact with foundational models? If current MI attacks are not leveraging your information, does it further indicate that the use of these datasets is reasonable (at least at current stage)?\", \"q5\": \"In Section 3.1, this paper proposes three common reasons for the intrinsic differences between member and non-member samples. However, since different MI evaluation datasets are constructed using various strategies, it is unclear which aspects are evaluated with respect to these datasets. It would be beneficial to clarify this point in Table 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"\\\"An alternative avenue is to forego MI evaluations on arbitrary foundation models,...\\\" explicitly recommends not using membership inference evaluations on foundation models, without any viable suggestion *specifically* for those foundation models.\\n\\n> However, these works have all focused on one specific type of shift present in one or two datasets (temporal wiki and arxiv).\\n\\nYes, but the final takeaway about membership inference evaluations being broken for LLMs remains the same. In that aspect, this paper's contributions are limited to showing the same result on some more datasets. The fact that it exists for multiple other models and datasets is sufficient to show how the current pipeline of privacy evaluation is flawed.\\n\\n> In contrast, our blind attack directly distinguishes members and non-members based on intrinsic differences in features.\\n\\nThis is not true, as other reviewers have also pointed out rightly. The attacks here are not truly \\\"blind\\\" - you are inspecting the train and test data, knowing which is which, to pick certain identifiers that are useful in distinguishing between the two. While in theory this should be no better than random guessing, calling it \\\"blind\\\" is misleading.\\n\\n> These edits would indeed make the specific blind baseline we consider weaker, but it would not fix all distribution shifts and so a different blind baseline would likely still work.\\n\\nYes, that was exactly my question- *how much* would it help? 
Can you still design \\\"blind\\\" attacks that work or is the distribution shift heavily dependent on this explicit mention of dates.\"}", "{\"comment\": \"Thank you for your response. I now understand that your work aims to highlight the shortcomings of existing datasets used for MI attacks in text and demonstrate that a very naive approach based on text features can achieve strong performance. I agree this is an important issue, as noted by the other reviewers. However, I don\\u2019t believe it is accurate to describe your approach as \\\"blind,\\\" as its success relies on knowing the distribution shift between training and testing data and then selecting or training a classifier accordingly.\\n\\nI have also reviewed the comments from the other reviewers and I have a similar feeling. Therefore, I will maintain my score.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your detailed feedback and for taking the time to review our work.\\nUnfortunately, it seems that the reviewer has misunderstood the core contribution of our work. We hope the response below can clarify what our work does.\\n\\n---\\n\\n>The assumption of this paper on \\\"blind\\\" is not correct. \\\"Blind\\\" should be on both the model and data set [2021], but this paper relies on too much target dataset information. \\n\\nThe referenced paper uses \\u201cblind\\u201d in a different sense than ours. In Hui et al., a blind attack is one that only has black-box access to the model.\\n\\nWe consider attacks that do not have any access to the model at all! Such an attack shouldn\\u2019t even be able to do membership inference since it can\\u2019t possibly infer any information from the model. So in principle it should be irrelevant how much information our attacks have about the target dataset. And yet we show that such \\u201cblind\\u201d attacks can still distinguish \\u201cmembers\\u201d and \\u201cnon-members\\u201d because the evaluation datasets are badly constructed.\\n\\nWe will clarify this terminology in our paper.\\n\\n>The paper provides limited experimental details. For instance, it does not specify which models were targeted for the membership inference attacks.\\n\\nThis is a misunderstanding of our paper. Our attacks do not use any access to a model\\u2014this is a core point of our entire work.\\n\\n>The paper proposes ideas for constructing better datasets for evaluating membership inference attacks, but it does not provide experimental results or analysis on whether the blind attack would still outperform SOTA methods on these improved datasets.\\n\\nBy definition, a dataset with an IID split would not allow any blind attack to work better than random chance.\\nFor completeness, we verified this on the PILE, and found that our blind attacks indeed cannot distinguish the train and test sets better than random.\\n\\n>Current membership inference attacks are typically evaluated across multiple datasets. For example, Zhang et al. [2024a] evaluate their Min-K%++ attack on Wikipedia, GitHub, Pile CC, PubMed Central, and many other datasets to demonstrate generalizability. However, the blind attack\\u2019s performance on other datasets is not explored in the paper, making it difficult to conclude that current evaluations are entirely flawed based on the results from just one dataset.\\n\\nIndeed, we are not claiming that existing evaluations are completely flawed. 
We are just saying that the use of the specific datasets we consider should be avoided because these datasets do not provide any meaningful signal.\\n\\nSo, for example, if a paper evaluated their MI attack on WikiMIA and the Pile, we argue it would be better if they just evaluated on the Pile.\\n\\nAlso note that we are not claiming that any individual attack (such as Min-K%++) is flawed. We are saying that (parts of) the evaluations of these attacks are flawed. It is entirely possible that the attacks are very effective, but the current evaluations cannot be trusted to show this.\\n \\n>Duan et al. (2024) propose that temporal shifts can influence the performance of membership inference attacks. Could you elaborate further on the differences between your work and Duan et al. [2024]?\\n\\nPast works have indeed pointed out issues of distribution shifts for MI evaluations. However, these works have all focused on one specific type of shift present in one or two datasets (temporal wiki and arxiv).\\n\\nWe go further and present a systematic analysis of biased members and non-members across 9 published MI evaluation datasets of three different types. We show with experiments that the shift is so severe, that any MI evaluation scores on these datasets cannot be trusted. We also show, more alarmingly, that this issue pervades other kinds of datasets - specifically our other two types of dataset constructions (biases in data replication, distinguishable tails) that are not constructed based on a temporal split.\\n\\nWe will clarify our contributions and relationship to prior work.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your detailed feedback and for taking the time to review our work. We understand that the significance of our findings may not have been fully apparent, and we appreciate the opportunity to clarify the key contributions of our paper.\\n\\n---\\n\\n\\n>The comparison experiment setup is unclear. In Section 3, the authors first extract all dates present in the text and then proceed with date detection attacks. Are samples lacking dates excluded from your inference attacks? \\n\\nFor samples that do not contain a date, our attack simply has to revert to random guessing (or, to minimize false positives, we simply output \\u201cnot member\\u201d). So we don\\u2019t do any special filtering here. We apply our attack (and prior ones) to the full dataset.\\n\\n>In your date detection attack, could you elaborate on how the date threshold is selected? \\n\\nSince we know the dataset and how it was constructed, we simply select the dataset\\u2019s cutoff date as our threshold date. More generally, we could also iterate over a few choices of dates and choose the best one based on a validation set.\\n\\n>The core of this paper is to point out the shortages of existing MI attacks on foundation models. However, in the introduction, the discussion does not revolve around this point but rather focuses on how simple attacks can also achieve good results. \\n\\nOur point in the introduction was to say that since blind (\\u201cnaive\\u201d) attacks can achieve high scores, existing MI evaluations are flawed. We will clarify this.\\n\\n>The experimental section is divided into sections based on the datasets, which makes it difficult to correspond with the previously mentioned common reasons for the intrinsic differences. 
\\n\\nWe do break down the evaluation section into subheadings 4.1, 4.2, and 4.3, each corresponding to a particular reason for differences as the reviewer suggests. Within each of these subsections, we have one sub-subsection per defense.\\nDoes the reviewer have a different structure in mind that would be easier to follow? \\n\\n>Regarding your Bag-of-words classification attack, what is the underlying insight or motivation for this approach? Why can different word combinations be used to infer membership properties, particularly without any interaction with the foundational models?\\n\\nThe motivation is basically a generalization of our date-extraction attack. Since the members and non-members come from different distributions, there are likely some words that are more likely in one distribution than the other, that can be used to make a good guess.\\n\\n>This paper primarily concentrates on the datasets used for MI evaluation, but it does not account for the influence of foundational models. However, the objective of MI attacks is to discern differences between a model's behavior on members and non-members. \\nEven with datasets that are easily distinguishable based on dates or words, existing attacks still fail to achieve satisfactory results after considering the foundational models. Does this suggest that the current use of these datasets remains valid? Furthermore, by focusing on direct distinctions in membership labels and disregarding interactions with foundational models, are the signals you use for differentiation, such as dates and words, being utilized by existing MI attacks that do interact with foundational models? If current MI attacks are not leveraging your information, does it further indicate that the use of these datasets is reasonable (at least at current stage)?\\n\\nThis is a good question. Current MI attacks indeed perform very poorly even on very biased datasets (worse than our blind baselines). While this suggests there is room for improvement, we would still argue for these datasets to be dropped. Indeed, if a new attack comes along that does much better on these datasets, we wouldn\\u2019t know if this is because the attack actually is better at extracting membership signal from the model, or if it is just picking up the biased features that our attacks rely on.\"}", "{\"summary\": \"This paper demonstrates that existing MI evaluations for foundation models perform poorly due to distribution shifts between member and non-member data. The authors show that simple \\\"blind\\\" attacks, which do not query the model, can outperform state-of-the-art MI attacks on common MI evaluation datasets. They identify temporal shifts, biases in data replication, and distinguishable tails as common causes for the distribution mismatch between members and non-members.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": [\"While this topic has been explored by several recent works (both concurrent and prior), this work goes a step beyond to demonstrate the extend of distributional differences between members and non-members, for both LLM and VLM evaluation data for membership inference.\", \"The paper is well written and supports most of its claims with empirical evidence and extensive evaluation.\"], \"weaknesses\": [\"The submission has significant issues regarding originality and the characterization of related work. 
The authors' framing of certain works as \\\"concurrent\\\" appears to minimize substantial overlaps, particularly with [1] and [2] which preceded the ICLR deadline by 4 and 8 months respectively. This timeframe makes it difficult to justify as concurrent research. The paper's main conclusion about flawed non-member selection methods introducing detectable distributional shifts largely mirrors the findings already established in [2].\", \"The conclusion (L413) takes a problematic stance by seemingly absolving model trainers of accountability. Instead of abandoning membership inference evaluations, research should focus on developing methods that either avoid non-member requirements (like data-extraction attacks) or leverage trainer-provided evaluation data. Dismissing these evaluations would encourage (proprietary) model trainers to evade scrutiny of their data usage practices.\", \"The claim on L464-465 about \\\"no knowledge of the model\\\" requires clarification. The paper's \\\"blind\\\" baselines actually incorporate significant domain knowledge about data collection patterns (e.g., dates and special tokens). The authors should explicitly state that \\\"blind\\\" specifically refers to lack of model access, as the attacks still utilize direct data knowledge and split information to construct rules and meta-classifiers.\", \"## Other Comments\", \"L89: Current state of the art is RMIA [3], nor LiRA.\", \"L89: Please provide some more context for the 'standard' membership inference game (member and non-members should be same distribution etc.) Context is especially important for this work to understand the nuances behind different member/non-member data distributions.\", \"L101: \\\"Many of these\\\" - Please be exact. Consider adding a table that talks about MI attack attempts on foundation models, the benchmarks they use (propose or new), and whether they suffer from the train-test split issues that the authors mention here.\", \"L154: The assumption about future dates is oversimplified and overlooks legitimate cases in fiction, climate research, and policy documents\", \"Table 1 appears redundant given Table 2's more comprehensive presentation\", \"Table 2: Please make a distinction between datasets for LLMs and those for VLMs.\", \"L467: \\\"... auditing unlearning methods\\\" - there are several works describing better ways to audit unlearning [4]. It should also be pointed out that these membership-inference attacks do not work well to begin with even with properly split train/test data [2], so it is not surprising that it will not be used to audit unlearning.\", \"#### References\", \"[1] Maini, Pratyush, et al. \\\"LLM Dataset Inference: Did you train on my dataset?.\\\" arXiv:2406.06443 (2024).\", \"[2] Duan, Michael, et al. \\\"Do membership inference attacks work on large language models?.\\\"COLM, 2024\", \"[3] Zarifzadeh, Sajjad, Philippe Liu, and Reza Shokri. \\\"Low-Cost High-Power Membership Inference Attacks.\\\" ICML, 2024.\", \"[4] Lynch, Aengus, et al. \\\"Eight methods to evaluate robust unlearning in llms.\\\" arXiv:2402.16835 (2024).\"], \"questions\": [\"Some of the \\\"blind\\\" baselines rely on detecting data references like 2024 etc. As an adversary cognizant of the training cutoff (or even an auditor), a simple solution to fix this shift would be arbitrarily replacing all such data references for non-members, maybe shift them by N nears back to match the cutoff range of the model. What happens in such a scenario? 
Can \\\"blind\\\" attacks still be successful?\", \"In Table 2, why are some of the entries missing values, or entire metrics (AUC ROC) not reported? These datasets are publicly available (which is how the authors get their attack's results to begin with) so I do not see why corresponding values cannot be filled in for existing works.\", \"I am confused by the 'Greedy rare word selection' protocol- is the sorting done using the TPR/FPR ratios on test data? If so, one can always design and selective use metrics to get good performance on test data if you are using metrics from test data to begin with. Please explain this part a bit more clearly.\", \"L240: \\\"...and thus do not include it\\\" - should be fairly simple inclusion and I do not see why it must be excluded like this? Also, as a reader I do not know what \\\"their construction is\\\" - if it is so similar, please briefly explain how they do it.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response [2 / 2]\", \"comment\": \">In Table 2, why are some of the entries missing values, or entire metrics (AUC ROC) not reported? These datasets are publicly available (which is how the authors get their attack's results to begin with) so I do not see why corresponding values cannot be filled in for existing works.\\n\\nThey are missing because no existing work reports that particular metric or provides a means to reproduce that metric using their attack. The column with missing entries is supposed to report \\u201cBEST Reported MIA\\u201d in the literature. It is only sensible to leave it blank if there is nothing reported in the literature at all.\\n\\n>I am confused by the 'Greedy rare word selection' protocol- is the sorting done using the TPR/FPR ratios on test data? If so, one can always design and selective use metrics to get good performance on test data if you are using metrics from test data to begin with. Please explain this part a bit more clearly.\\n\\nNo, the sorting is not done on the test data and you are absolutely right that if we did that we could always easily get a good score. In each iteration of our attack, we split the dataset in a train-test split (90:10) and perform this greedy selection of rare words using only the training subset. The metrics are evaluated on the held-out test set. We repeat our experiment 10 times and report the mean.\\nThis is mentioned in line 204 of our paper. We will further clarify this important point.\\n\\n>L240: \\\"...and thus do not include it\\\" - should be fairly simple inclusion and I do not see why it must be excluded like this? Also, as a reader I do not know what \\\"their construction is\\\" - if it is so similar, please briefly explain how they do it.\\n\\nWe did not include this dataset because it is almost the exact same dataset as BookMIA (already discussed in our paper). This dataset starts with all the members of BookMIA and adds a few more books using the exact same concept as the construction of BookMIA\\u2014i.e., books published before a particular cut-off date are included as members, and books published after the cut-off date are considered non-members. 
We did test our blind baselines on this dataset and confirmed that the results were the same, which is why we chose not to include it.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response\", \"comment\": \"> The fact that it exists for multiple other models and datasets is sufficient to show how the current pipeline of privacy evaluation is flawed.\\n\\nYes, we're glad we agree that showing this phenomenon for multiple datasets is important. This is exactly what our work does.\\n\\n> The attacks here are not truly \\\"blind\\\" - you are inspecting the train and test data.\\n\\nIt is unclear to us how an \\\"attack\\\" could be doing any less than this. We are essentially learning how to distinguish members and non-members (in a generalizable way, without over fitting). \\nAre we supposed to show we can beat SOTA MI attacks without any knowledge of the model *or* data?\\n\\n> Yes, that was exactly my question- how much would it help? Can you still design \\\"blind\\\" attacks that work or is the distribution shift heavily dependent on this explicit mention of dates.\\n\\nYes, our bag-of-words attack outperforms prior MI attacks even in the absence of explicit dates. We omitted this result as it is less interpretable but we are happy to add it if the reviewer believes it helps illustrate our point.\\n\\n> Most of these attacks have open-source implementations and corresponding datasets that are available.\\n\\nYes, but they were clearly not intended or optimized to be used with these metrics, as the original papers don't mention them. So we don't think a comparison would be fair. If anything it would weaken our point.\"}", "{\"comment\": \"Thank you for your response. I have carefully reviewed the rebuttal and the comments from other reviewers. For a top-tier conference, it is important for the paper to demonstrate sufficient scientific contributions. Including potential solutions would be one direction that could significantly strengthen the work. I also recommend that the paper quantitatively incorporate experimental results from prior MIA attacks using datasets such as PILE, DataComp, or DataComp-LM to support its claims better. Based on the above reasons, I will maintain my score.\"}", "{\"comment\": \"Thank you for your response. I understand that the focus of your work is to highlight the inherent shortcomings of the datasets used to evaluate MIA in LLMs, which I agree is a significant challenge in this field. However, for a top-tier conference, simply pointing out issues with a naive attack method may not be enough in terms of contribution and novelty. My suggestion would be to either develop a more accurate method for assessing MIA in LLMs or to conduct a deeper analysis of MIA performance on existing evaluation datasets. Such an analysis is expected to reveal the relationship between current MIA performance and differences in data distribution. Providing solutions or even potential solutions to address this important problem would enhance the significance of your work. Based on these considerations, I will maintain my current score.\"}", "{\"summary\": \"The paper argues that previous evaluations of membership inference attacks are flawed due to the distributional differences between members and non-members. 
The paper analyzes nine datasets and demonstrates that blind attack techniques, such as date detection (assume some text samples contain dates), bag-of-words classification (classifier is trained on 80% of the members and non-members, then test on the left 20% members), and greedy rare word selection, outperform previous state-of-the-art (SOTA) methods in the evaluation metric.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides extensive experimental results to support their claims. These results demonstrate that blind attack techniques outperform state-of-the-art methods under their settings.\", \"weaknesses\": \"1. The assumption of this paper on \\\"blind\\\" is not correct. \\\"Blind\\\" should be on both the model and data set [2021], but this paper relies on too much target dataset information. For example, one of the proposed methods only works if the dataset contains data information, and another method even needs 80% of labeled member data as the attacker's training samples. From my understanding of the literature, this rich information may not be available to other membership inference attacks, potentially giving the proposed blind attack an unfair advantage.\\n\\nHui, Bo, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, and Yinzhi Cao. \\\"Practical blind membership inference attack via differential comparisons.\\\", 2021\\n\\n2. The paper provides limited experimental details. For instance, it does not specify which models were targeted for the membership inference attacks.\\n\\n3. The paper proposes ideas for constructing better datasets for evaluating membership inference attacks, but it does not provide experimental results or analysis on whether the blind attack would still outperform SOTA methods on these improved datasets.\\n\\n4. Current membership inference attacks are typically evaluated across multiple datasets. For example, Zhang et al. [2024a] evaluate their Min-K%++ attack on Wikipedia, GitHub, Pile CC, PubMed Central, and many other datasets to demonstrate generalizability. However, the blind attack\\u2019s performance on other datasets is not explored in the paper, making it difficult to conclude that current evaluations are entirely flawed based on the results from just one dataset.\\n\\nJingyang Zhang, Jingwei Sun, Eric Yeats, Yang Ouyang, Martin Kuo, Jianyi Zhang, Hao Yang, and Hai Li. Min-K%++: Improved baseline for detecting pre-training data from large language models. arXiv preprint arXiv:2404.02936, 2024a.\", \"questions\": \"Duan et al. (2024) propose that temporal shifts can influence the performance of membership inference attacks. While you mention that there are differences between your paper and this concurrent work, from my perspective, both papers seem to demonstrate similar findings. Could you elaborate further on the differences between your work and Duan et al. [2024]?\\n\\nMichael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer, Yulia Tsvetkov, Yejin Choi, David Evans, and Hannaneh Hajishirzi. Do membership inference attacks work on large language models? arXiv preprint arXiv:2402.07841, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors evaluate membership inference attacks against foundation models, and find that existing attacks are ineffective for determining the membership of a given sample. 
In particular, the author find that a blind baseline that distinguishes between member and non-member distributions achieves higher success rate compared to existing attacks.\\n\\nReviewers generally found the message of the paper to be important and timely. However, there exist prior and concurrent work that made similar discoveries, and while the paper's message is impactful, the paper currently lacks depth and would greatly benefit from designing a practical solution. AC agrees with the reviewers and recommend rejection, but encourage the authors to improve the paper's technical depth and resubmit it to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers and authors discussed concerns such as the claim of concurrent work, the exact definition of blindness, and practical implications of the paper's message. These concerns remain even after the author rebuttal.\"}", "{\"comment\": \"I think Reviewer zRbN's recent comment (https://openreview.net/forum?id=BXMoS69LLR&noteId=BFFdUJqYGt) accurately captures my feelings about this work. I do agree that there is a problem in the way membership evaluations have taken place in foundation models, and it needs to change. However, this has been explored and discussed by other papers already and while this work does add some value in exploring *how* bad it is, it is very incremental (if we already know that non-member selection is flawed, does it really matter whether it is \\\"very flawed\\\" or \\\"somewhat flawed\\\"; the conclusion either way is to look for alternatives).\"}" ] }
BWuBDdXVnH
ControlAR: Controllable Image Generation with Autoregressive Models
[ "Zongming Li", "Tianheng Cheng", "Shoufa Chen", "Peize Sun", "Haocheng Shen", "Longjin Ran", "Xiaoxin Chen", "Wenyu Liu", "Xinggang Wang" ]
Autoregressive (AR) models have reformulated image generation as next-token prediction, demonstrating remarkable potential and emerging as strong competitors to diffusion models. However, control-to-image generation, akin to ControlNet, remains largely unexplored within AR models. Although a natural approach, inspired by advancements in Large Language Models, is to tokenize control images into tokens and prefill them into the autoregressive model before decoding image tokens, it still falls short in generation quality compared to ControlNet and suffers from inefficiency. To this end, we introduce ControlAR, an efficient and effective framework for integrating spatial controls into autoregressive image generation models. Firstly, we explore control encoding for AR models and propose a lightweight control encoder to transform spatial inputs (e.g., canny edges or depth maps) into control tokens. Then ControlAR exploits the conditional decoding method to generate the next image token conditioned on the per-token fusion between control and image tokens, similar to positional encodings. Compared to prefilling tokens, using conditional decoding significantly strengthens the control capability of AR models but also maintains the model efficiency. Furthermore, the proposed ControlAR surprisingly empowers AR models with arbitrary-resolution image generation via conditional decoding and specific controls. Extensive experiments can demonstrate the controllability of the proposed ControlAR for the autoregressive control-to-image generation across diverse inputs, including edges, depths, and segmentation masks. Furthermore, both quantitative and qualitative results indicate that ControlAR surpasses previous state-of-the-art controllable diffusion models, e.g., ControlNet++.
[ "controllable image generation", "autoregressive models", "autoregressive image generation", "diffusion models", "image generation" ]
Accept (Poster)
https://openreview.net/pdf?id=BWuBDdXVnH
https://openreview.net/forum?id=BWuBDdXVnH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q9T4vLE0Qp", "pXGZGtEEq5", "lfBsc0dQa4", "lIodla1QpW", "fnYOEWLZjk", "e09DK9Xtvy", "cMFCp2TjSm", "ZnoHAl6nyD", "ZUlFoZlZ5I", "WjJT7XFqEK", "VLUP9PHYcH", "UnsptVu002", "QSXJZtBows", "ORS3gWRFFq", "NJNAtnlHS5", "Gyb9wLvKod", "GTN7mRDO4w", "FCcoQ2cWAX", "DVk7cC9Qls", "BTTAfsSjV9", "3SZ4n1DwHv", "2NcLUSSWYA", "1XFagtOOhM", "12Xtq3OwPC" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review" ], "note_created": [ 1730743931510, 1732383226817, 1732523320327, 1732437011973, 1730697088343, 1732379457231, 1732523290741, 1732382862717, 1732383070251, 1732854406448, 1730576560402, 1733107920697, 1732556463282, 1732412767117, 1732618581000, 1732380539219, 1730792980478, 1732379533062, 1732864356592, 1732717559174, 1732588003005, 1737523383312, 1733059865086, 1735023021418 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission170/Reviewer_uMML" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_1dGR" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_Scc6" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_Scc6" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_uMML" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_1dGR" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_GBJo" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_GBJo" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Reviewer_GBJo" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission170/Authors" ], [ "ICLR.cc/2025/Conference/Submission170/Area_Chair_5RrA" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a method to condition an autoregressive generative image model on different modalities, such as edges, depth, and others. The application formulation is very similar to ControlNet for diffusion models, but the method is novel for AR models. The proposed method consists in (1) generating patch embeddings for the conditioning input (2) adding those embeddings to the image embeddings in certain augmented layers in the AR model (3) processing the combined patches normally. The new layers are trained on a large dataset with conditioning inputs and the method achieves strong results.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper has several strengths that make it compelling:\\nThe work has a very simple formulation that is elegant. There is good demonstration on how it\\u2019s better than the other obvious approach of conditional prefilling. 
Also, very few other work exists tackling this problem and this is, to the best of my knowledge, a novel approach for conditioning AR models. They also present class-to-image and T2I evaluations and show strong results on several datasets.\\n\\nAlso, this direction of research discovers essential knowledge for these new models, which we already have for diffusion models. And we can also see that ControlAR is smaller than a typical ControlNet. Further, the paper has good details on experimental setup, good ablations, specifically some interesting ones on position+quantity of control layers.\", \"weaknesses\": \"I don't think I have found weaknesses in the work that should lead to rejection. I am curious about what would happen if certain experiments were run, and these are not very extensive. Some examples:\\n1. Which layers are ideal to introduce the new control layers on? Right now we have a coarse study of this but it could go deeper, although it's a lot of work that might not be super useful in the end.\\n2. Some output images shown in the paper show some color saturation or excess contrast - is this an effect of the control layers or just the base model? Is training the control layers biasing the model towards some unrealistic outputs?\", \"questions\": \"I think I am strongly decided for acceptance given the strengths of the paper. I'll read the other reviews in case I missed anything but currently don't have major questions. The paper presents a straightforward improvement that is necessary for these types of models and does a great job in presenting it, and in evaluating it.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response for Reviewer 1dGR [3/3]\", \"comment\": \"> **Q: It would be beneficial if the author evaluate the text-image alignment.**\\n\\n**A:** Thanks for your suggestion! We used the **CLIP-score** to evaluate the alignment between the generated images and the text prompts, with experiments conducted on the ADE20K dataset. As shown in the table below, the CLIP-score between text and images in ADE20K is 31.26, primarily due to the use of pseudo-caption annotations, which contain significant noise. Our method achieved a CLIP-score of 30.86, which is very close to the inherent CLIP score of the dataset.\\n\\n|Method | CLIP-score |\\n|----------------|--------------|\\n|ADE20K | 31.26 | \\n|T2I-Adapter | 30.65 |\\n|UniControlNet | 30.59 |\\n|UniControl | 30.92 |\\n|ControlNet | 31.53 |\\n|ControlNet++ | 31.96 |\\n|Ours | 30.86 |\\n\\n\\n> **Q: About resolution control. The paper saied ControlAR extend the ability of the autoregressive model to generate arbitrary resolution, making it \\\"easy\\\" to achieve any-resolution image generation without resolution-aware prompts. I have two questions and suggestions: (1) Can ControlAR or its extension enable resolution control when no specific control image like edge or seg map is available? If so, how? (2) Since both ControlAR and resolution-aware prompts require additional training, it is unclear if ControlAR actually offers a easy solution, despite the intuition tells us so. A quantitative or qualitative comparison with resolution-aware text prompts would strengthen this argument.**\\n\\n**A:** Thanks for your questions and suggestions very much!\\n\\n(1) It's not really difficult to make our ControlAR do things like control resolution even when there's no specific control image input. 
We can generate a grayscale map of the corresponding resolution according to the desired height and width, this grayscale map consists of the number of 16 \\u00d7 16 small squares, and the grayscale value of each row decreases from left to right, the leftmost 255, the rightmost 0. This grayscale image is the control image that determines the resolution. Thanks to the strong positional dependence of the control decoding strategy between the image token and the control condition token, the model only needs to generate a sequence as long as the control condition sequence. And since the grayscale value of each row is decreasing from left to right, the model can easily know when it is necessary to switch to the next row. We have verified the feasibility of this approach on a small experimental scale. We provide some visualization results in **Fig. 8 of the revised version**.\\n\\n(2) Using resolution-aware prompts to control the resolution as in Lumina-mGPT[1] requires the constant generation of `<end-of-line>` tokens during the prediction of the image and the eventual prediction of `<end-of-image>` token. This approach requires the model to make its own decisions about where to make line breaks and where to end generation, but our ControlAR is directly telling the model where to make line breaks and end generation. We try to train our ControlAR using the approach mentioned in (1) and only need to fine-tune the weights based on LlamaGen-XL (512\\u00d7512) on about 1M text-image paired data for 30k steps to achieve a good arbitrary resolution generation capability without specific control image. This proves that our ControlAR can be a very effective strategy for controlling resolution.\\n\\n> **Q: Minor: Typo: double \\\",\\\" in the 3rd contribution**\\n\\n**A:** Thank you very much for the correction, we have corrected this error in the revised version.\", \"references\": \"\\\\\\n[1] Dosovitskiy A. An image is worth 16x16 words: Transformers for image recognition at scale[J]. arXiv preprint arXiv:2010.11929, 2020.\\\\\\n[2] Oquab M, Darcet T, Moutakanni T, et al. Dinov2: Learning robust visual features without supervision[J]. arXiv preprint arXiv:2304.07193, 2023.\\\\\\n[3] https://github.com/huggingface/transformers\\\\\\n[4] Liu, Dongyang, et al. \\\"Lumina-mgpt: Illuminate flexible photorealistic text-to-image generation with multimodal generative pretraining.\\\" arXiv preprint arXiv:2408.02657 (2024).\"}", "{\"comment\": \"Dear Reviewer,\\n\\nHi! May we ask if our response has addressed your concerns? If you have any other questions, we would be more than happy to discuss them with you. In the revision version, we have added many new experimental results that can help address your concerns.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Thank you for your recognition!\", \"comment\": \"Thank you very much for your recognition and for increasing the score!\", \"about\": \"> \\\"Firstly, we encode these diverse controls into a sequence of our expected length (which determines the resolution of the output image), e.g., 512 tokens. And then we use the control sequence with conditional decoding for the controllable image generation.\\\"\\n\\nI'd like to explain further about how to extend the proposed ControlAR to more general controllable image generation beyond spatial controls, *e.g.,* style transfer, color control or identity-preserving generation. 
ControlAR can serve as a general paradigm for autoregressive models.\\n\\n* *General Control Encoding*: We adopt Control Encoders to encode various control inputs or combined controls (multi-control generation) into control sequences. This modularity enables seamless integration of diverse control inputs, such as:\\n - Spatial and geometric controls (e.g., depth maps, segmentation maps).\\n - Semantic controls (e.g., text prompts, scene descriptions).\\n - Style and appearance controls (e.g., color palettes, artistic styles). \\n\\n We can either share a unified encoder across multiple tasks or use different encoders for different tasks, offering a flexible and adaptable setup. Various control inputs are encoded into a sequence, and we can further control the resolution of the generated image by adjusting the length of the control encoding sequence.\\n \\n For general controllable generation tasks, a ViT model with 22M parameters has already demonstrated excellent performance. The impact of this component on inference efficiency is much smaller compared to the decoding process. If a control encoder needs to handle multiple controls simultaneously, it indeed requires a larger model to ensure sufficient generation quality. However, compared to generative networks like LlamaGen, the parameter size and computational cost of our encoder are significantly smaller. Currently, our inference efficiency is primarily determined by the autoregressive generation network.\\n\\n\\n* *Conditional Decoding*: we use the control sequence with **conditional decoding** to predict the image tokens.\\n\\n\\nIf you have further questions, we would be more than happy to discuss and exchange ideas with you. We believe that applying ControlAR to more general controllable generation tasks is an exciting research direction to explore.\\n\\nI greatly look forward to and enjoy discussing with you. Have a nice day!\"}", "{\"summary\": \"The paper introduces ControlAR, a method for image-controlled autoregressive-based image generation. A ViT-based encoder is used to extract features from control images such as edge or depth maps, and a conditional decoding method (adding the control features and token features according to a fixed corresponding spatial relationship) is used to generate the output image. This paper has demonstrated good quantative and visualization results. This method also enables image generation at arbitrary resolutions according to the resolution of the control image.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Important topic**: The image-controlled generation task is in general of great interest, and important for the recently re-rised research trend on AR-based image generation models.\", \"**Simple and reasonable**: This method is a reasonable exploration towards controlled generation in image AR models with simple token feature addition in decoding.\", \"**Good ablations** on training strategy, fusion strategy (cross-attention or addition, addition layers), control encoders.\", \"**Good visualization results**.\"], \"weaknesses\": \"I will put all questions in this section. Note that they're not all weaknesses.\\n\\n1. **Equation (4) is not written properly**, here $q_i$ represents a discrete image token, then $q_i \\\\in [V]$, where $V$ is the vocabulary or codebook size. Then, the summation of a discrete token with another continuous feature $q_i + C_{i+1}$ in Equation (4) is not well defined. 
Besides, ControlAR adds up the control feature to the token feature in three intermediate layers, not the input embedding.\\n\\n2. **About the pathway choice**\\n \\n Throughout the paper and in Fig. 2, the authors argue that compared to putting control tokens in the sequence (\\\"conditional prefilling\\\"), ControlAR benefits from a shorter sequence length and eliminates the need for the model to learn a fixed one-to-one spatial mapping. This makes sense that ControlAR is a cheaper way to train a image-controlled AR generation model.\\n\\n However, I have some concerns about this path: ControlAR trains each controlled generation task separately, rather than offering a general model. Besides, the predefined one-to-one spatial mapping of ControlAR seems to be restricted to specific tasks. In contrast, putting control tokens in the sequence has the potential to support general generation of texts and images, and will allow flexible and diverse control relationships between them. For example, one might require image-controlled generation based on style rather than spatially local controls like edges or segmap; and one might require referencing multiple control images to generate a single output. \\n\\n After all, if the goal is simply to achieve specific, local controls, well-established diffusion models and strategies are already very handy. One of the key motivations of the recent trend in exploring AR image generation models is to achieve a more general and flexible framework that can unify control and generation across modalities, right? I would like to know the authors' opinion on this.\\n\\n3. **Lack some details and explanations**\\n 1. About the ablation on control encoders: What are the messages? Is vanilla training better or self-supervised training better for control feature extraction? It's not intuitive that we need different ViT models to extract features from control images for class-to-image and text-to-image generation. \\n 2. For C2I, the authors initialize the control encoder with VIT-S. The original position encoding is global trainable and of fixed size, and is thus not suitable for multi-resolution. How do the authors handle this?\\n 3. Section 4.2 lacks detailed information on multi-resolution training. It would be helpful if the authors provided more details, such as the size of the multi-resolution dataset used for training, the design of the architecture (e.g., positional encoding) for multi-resolution adaptation, and so on.\\n 4. It would be beneficial if the author evaluate the text-image alignment.\\n\\n4. **About resolution control**\\n\\n The paper saied ControlAR extend the ability of the autoregressive model to generate arbitrary resolution, making it \\\"easy\\\" to achieve any-resolution image generation without resolution-aware prompts. I have two questions and suggestions: (1) Can ControlAR or its extension enable resolution control when no specific control image like edge or seg map is available? If so, how? (2) Since both ControlAR and resolution-aware prompts require additional training, it is unclear if ControlAR actually offers a easy solution, despite the intuition tells us so. 
A quantitative or qualitative comparison with resolution-aware text prompts would strengthen this argument.\", \"minor\": \"Typo: double \\\",\\\" in the 3rd contribution\", \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response for Reviewer GBJo\", \"comment\": \"We sincerely appreciate your recognition of our work and genuinely hope that our response addresses your concerns. If you have any further questions, please feel free to let us know!\\n\\n> **Q1: Performance comparisons with recent models such as Lumina-mGPT and Cm3leon (or Anole), such as in segmentation-to-image tasks, would strengthen this paper. Additionally, an analysis or discussion on the potential for integration with these models would be beneficial.**\\n\\n**A:** Thank you very much for your suggestion, we have added some quantitative comparative results with recent work including OmniGen[1] and Lumina-mGPT[2] , as shown in the following table. \\nAs for CM3Leon[3] and Anole[4], we attempted to compare CM3Leon (7B parameters) on control-to-image tasks; however, its corresponding code is not publicly available, making a direct comparison impossible. Additionally, Anole (7B parameters) focuses on multimodal autoregressive text-to-image generation and does not explore the control-to-image generation method.\\nAdditionally, our method does not require any adjustments to the structure of the generative network or modifications to the length of the sequences, which means that we can easily migrate our ControlAR to other autoregressive image generation models, such as Lumina-mGPT.\\nThanks for your suggestion, we have added the comparisons in the revised version.\\n\\n| Type | Method | Param. | Seg(mIoU\\u2191) | Canny(F1score\\u2191) | Hed(SSIM\\u2191) | Depth(RMSE\\u2193) |\\n|-----------|---------------|:---------:|:----------:|:--------------:|:----------:|:------------:|\\n| Diffusion | ControlNet | 1.2B | 32.55 | 34.65 | 76.21 | 35.90 | \\n| Diffusion | ControlNet++ | 1.2B | 43.64 | 37.04 | 80.97 | 28.32 | \\n| Diffusion | OmniGen | 3.8B | 44.23 | 35.54 | 82.37 | 28.54 |\\n| AR | Lumina-mGPT | 7B | 25.02 | 29.99 | 78.21 | 55.25 |\\n| AR | ControlAR | 0.8B | 39.95 | 37.08 | 85.63 | 29.01 | \\n\\n> **Q: Spatial conditions like segmentation maps and Canny edges impose strong constraints on structure diversity in generated outputs. Exploring whether some structural diversity can be incorporated within the conditional decoding step would be beneficial.**\\n\\n**A:** Thank you for your suggestion. This is a good idea! Given the diversity of structures generated, we sometimes do not want the spatial structure of the generated image to be identical to the input control. To achieve this, it is only necessary to skip the operation of fusing the control condition token with the image token with a probability of 50\\\\% when training ControlAR. \\nSuch an approach ensures ControlAR's generative capability in the absence of control image inputs. At the same time, multiplying the control condition token by a control strength factor $\\\\alpha$ during inference changes the degree of control of the generated result. When $\\\\alpha$ is 1, ControlAR will generate an image exclusively based on the control condition, while when $\\\\alpha$ is 0, the generated results will be related only to the text prompt. 
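To make this concrete, a minimal sketch of the per-token fusion with an adjustable strength factor could look as follows (an illustrative snippet rather than our exact implementation; the tensor names and the 0.5 drop probability are assumptions):

```python
import torch

def fuse_tokens(image_tokens: torch.Tensor, control_tokens: torch.Tensor,
                alpha: float = 1.0, drop_prob: float = 0.5,
                training: bool = False) -> torch.Tensor:
    # During training, skip the fusion with probability drop_prob so the model
    # also learns to generate from the text prompt alone.
    if training and torch.rand(()).item() < drop_prob:
        return image_tokens
    # At inference, alpha interpolates between prompt-only generation (alpha = 0)
    # and fully control-conditioned generation (alpha = 1).
    return image_tokens + alpha * control_tokens
```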
Another even simpler way is to freeze the generative network during training, and still adjust the controlled strength by a control strength factor $\\\\alpha$.\\nWe provide some examples by adjusting the strength factor in **Fig. 7 in the revised version**.\\n\\n> **Q: Need for discussion on representative failure cases. A discussion of representative failure cases among the generated results would provide valuable insights into the limitations of the proposed method and potential areas for improvement.**\\n\\n**A:** Thanks for your advice, we added a discussion on representative failure cases in the revised version. When there is a significant discrepancy between the text prompts and the spatial controls, ControlAR may produce some results that are not consistent with the text prompts. For more details, please refer to **Fig. 9 in the revised version**.\", \"references\": \"\\\\\\n[1] Xiao, Shitao, et al. \\\"Omnigen: Unified image generation.\\\" arXiv preprint arXiv:2409.11340 (2024).\\\\\\n[2] Liu, Dongyang, et al. \\\"Lumina-mGPT: Illuminate flexible photorealistic text-to-image generation with multimodal generative pretraining.\\\" arXiv preprint arXiv:2408.02657 (2024).\\\\\\n[3] Yu, Lili, et al. \\\"Scaling autoregressive multi-modal models: Pretraining and instruction tuning.\\\" arXiv preprint arXiv:2309.02591 2.3 (2023).\\\\\\n[4] Chern, Ethan, et al. \\\"Anole: An open, autoregressive, native large multimodal models for interleaved image-text generation.\\\" arXiv preprint arXiv:2407.06135 (2024).\"}", "{\"comment\": \"Dear Reviewer,\\n\\nHello! May we ask if our response has addressed your concerns? If you have any other questions, we would be more than happy to discuss them with you. We sincerely hope to resolve any doubts you may have and kindly ask you to reconsider our submission. Have a nice day!\\n\\nBest regards, \\nAuthors\"}", "{\"title\": \"Author Response for Reviewer 1dGR [1/3]\", \"comment\": \"Thank you very much for your suggestions! You have provided many valuable discussions and suggestions. We sincerely hope our response can help resolve your concerns. If you have any other questions or areas of uncertainty, we would be delighted to discuss them with you.\\n\\n\\n> **Q: Equation (4) is not written properly, here $q_i$ represents a discrete image token, then $q_i \\\\in [V]$, where $V$ is the vocabulary or codebook size. Then, the summation of a discrete token with another continuous feature $q_i+C_{i+1}$ in Equation (4) is not well defined. Besides, ControlAR adds up the control feature to the token feature in three intermediate layers, not the input embedding.**\\n\\n**A:** Thank you for pointing out the inappropriateness of the equation. We modify the formula to the following more intuitive form\\uff1a\\n\\n\\\\begin{equation}\\n S_{out}=\\\\mathcal{F}(S_{in} + \\\\mathcal{P}(C)) = \\\\mathcal{F}([c+C_1,I_1+C_2,I_2+C_3,...,I_{i-1}+C_i]),\\n\\\\end{equation}\\n\\nwhere $\\\\mathcal{F}$ represents a single sequence layer modeling process in the generative network, $\\\\mathcal{P}$ is the projection function, $S_{in}$ and $S_{out}$ are the input sequence and output sequence of each layer respectively, c is the class or text token, $I_i$ is the image token, and $C$ is the control condition sequence.\\n\\n> **Q: Throughout the paper and in Fig. 
2, the authors argue that compared to putting control tokens in the sequence (\\\"conditional prefilling\\\"), ControlAR benefits from a shorter sequence length and eliminates the need for the model to learn a fixed one-to-one spatial mapping. This makes sense that ControlAR is a cheaper way to train a image-controlled AR generation model. However, I have some concerns about this path: ControlAR trains each controlled generation task separately, rather than offering a general model.**\\n\\n**A:** Thank you for your comments and suggestions. We chose to train ControlAR separately for each control as ControlNet did in order to pursue better control generation, but this does not mean that our ControlAR can't do a general control. In order to prove this, we made additional experimental attempts, we used Dinov2-base as general control encoder to process multiple controls, including canny, hed, lineart and depth. We report evaluation results are shown in the following table. The last line marked with * is the evaluation results of the general model (one model to process different controls). The results show that even as a general model, our ControlAR is competitive to expert models for different controls.\\n\\n| Method | Canny(F1-Score\\u2191)| Hed(SSIM\\u2191)| Lineart(SSIM\\u2191)| Depth(RMSE\\u2193) |\\n|---------------|-----------------|-----------|---------------|---------------|\\n| ControlNet | 34.65 | 76.21 | 70.54 | 35.90 |\\n| ControlNet++ | 37.04 | 80.97 | 83.99 | 28.32 |\\n| ControlAR | 37.08 | 85.63 | 79.22 | 29.01 |\\n| ControlAR* | 37.42 | 85.09 | 78.79 | 30.88 |\"}", "{\"title\": \"Author Response for Reviewer 1dGR [2/3]\", \"comment\": \"> **Q: Besides, the predefined one-to-one spatial mapping of ControlAR seems to be restricted to specific tasks. In contrast, putting control tokens in the sequence has the potential to support general generation of texts and images, and will allow flexible and diverse control relationships between them. For example, one might require image-controlled generation based on style rather than spatially local controls like edges or segmap; and one might require referencing multiple control images to generate a single output. After all, if the goal is simply to achieve specific, local controls, well-established diffusion models and strategies are already very handy. One of the key motivations of the recent trend in exploring AR image generation models is to achieve a more general and flexible framework that can unify control and generation across modalities, right? I would like to know the authors' opinion on this.****\\n\\nOur ControlAR currently provides an efficient autoregressive model-based technical route for the controllable image generation of spatial structures.\\nFor controllable image generation based on spatial controls, we believe this one-to-one spatial mapping is a highly efficient and effective approach for autoregressive models. However, we also consider that the **conditional decoding** proposed in this paper is not limited to geometric control generation.\\nWe believe this control method can be extended to more general controllable generation. Specifically, our control encoder can serve as a universal control encoder, capable of handling geometric controls, content control, or even color and style control. \\nFirstly, we encode these diverse controls into a sequence of our expected length (which determines the resolution of the output image), *e.g.*, 512 tokens. 
\\nAnd then we use the control sequence with **conditional decoding** for the controllable image generation. Compared to prefilling decoding, this approach enables *efficient inference*, *strengthens control over generation*, and supports *arbitrary-resolution generation*.\\nFor example, this approach can be directly applied to style transfer, where the style image can be encoded into conditional control tokens.\\n\\n> **Q: About the ablation on control encoders: What are the messages? Is vanilla training better or self-supervised training better for control feature extraction? It's not intuitive that we need different ViT models to extract features from control images for class-to-image and text-to-image generation.**\\n\\n**A:** We compare the two models ViT-s[1] and DINOv2-s[2] in our ablation experiments on the control encoder. The experimental results show that ViT-s performs better in C2I and DINOv2-s is better for T2I. We believe that the reason for this phenomenon is the different pre-training data for the two models. ViT-s is obtained by pre-training on ImageNet and thus is more advantageous for C2I tasks that are also trained on ImageNet. DINOv2-s, on the other hand, is pre-trained on a larger and more diverse data such as LVD-142M, and thus will be more suitable for T2I tasks trained on MultiGen20M, which is also a diverse text-image paired dataset.\\n\\n> **Q: For C2I, the authors initialize the control encoder with VIT-S. The original position encoding is global trainable and of fixed size, and is thus not suitable for multi-resolution. How do the authors handle this?**\\n\\n**A:** When using ViT-s, we interpolate the original positional embeddings based on the input size to adapt to different image resolutions. \\n\\n> **Q: Section 4.2 lacks detailed information on multi-resolution training. It would be helpful if the authors provided more details, such as the size of the multi-resolution dataset used for training, the design of the architecture (e.g., positional encoding) for multi-resolution adaptation, and so on.**\\n\\n**A:** In multi-resolution training we first set the maximum sequence length to 2304, supporting a batch size of 2 per A100 GPU under this limit. The control image is downsampled 16 times to obtain the control sequence. For example, when the resolution of the control image is 768\\u00d7768 or 1024\\u00d7576, then (768//16)\\u00d7(768//16)=(1024//16)\\u00d7(576//16)=2304.\\nDuring the training process we randomly sample the height and width of the training data from 384 to 1024 with a minimum interval of 16, and the image can be resized when it satisfies (H//16)\\u00d7(W//16)$\\\\leq$2304. In addition, we need to adjust the parameter settings of the rotational position encoding in the generative network by simply increasing its maximum sequence length to 2304.\\nWe have added the details in the revised version.\"}", "{\"comment\": \"I do not buy the explanation about the novelty of using dinov2, and the performance improvement. Therefore, I will keep my score.\"}", "{\"summary\": \"This paper introduces ControlAR, a framework that enables autoregressive (AR) models to generate high-quality images with precise spatial controls.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea is simple but effective: a lightweight control encoder that transforms spatial inputs (edges, depth maps, etc.) 
into control tokens, and conditional decoding method that fuses control tokens with image tokens during generation.\\n\\n2. This work shows strong results across multiple tasks (edges, depth maps, segmentation).\", \"weaknesses\": \"1. The paper's efficiency comparison between ControlAR and ControlNet++ (22M vs 361M parameters) is misleading. Comparing parameter counts between diffusion-based and AR-based models is fundamentally unfair due to their different architectures and generation mechanisms. The paper should instead compare inference time, FLOPs, or other more relevant efficiency metrics between similar architectures.\\n\\n\\n2. The use of pretrained visual encoders (CLIP, DINOv2) is a standard practice in multimodal learning\\n\\n3. The quantitative improvements shown in Tables 1 and 2 are marginal.\", \"questions\": \"see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer GBJo,\\n\\nThank you very much for your reply! \\n\\n**Conditional consistency** is very important among the evaluation criteria for controllable generation of spatial structures, and existing methods based on diffusion models tend to enforce the generated images to have spatial structures that are as similar as possible to the input conditional images. \\n\\nI believe the **structural diversity** you mentioned is a thought-provoking issue and a valuable direction for future exploration. However, we'd like to clarify that control consistency and geometric structural diversity are inherently contradictory. If control consistency is high, the structure will inevitably lack diversity; conversely, if structural diversity is high, control consistency will significantly decrease.\\nAs a result, controllable image generation methods generally struggle to achieve geometric diversity, which we believe is a **common limitation across many approaches, such as ControlNet++ / ControlNet (see Fig. 9), which focus on improving consistency with spatial controls**. However, inspired by your insights, our ControlAR provides a simple yet effective way to mitigate this issue.\\nBy adjusting the control strength factor properly, the generated image can take into account the spatial structure and text prompt.\\n\\nThe issue you raised is indeed very meaningful for the future of controllable image generation. Perhaps we should consider building a more multidimensional and comprehensive evaluation framework for controllable image generation, which not only takes into account the **consistency of the controllable generation** but also evaluates whether it can produce **more diverse structures based on spatial controls**. I believe this is an excellent question to explore.\\n\\nBest regards,\\\\\\nAuthors\"}", "{\"comment\": \"Thank you for the clarifications. These are good answers. I think this paper is very good, I will keep my score.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for the detailed response. I particularly like the newly added experiment of handling multiple control signals with one model and the any-resolution generation experiment in Fig. 8. Although I might still have some concerns about my previous question on that *the predefined one-to-one spatial mapping of ControlAR seems to be restricted to specific tasks*, I think the current contributions have already earned this paper an accept. 
I'll increase my score.\", \"here_is_my_remained_concern\": \"> Firstly, we encode these diverse controls into a sequence of our expected length (which determines the resolution of the output image), e.g., 512 tokens. And then we use the control sequence with conditional decoding for the controllable image generation. \\n\\nAs far as I understand, your response said that if we want another flexible control, e.g., specified by a prompt, we can use another encoder to encode this control into a sequence of control tokens that is of the sequence length of the image. After this, ControlAR can be applied. If we would like a flexible control, I think this additional encoder needs to be strong. If so, the argued efficiency benefits of ControlAR over a unified method that directly models the joint distribution of interleaved text and images will be reduced.\"}", "{\"title\": \"Author Response for Reviewer Scc6\", \"comment\": \"Thank you very much for your suggestions! We sincerely hope our response can help address your concerns. If you have any other questions, we would be more than happy to respond!\\n\\n> **The paper's efficiency comparison between ControlAR and ControlNet++ (22M vs 361M parameters) is misleading. Comparing parameter counts between diffusion-based and AR-based models is fundamentally unfair due to their different architectures and generation mechanisms. The paper should instead compare inference time, FLOPs, or other more relevant efficiency metrics between similar architectures.**\\n\\n**A:** \\nThank you for your questions and suggestions.\\nWe compare the number of additional parameters in the paper to show that our approach does not require a large control encoder to achieve good results, even though the two generative networks are not quite the same in terms of structure and generation mode. \\nAdditionally, we provide comparisons of the computation budget (MACs, Multiply\\u2013Accumulate Operations) in the table below, where both SD1.5 and ControlNet++ set the denoising step to 20. 
It is clear from the statistical results that the increase in computation of our method is negligible.\\n\\n|method | MACs |\\n|---------------|----------------|\\n|SD1.5 | 0.34\\u00d720=6.8T |\\n|ControlNet++ | 0.46\\u00d720=9.2T |\\n|LlamaGen-XL | 1.5T | \\n|ControlAR | 1.5+0.05=1.55T|\\n\\nSince the control encoder is relatively lightweight, the computational and time overhead introduced by our method is significantly smaller compared to LlamaGen, especially in the generation of high-resolution images, where the additional time cost of ControlAR is negligible.\\nThese autoregressive image generation models adopt next-token prediction, *e.g.*, LlamaGen directly employs the Llama architecture. However, these methods have not yet fully exploited the efficient inference capabilities of autoregressive models. Some effective acceleration techniques, such as FlashAttention, PagedAttention, and vLLM, are already available for AR models like GPT and Llama3. Therefore, we believe that autoregressive image generation can achieve significant inference speedups. We are also actively exploring commonly used tools, such as vLLM, to enhance the inference speed of ControlAR.\\n\\n\\n> **Q: The use of pretrained visual encoders (CLIP, DINOv2) is a standard practice in multimodal learning**\\n\\n**A:** Though CLIP ViT has been widely used for large multi-modal models as the vision encoder, \\nour ControlAR introduces DINOv2 as a control encoder for the first time in a controllable image generation task.\\nPrevious works on controllable generation tend adopt CNNs to encode control images.\\nFor example, diffusion-based controllable generation task, ControlNet uses half of its own U-net network as the control encoder, and T2I-Adapter designs a simple CNN network as the control encoder. Although these methods have achieved good results in diffusion models, they are not suitable for autoregressive models. \\nIn this work, we explore the impact of different pre-trained ViTs on controllable image generation, especially for autoregressive models. This is an interesting direction, as different pretraining methods indeed exhibit varying performance.\\nSpecifically, we compare the effectiveness of ViTs with different pre-training approaches on different data and different tasks, and demonstrate that DINOv2 has better results than other ViTs or CNNs for controllable autoregressive image generation.\\n\\n> **Q: The quantitative improvements shown in Tables 1 and 2 are marginal.**\\n\\nThank you for expressing your opinion on the results of our experiments, but we respectively disagree with it.\\nIn Tab. 1, we show the quantitative metrics for class-to-image controllable generation of ControlAR based on AiM and LlamaGen. Compared with ControlVAR, our method uses LlamaGen-L with only 343M parameters to achieve lower FID values than VAR-30d with 2B parameters. And when the number of generating model parameters is close, our method has a very clear advantage. \\nIn Tab. 2, we show the conditional consistency metrics under different control conditions. Our approach has clear advantages compared to ControlNet for different tasks. And compared to ControlNet++, which continues to make targeted fine-tuning of conditional consistency on top of ControlNet, our approach is also very competitive.\"}", "{\"summary\": \"This paper introduces ControlAR, a method to efficiently enable controllability in autoregressive (AR) image generation models. 
ControlAR proposes a control encoder that transforms spatial control inputs (e.g., edges, depth maps, segmentation maps) into sequential control tokens, which are leveraged during conditional decoding to enable precise control over the generated images.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method enables fine-grained control in autoregressive image generation by using a control encoder and conditional decoding, achieving high image quality with low additional training cost.\\n\\n2. This method provides effective resolution control, allowing AR models to overcome the limitations of fixed-resolution generation.\", \"weaknesses\": \"1. Performance comparisons with recent models such as Lumina-mGPT and Cm3leon (or Anole), such as in segmentation-to-image tasks, would strengthen this paper. Additionally, an analysis or discussion on the potential for integration with these models would be beneficial.\\n\\n2. Spatial conditions like segmentation maps and Canny edges impose strong constraints on structure diversity in generated outputs. Exploring whether some structural diversity can be incorporated within the conditional decoding step would be beneficial.\\n\\n3. Need for discussion on representative failure cases. A discussion of representative failure cases among the generated results would provide valuable insights into the limitations of the proposed method and potential areas for improvement.\", \"questions\": \"Please see the Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response for Reviewer uMML\", \"comment\": \"We sincerely appreciate your recognition of our work and genuinely hope that our response addresses your concerns. If you have any further questions, please feel free to let us know!\\n\\n> **Q: Which layers are ideal to introduce the new control layers on? Right now we have a coarse study of this but it could go deeper, although it's a lot of work that might not be super useful in the end.**\\n\\n**A:** It's an interesting question! Our ControlAR replace layers **1-th**, **13-th**, and **25-th** of LlamaGen-XL's 36-layer Transformer with the proposed *conditional sequence layer* for adding controls.\\nWe further analyzethe impact of adding control at different layers based on the **depth-to-image** generation.\\n\\n|Fusion layer | RMSE\\u2193 | FID\\u2193 |\\n|--------------|---------|--------|\\n|1,13,25 | 29.01 | 14.61 |\\n|13,25 | 30.82 | 16.17 |\\n|1,25 | 36.75 | 19.44 | \\n|1,13 | 35.74 | 17.21 |\\n\\nIt shows that suppressing the conditional fusion of the middle layer (13-th layer) has the greatest impact on the generated results. It is sincerely hoped that this result will be of some help to you in your research.\\n\\n> **Q: Some output images shown in the paper show some color saturation or excess contrast - is this an effect of the control layers or just the base model? Is training the control layers biasing the model towards some unrealistic outputs?**\\n\\n**A:** Thank you for your suggestion! We're inclined to present a visualization that is more visually appealing, so we may have inadvertently selected images with higher contrast. However, our results perform well on the FID metric, which means that our generated images are closer to the real data compared to other methods.\"}", "{\"comment\": \"Thank you for the additional results. 
However, it appears that ControlAR still demonstrates limited ability to effectively control structural diversity, and the accompanying analysis remains insufficient.\\n\\nNonetheless, considering that this work represents an early attempt to integrate spatial controls into AR models, I find the reasons to accept outweigh the reasons to reject and will maintain my score.\"}", "{\"title\": \"Response to Reviewer GBJo about structural diversity\", \"comment\": \"Dear Reviewer,\\n\\nThank you very much for your response!\\n\\nYou have raised a very meaningful question. In response, we have further **revised the submission, adding additional results and comparisons**. We hope that our reply can address your concerns.\\n\\nIn the examples previously shown in Fig. 7, we can observe that as the alpha value changes, the influence of canny control on the generated images gradually decreases. To further demonstrate the role of $\\\\alpha$ in geometric control, we have added more visualization results of control coefficients to Fig. 7 (**the updated revision**), including Canny edges, HED edges, and LineART controls. Fig. 7 illustrates that as the control $\\\\alpha$ decreases, the differences between the generated results and the spatial controls become increasingly significant. Additionally, this results in different image layouts and the diversity of geometric structures is also improved. \\nTherefore, we believe that ControlAR with $\\\\alpha$ allows for **generating images that are both aligned with spatial controls and exhibit structural diversity**.\\n\\nRegarding the second issue, we must admit that **all current control-to-image models face this challenge: the conflict between text prompts and geometric controls**. This issue is prevalent in control-to-image models such as *ControlNet* and *ControlNet++* (as shown in Fig. 9). These well-established diffusion models struggle to balance the text prompts and spatial controls. However, we believe that these control-to-image models are currently focused on generating results that align with spatial controls. In fact, ControlNet++ introduces additional supervision to promote alignment between the generated image and spatial controls, which weakens the influence of the text prompt. Therefore, in the context of controllable generation, these control-to-image models, including our proposed ControlAR, will all try their best to generate results that align with the spatial controls.\\n\\nHowever, in ControlAR, we explore a dynamic way of adjusting spatial control, which allows ControlAR to reduce its adherence to spatial controls and generate results with more structural diversity, as shown in Fig. 7. Similarly when facing the conflict between text prompts and spatial controls, ControlAR can mitigate these conflicts by adjusting the coefficient $\\\\alpha$ of the spatial controls, enabling the generated results to balance both the text and the control. As shown in Fig. 9, when the coefficient $\\\\alpha$ is set to 0.4, ControlAR can generate elements such as \\\"candles\\\" and \\\"cake,\\\" which appear in the text prompt.\\n\\nEven though current mainstream control-to-image methods encounter similar issues, the ControlAR we propose shows great potential in handling these conflicts effectively.\\n\\n\\nWe hope that our response and the updated revision can address your concerns. If you have any further questions, we would be more than happy to discuss them with you. 
If our response resolves your concerns, we would also appreciate the possibility of an increased score.\\n\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"comment\": \"Dear Reviewer uMML,\\n\\nThank you very much for recognizing our work! Your suggestions are also extremely valuable! Wishing you a pleasant day!\\n\\nSincerely,\\\\\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you very much for your response. I'd like to further elaborate on the two issues you mentioned.\\n\\n* **Novelty of using DINOv2:** \\n\\nFirstly, we'd like to clarify that our primary contribution lies in ControlAR, a simple yet effective method for AR-based controllable generation. Moreover, ControlAR is the first controllable generation method based on next-token prediction.\\nCompared to traditional diffusion-based controllable generation methods, the AR-based model relies on token-level control encoding. Therefore, we further explore the control encoding method using ViT architectures in this work and evaluate the impact of different pre-training strategies.\\n\\nUsing DINOv2 to encode spatial controls in ControlAR is a novel way compared to previous approaches, such as ControlNet or ControlNet++. Previous control-to-image generation methods rarely explore how to encode spatial controls and overlook existing *off-the-shelf* pre-trained models for controllable generation. \\nCompared to other pre-trained ViTs, such as ImageNet, DINOv2 provides superior geometric control encoding. Additionally, due to its pre-training on large-scale data, it demonstrates greater robustness on general datasets.\\nIn ControlAR, we have highlighted these phenomena and provided substantial evidence. We believe this will offer valuable references and inspiration for future controllable generation models and training strategies.\\n\\nIt is worth emphasizing that *using DINOv2 is not our sole contribution*. Our **primary contribution** lies in ControlAR: an autoregressive controllable generation method. Within this framework, (1) we proposed the DINOv2-based control encoding approach, (2) the conditional decoding method for controllable generation, and the capability for (3) arbitrary-resolution controllable generation.\\n\\n* **Performance improvement:**\\n\\nIn Tab. 1, we specifically compared the ControlAR method with ControlVAR (a concurrent controllable generation work based on VAR). The results are highly significant:\\n* (1) With the same number of parameters (e.g., ~300M), ControlAR achieves a Depth FID of 4.19, while ControlVAR scores 13.8, showing a substantial improvement over ControlVAR. \\n* (2) Furthermore, the results from ControlAR (300M parameters) outperform ControlVAR (2B parameters) in terms of FID. This directly demonstrates the superiority of our method\\u2014fewer parameters and better performance. Moreover, LLamaGen-L (343M parameters) obtains 3.80 FID on ImageNet while VAR-d30 (2.0B parameters) obtains 1.92 FID on ImageNet, indicating ControlVAR has better generation models while the performance is inferior to our ControlAR. \\nWe do not consider the results marginal; on the contrary, it is a highly significant improvement!\\n\\nAs for Tab. 2, our ControlAR achieves state-of-the-art results across multiple tasks and significantly outperforms ControlNet overall. These results are not marginal improvements. 
Notably, ControlNet++ fine-tunes the pre-trained model of ControlNet and employs an additional reward model for supervision, ensuring the generated results are highly consistent with the geometric controls. In contrast, our proposed ControlAR, using a **simple from-scratch** training approach, achieves competitive results with ControlNet++ and outperforms it in some cases. Moreover, it is important to note that the performance of the underlying generative models, such as SD1.5 or LlamaGen, significantly influences the performance of ControlNet and ControlAR models.\"}", "{\"metareview\": \"This paper introduces ControlAR, a framework to integrate spatial controls into autoregressive image generation models. It enables AR-based controllable image generation by introducing a lightweight control encoder and a conditional decoding strategy. This approach generates each token by fusing control tokens with image tokens, enhancing both efficiency and controllability. The framework supports multiple control modalities and enables arbitrary-resolution image generation. Experimental results demonstrate strong performance across a range of tasks, including edge, depth, and segmentation-based generation, competing with state-of-the-art methods such as ControlNet++.\\n\\nOverall, this paper makes a significant contribution to the field of controllable image generation. While some limitations remain, the novelty of the approach and the robustness of the experimental results clearly outweigh these concerns. The majority of the reviewers also holds positive feedback, and thus I recommend accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised a few questions regarding structural diversity, efficiency metrics, novelty, and other aspects. However, these questions do not represent fundamental flaws in the paper, and the authors have adequately addressed most of the concerns. The significance of this work, as one of the first attempts to enable controllability in autoregressive visual generative models, outweighs these issues. Therefore, I recommend accepting the paper.\"}" ] }
BWYR9rfGOU
SATE: A Two-Stage Approach for Performance Prediction in Subpopulation Shift Scenarios
[ "Dongbai Li", "Huan Zhang" ]
Subpopulation shift refers to the difference in the distribution of subgroups between training and test datasets. When an underrepresented group becomes predominant during testing, it can lead to significant performance degradation, making performance prediction prior to deployment particularly important. Existing performance prediction methods often fail to address this type of shift effectively due to their reliance on unreliable model confidence and mis-specified distributional distances. In this paper, we propose a novel performance prediction method specifically designed to tackle subpopulation shifts, called Subpopulation-Aware Two-stage Estimator (SATE). Our approach first estimates the subgroup proportions in the test set by linearly expressing the test embedding with the training subgroup embeddings. Then, it predicts the accuracy for each subgroup using the accuracy on the augmented training set and aggregates these subgroup accuracies into an overall performance estimate. We provide theoretical proof of our method's unbiasedness and consistency, and demonstrate that it outperforms numerous baselines across various datasets, including vision, medical, and language tasks, offering a reliable tool for performance prediction in scenarios involving subpopulation shifts.
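[Editor's note] The abstract above describes SATE's two stages in prose; the snippet below is a minimal editorial sketch of those stages, not the authors' released implementation. The use of ordinary least squares with clipping, and all function and variable names, are assumptions made for illustration; the rebuttal further down only states that the weights are normalized after solving the linear system, and that per-subgroup accuracies are measured on an augmented training set or a validation split.

```python
import numpy as np

def estimate_subgroup_proportions(test_embeddings, subgroup_embeddings):
    """Stage 1 (sketch): express the mean test embedding as a linear
    combination of the mean training-subgroup embeddings and normalize
    the resulting weights into a proportion vector."""
    mu_test = np.asarray(test_embeddings).mean(axis=0)                                 # shape (d,)
    A = np.stack([np.asarray(e).mean(axis=0) for e in subgroup_embeddings], axis=1)    # shape (d, k)
    w, *_ = np.linalg.lstsq(A, mu_test, rcond=None)                                    # least-squares solve
    w = np.clip(w, 0.0, None)                                                          # assumption: drop tiny negative weights
    return w / max(w.sum(), 1e-12)

def predict_overall_accuracy(subgroup_accuracies, proportions):
    """Stage 2 (sketch): aggregate per-subgroup accuracy estimates
    (e.g., measured on an augmented training set or a validation split)
    using the estimated test-set proportions."""
    return float(np.dot(proportions, subgroup_accuracies))
```

In use, the first function would take the unlabeled test embeddings and one embedding array per training subgroup, and its output proportions would then weight the measured subgroup accuracies in the second function.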
[ "Performance Prediction", "Subpopulation Shift", "Unsupervised Accuracy Estimation" ]
Reject
https://openreview.net/pdf?id=BWYR9rfGOU
https://openreview.net/forum?id=BWYR9rfGOU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vlTDjE6NFS", "vktTXjNY7P", "pd8cTnsbaN", "gMtzQRYjJL", "e4mHrhx6bm", "ZpNNNoh039", "Tb1K0uIYpf", "TK89KoQJtX", "RrHLAsRoM3", "Potn3gmpdZ", "F7nTMmOJrP", "9KoOz2yONE", "8PxV256AxI", "6mvpZf0fdJ", "6RmlOa4IdM", "6QU1QGYKwd", "5g9hEK7H3G", "26kjVtxI20" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732835292822, 1732532872771, 1729050538832, 1733129933893, 1733129752151, 1732533679531, 1730787105955, 1730721360775, 1732532646684, 1730706521190, 1734768983024, 1732533099047, 1732857267009, 1732533281189, 1733129696545, 1732612099048, 1732533354451, 1737523575750 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_9P9W" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_Vhym" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_uHAx" ], [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_9P9W" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_qi9w" ], [ "ICLR.cc/2025/Conference/Submission3435/Area_Chair_myBC" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_Vhym" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Submission3435/Reviewer_uHAx" ], [ "ICLR.cc/2025/Conference/Submission3435/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the feedback, which answers my questions. I updated my score.\"}", "{\"title\": \"Response to Reviewer uHAx (part 2/2)\", \"comment\": \"> **W4**: Should evaluate on domain generalization benchmarks.\\n\\n**R4**: Here we develop a lightweight method to **detect unseen subgroups** after the first step of SATE. We use Mean Square Error (MSE) of the linear decomposition as the indicator of the existence of unseen subgroups. Larger MSE indicates higher probability that test set contains unseen subgroup.\\n\\nWe conducted experiments on the NICO++ [1], a commonly used domain generalization benchmark, to evaluate our detection method. The experimental setup and findings are as follows:\\n\\n- Benchmark Setup: We utilized the NICO++ dataset, focusing on $y \\\\in \\\\{0, 1, 2, 3, 4, 5\\\\}$ and $a \\\\in \\\\{0, 1, 2, 3, 4, 5\\\\}$, resulting in 36 subgroups in total. The training data followed the original split, where subgroup (5,4) was absent. 
While all 36 subgroups were present in the original test split.\\n- Test Sets: To simulate various conditions, we created 50 test sets, each comprising $k$ randomly selected subgroups from the original test set.\\n- Evaluation: We evaluate the effectiveness of detection by the Area Under the Curve (AUC) between the existence of unseen subgroup and the MSE of linear decomposition.\\n\\n|k|5|10|20|\\n|-|-|-|-|\\n|AUC| 0.950| 0.895| 0.869|\\n\\nOur results demonstrate that while using linear decomposition to estimate subgroup proportions, MSE is a reliable metric for detecting unseen domains. It consistently performs well when the number of subgroups in the test set becomes large ($k=10, 20$), further extending the applicability of our method to domain generalization scenarios.\\n\\n> **Q1**: How is it enforced that $w$ should sum to 1?\\n\\n**RQ1**: Based on our assumptions, the weights $w$ theoretically sum to 1 without external enforcement. In practice, we normalize $w$ after solving the linear equation.\\n\\n> **Q2**: In figure 2, has the model been trained with the same data augmentations?\\n\\n**RQ2**: The models were trained without data augmentation. This decision aligns with the standard practices followed in SubpopBench [2], where no data augmentation is applied during training.\\n\\n[1] https://arxiv.org/abs/2204.08040\\n\\n[2] https://arxiv.org/abs/2302.12254\"}", "{\"summary\": \"The paper addresses how to predict the performance of an unlabeled test set in the presence of subpopulation shifts between the training and test sets. The authors propose a two-stage method. First, they estimate the proportions of different subpopulations in the test set by leveraging the average feature representation of all test samples and comparing it with the prototype features of each subpopulation in the training set. Next, they evaluate the performance of each subpopulation individually using a data-augmented version of the training set. Finally, the predicted overall test set performance is obtained by computing the weighted average of the subpopulation performances. The authors validate this approach with experiments on image and NLP datasets.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The study of performance prediction methods robust to distribution shifts is practical and meaningful.\\n2. The method proposed by the paper is straightforward and reasonable.\\n3. The authors provide the source code, which is highly commendable.\", \"weaknesses\": \"1. The writing of the paper should be improved, as the flow of logic is unclear in several parts. For example, the logic between the first four paragraphs of the introduction is confusing, and the same lack of clarity is present in the four paragraphs of section 4.1.\\n2. If I understand correctly, the terms subpopulation, subgroup, group, and subset in the paper are used interchangeably to convey the same meaning. This inconsistent terminology further increases confusion for the readers.\\n3. The theoretical part of the paper is trivial, lacking valuable insights in both the proof process and the results presented. I suggest that this part should not occupy such a significant portion of the manuscript and could potentially be removed from the main text altogether.\\n4. I have some concerns about the effectiveness of using a data-augmented training set. Modern image classification models typically employ a wide range of data augmentation techniques to enhance model performance. 
Therefore, the model should also perform well on augmented training images, especially given the simple geometric transformations like Crop, Flip, and RandomRotation used in the paper. I briefly reviewed the source code provided by the authors, and if I understand correctly, these augmentation techniques do not seem to be incorporated into the training process. This implies an assumption that appears to be rather unrealistic.\\n5. The baseline methods mentioned in Section 2, such as Distribution Discrepancy-based and Model Agreement-based approaches, do not appear to be compared in the experiments.\\n6. The authors emphasize spurious correlation in the motivation section, which raises a question for me: is the method aimed at addressing all types of subpopulation shifts, or is it specifically targeting spurious correlations? Based on my understanding, the former is correct. Therefore, what is the purpose of highlighting spurious correlation in this context?\\n\\nBased on my current assessment, this paper is not sufficient for publication at ICLR. I will adjust my score accordingly based on the authors\\u2019 clarifications and modifications during the rebuttal phase.\", \"questions\": \"My questions that need clarification are included in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for taking the time to review our response. We are happy to discuss any concerns that have not yet been addressed.\"}", "{\"comment\": \"Thank you very much for reconsidering and increasing your score. Your support has been very helpful to us, and we would be happy to engage in further discussion if needed.\"}", "{\"title\": \"Response to Reviewer Vhym\", \"comment\": \"Dear Reviewer Vhym,\\n\\nWe thank you for your valuable feedback and constructive suggestions. Below, we address each of your comments in detail: \\n\\n> **W1.1**: The first four paragraphs of the introduction are confusing.\\n\\n**R1.1**: We would like to clarify the logic of the first four paragraphs of the introduction:\\n\\n- Paragraph 1: Highlights the importance of performance prediction.\\n- Paragraph 2: Defines what performance prediction means.\\n- Paragraph 3: Discusses the focus of previous performance prediction methods.\\n- Paragraph 4: Identifies subpopulation shift as a research gap in this field.\\n\\nWe have made some edits to the paper to make the structure more clear.\\n\\n> **W1.2**: Section 4.1 is confusing.\\n\\n**R1.2**: We acknowledge that Section 4.1 includes various content, and we could have used clearer logical connections to make the flow more explicit. We have revised the paper to improve clarity and better guide readers through the section. Briefly, the updated structure is as follows:\\n\\n- The first two paragraphs explain why current methods (confidence-based and distance-based) fail in subpopulation shift scenarios.\\n- The third paragraph introduces the origin of our linear decomposition idea.\\n- The final paragraph highlights the benefits of a method that can flexibly incorporate the validation set.\\n\\nTogether, these elements establish the motivation for inventing SATE. \\n\\n> **W2**: Inconsistent terminology.\\n\\n**R2**: While some terms such as \\\"subset,\\\" \\\"subpopulation,\\\" \\\"subgroup,\\\" and \\\"group\\\" convey similar meanings, we believe their usage does not cause confusion. 
To clarify:\\n- The term \\u201csubset\\u201d is used **exclusively in the notation section** for mathematical clarity.\\n- The term \\u201csubpopulation\\u201d is **always paired with the word \\\"shift\\\"** to align with established terminology in the field (e.g., \\\"subpopulation shift\\\").\\n- We have **updated the paper and replace the word \\\"group\\\" with \\\"subgroup\\\"** to make terminology more clear.\\n\\n> **W3**: Theoretical part should be removed from the main text.\\n\\n**R3**: We agree with this suggestion. In the revised version, we will move the detailed proof to the appendix and only keep the assumptions and propositions in the main text.\\n\\n> **W4**: Data augmentation is not incorporated into the training process.\\n\\n**R4**: You are correct that data augmentation is not incorporated into the training process. Our implementation is built on SubpopBench, a widely used benchmark in the field, which does not include data augmentation in its codebase [1]. Similarly, ATC, one of the baselines in our experiments, also excludes data augmentation for datasets such as ImageNet, ImageNet-200, and the language tasks in their source code [2]. To ensure a fair comparison with prior work, we follow the same settings by excluding data augmentation during training.\\n\\nAdditionally, incorporating data augmentation during training would not necessarily harm the performance of our method. Specifically:\\n\\n1. When validation data is available, we do not rely on the accuracy of the augmented training set, as our approach leverages validation performance.\\n\\n2. When validation data is unavailable, we could use a data augmentation method different from the one applied during training to conduct our approach.\\n \\nDue to time constraints, we will include additional experimental results on this topic in the final version of the paper.\\n\\n> **W5**: Some baselines are not compared in the experiments.\\n\\n**R5**: We agree that including additional baselines could strengthen the evaluation. However, we did not test distribution discrepancy-based and model agreement-based methods for the following reasons:\\n\\n- **Distribution Discrepancy-based Methods:** These methods rely on hidden features\\u2026 are unsuitable for the Model Comparison task because they cannot distinguish between models that share the same featurizer (e.g. ERM and DFR). Moreover, algorithms such as GroupDRO, which optimize for the worst-group performance, make the model fit into a distribution that differs from the original training set distribution, making distributional distance measures misleading.\\n- **Model Agreement-based Methods:** These methods require retraining the model multiple times, which introduces significant computational cost. Also, they assume full access to the model's architecture and training details, making them impractical for many real-world scenarios.\\n \\n> **W6**: Why mention spurious correlation?\\n\\n**R6**: Our method is designed to address all types of subpopulation shifts. Spurious correlation is mentioned specifically because it provides an intuitive example of how subpopulation shifts can cause confidence-based performance prediction methods to fail. This serves as a rationale for not relying on confidence as a predictor in our approach.\\n\\n[1] https://github.com/YyzHarry/SubpopBench\\n\\n[2] https://github.com/saurabhgarg1996/ATC_code\"}", "{\"summary\": \"The authors tackle the problem of estimating model performance under subpopulation shift. 
They propose SATE, which estimates test-set group proportions by representing the mean test-set embedding as a convex combination of mean training subgroup embeddings. The test-set accuracy is then a convex combination of the per-group model accuracies. The authors evaluate their method on typical subpopulation shift datasets, finding that they outperform the baselines.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The method is intuitive and easy to understand.\", \"The authors evaluate their method on the common subpopulation shift benchmarks.\"], \"weaknesses\": \"1. My main concern is regarding the significance of the method. To me, the problem of estimating model performance under subpopulation shift is largely trivial, as it is just a matter of estimating group proportions on the test set. If group labels are provided in the training domain as the authors assume, it is even simpler, and also a much more restrictive problem setup, which limits the applicability of the method. Given that the method is only theoretically bounded when subpopulation shift is the only shift that occurs (Assumption 1), and does not take e.g. the variation of sample difficulty within each subpopulation into account, I am not convinced that this method is useful.\\n\\n2. It is not surprising that the proposed method outperforms other performance prediction methods (Figure 4), as these baselines are not specific to subpopulation shift, and do not even utilize the training set attributes. There are several other intuitive baselines that the authors could consider, e.g. learning per-group clusters on the training set, learning a debiased group predictor on the training set, or directly learning a model to predict the errors of the original model.\\n\\n3. The authors should also show the predicted group proportions versus the actual proportions in the appendices.\\n\\n4. To improve the significance of the work, the authors should consider evaluating their method on domain generalization benchmarks such as DomainBed [1] or WILDS [2].\\n\\n[1] https://arxiv.org/abs/2007.01434\\n\\n[2] https://arxiv.org/abs/2012.07421\", \"questions\": \"1. When computing the test-set group proportion $w$ in Algorithm 1 Step 10, how is it enforced that $w$ should sum to 1?\\n\\n2. In the result showing augmentations on the y=x line (Figure 2), has the model been trained with the same data augmentations? It seems like this would be an important factor.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces SATE (Subpopulation-Aware Two-stage Estimator), a novel method for predicting model performance under subpopulation shift scenarios, where the distribution of subgroups differs between training and test datasets. SATE's two-stage approach first estimates subgroup proportions in the test set by expressing test embeddings as a linear combination of training subgroup embeddings, then predicts accuracy for each subgroup using augmented training data to produce an overall performance estimate. Experiments show improvement when compared SATE with baselines such as ATC-MC and DoC.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Novel contribution: First performance prediction method specifically designed for subpopulation shift scenarios and first to address unsupervised performance prediction in NLP tasks.\\n\\n2. 
Theoretical foundations: Authors provide proofs of unbiasedness and consistency under certain conditions.\\n\\n3. Empirical evaluation: Experiments across multiple domains (vision, medical, NLP) and demonstrates superior performance compared to baselines.\", \"weaknesses\": \"1. Knowledge of group annotations: the method requires attribute annotations for the training data, which may not always be available or could be costly to obtain.\\n\\n2. Scalability: The method may struggle with scalability when dealing with a large number of subgroups.\\n\\n3. Linear decomposition: the method relies on linear decomposition assumption for test set embeddings, which might not always hold.\\n\\n4. Discussions of limitations: there is no clear discussion of failure modes or performance under noisy/incomplete attribute annotations.\", \"questions\": \"1. How sensitive is the method to violations of the linear decomposition assumption for test set embeddings?\\n\\n2. What are the specific conditions required for the theoretical guarantees to hold?\\n\\n3. What is the memory requirement for storing subgroup embeddings?\\n\\n4. How robust is the linear equation-solving step when subgroup embeddings are nearly collinear? What happens when some subgroups have very few training samples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer uHAx (part 1/2)\", \"comment\": \"Dear Reviewer uHAx,\\n\\nWe thank you for your valuable feedback and constructive suggestions. Below, we address each of your comments in detail:\\n\\n> **W1.1**: Problem setting is trivial.\\n\\n**R1.1**: We agree that our method is simple, but we want to highlight our contributions again. Our work is the first to introduce the idea of group proportion estimation to the context of performance prediction. Beyond this, we contribute a novel empirical finding (in Section 4.3), which we leverage to estimate group-wise accuracy effectively. These contributions collectively provide a **new perspective** on performance prediction.\\n\\n> **W1.2**: Problem setup is more restrictive because we need group annotations. \\n\\n**R1.2**: We acknowledge this limitation. **We have updated the paper and mention this limitation in the Limitations Section.** However, in practical scenarios, group annotations can often be feasible in certain contexts, such as when datasets are curated with domain knowledge. Additionally, group labels can be a feature from $X$, which users may identify as being responsible for subpopulation shifts. \\n\\n> **W1.3**: Assume subpopulation shift is the only shift that occurs.\\n\\n**R1.3**: Our method is primarily designed for subpopulation shifts, but it is also **robust to moderate covariate shifts in practice**, as demonstrated in Table 1 in our paper. And we have already discussed this limitation in the Limitations Section.\\n\\n> **W2.1**: It is not surprising that the proposed method outperforms other performance prediction methods since they do not utilize the training attributes.\\n\\n**R2.1**: We agree with you that our method utilizes more information, but not using attributes is not an excuse for current baselines to perform poorly in subpopulation shift scenarios, as shown in our experiments. 
**Our work highlights this gap in the field and proposes a simple yet effective approach to address it.**\\n\\n> **W2.2**: Intuitive Baselines should be compared.\\n\\n**R2.2**: We agree that some of these intuitive ideas are reasonable.\\n1. **Learning a debiased group predictor.** This can only serve as an\\n**alternative for the first step** of our approach (proportion estimation) rather than a baseline of SATE. \\nWe compared our linear decomposition (LD) method with the debiased group predictor (GP) using both Wasserstein distance and cross-entropy between the ground truth and estimated subgroup distribution. \\n\\n |Wasserstein Distance$(\\\\downarrow)$| Waterbirds | CelebA | CheXpert | MultiNLI | SNLI|\\n | - | - | - | - | -| -|\\n | LD (ours) | 0.053 $\\\\pm$ 0.039| **0.039** $\\\\pm$ 0.031| **0.028** $\\\\pm$ 0.008| **0.049** $\\\\pm$ 0.019| **0.065** $\\\\pm$ 0.023|\\n | GP | **0.050** $\\\\pm$ 0.029| 0.043 $\\\\pm$ 0.026| 0.050 $\\\\pm$ 0.07| 0.050 $\\\\pm$ 0.019| 0.093 $\\\\pm$ 0.032|\\n\\n |Cross Entropy$(\\\\downarrow)$| Waterbirds | CelebA | CheXpert | MultiNLI | SNLI|\\n | - | - | - | - | -| -|\\n | LD (ours) | 1.22 $\\\\pm$ 0.15| **1.19** $\\\\pm$ 0.16| 2.47 $\\\\pm$ 0.02| **1.66** $\\\\pm$ 0.25| **2.26** $\\\\pm$ 0.36|\\n | GP | **1.20** $\\\\pm$ 0.18| 1.22 $\\\\pm$ 0.22| 2.47 $\\\\pm$ 0.02| 1.89 $\\\\pm$ 0.24| 3.46 $\\\\pm$ 0.76|\\n\\n Our results show that LD slightly outperforms GP, while its time complexity is significantly smaller than that of GP. If we have $n$ training samples, $k$ subgroups and $d$ dimensional embeddings, time complexity of LD is $O(k^2d)$ and time complexity of GP is $O(nd^2)$. For Resnet architecture and CelebA dataset, $n=19000,k=4,d=2048$. \\n\\n1. **Directly learning a model to predict the errors of the original model.** Since our problem setting focuses on unsupervised accuracy estimation, we have no access to (dataset, error) pairs other than the training set and training error, so it is infeasible to directly train a model to predict the error of the original model. A possible method to get these pairs may be to split the original training set, retrain several models and get their errors on the reserved part. But this retraining approach requires access to the training details and architecture of the original model, making it less applicable to real world settings.\\n\\n> **W3**: The authors should also show the predicted group proportions versus the actual proportions.\\n\\n**R3**: We have revised the paper and include the Wasserstein distance and cross entropy between the predicted and actual subgroup proportions in the appendix, which provides a quantitative measure of the dissimilarity between them. For clarity, here we randomly select three pairs of predicted and actual proportions from the Waterbirds dataset to illustrate the comparison.\\n\\n| predicted proportions| actual proportions|\\n|-|-|\\n|0.27,0.27,0.23,0.23|0.25,0.25,0.25,0.25|\\n|0.32,0.31,0.18,0.19|0.30,0.30,0.20,0.20|\\n|0.17,0.19,0.32,0.32|0.15,0.15,0.35,0.35|\"}", "{\"summary\": \"This paper proposes SATE, a method for predicting test performance under subpopulation shift scenarios. The approach assumes access to test data but not to test set labels. SATE follows a two-stage process: in the first step, it calculates subgroup ratios by linearly expressing the average embedding of test data using the average embeddings of each subgroup in a subgroup-labeled training set. 
In the second step, it estimates subgroup performance using a subgroup-labeled augmented set (or validation set). The final predicted test accuracy is obtained by calculating a weighted sum of subgroup performance from step 2, using the subgroup ratios from step 1. The effectiveness of SATE is demonstrated on both image and language tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is clearly written and presents experiments across diverse benchmarks.\", \"weaknesses\": \"[W1] The rationale for predicting average accuracy based on the test distribution rather than evaluating using worst group accuracy is not clear. Is there a realistic scenario that motivates this? From a group robustness perspective, an ideal model should perform well across all subgroups. For this reason, group robustness studies typically evaluate models using worst-group accuracy or the average performance across subgroups (unbiased accuracy). However, this paper appears to prioritize sample average accuracy, aligned with the test environment distribution, rather than worst-group or unbiased accuracy. The reasoning behind this choice is not well-justified.\\n\\n[W2] Along with W1, using the labeled set $S'_i$ to measure subgroup performance seems more like conducting a test evaluation than performance prediction. Does assuming access $S'_i$- a labeled set considered unseen from the model\\u2019s perspective- appear to be an overly strong assumption?\\n\\n[W3] For the experiments in Table 1, is the training dataset also composed of corrupted data?\\n\\n[W4] This method seems to handle only seen subgroups. How does it address unseen subgroups? If the goal is performance prediction, it should ideally be able to handle unseen subgroups as well.\\n\\n[W5] Obtaining subgroup labels is often costly, and thus many studies have long focused on learning methods that do not require subgroup labels. Requiring a labeled training set for performance prediction appears to set up an unrealistic scenario. This is especially relevant given that even the DFR method used in this paper does not require training set labels during learning. \\n\\n[W6] How would the approach perform if evaluated using a retrieval-based method? A straightforward solution, for example, could be KNN with $S'_i$.\\n\\n[W7] Some terms appear in formulas without clear definitions (e.g., $P_{T-emb}$, $P_{g-emb}$, $H_S$)\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces SATE (Subpopulation-Aware Two-stage Estimator), a novel method for predicting model performance under subpopulation shift scenarios, where the distribution of subgroups differs between the training and test datasets. SATE employs a two-stage approach: first, it estimates the subgroup proportions in the test set by representing test embeddings as a linear combination of training subgroup embeddings; second, it predicts the accuracy for each subgroup using augmented training data to produce an overall performance estimate.\\n\\nHowever, several concerns have been raised regarding the significance and practicality of the method:\", \"triviality_of_the_problem\": \"Estimating model performance under subpopulation shift is argued to be a relatively straightforward task, particularly if subgroup labels are available in the training domain, as assumed by the authors. 
This setup simplifies the problem significantly and makes it much more restrictive, thereby limiting the method's applicability.\", \"limited_theoretical_scope\": \"The method is theoretically grounded only under the assumption that subpopulation shift is the sole type of distribution shift (Assumption 1). It does not account for other factors, such as variations in sample difficulty within subpopulations, raising doubts about its utility in more complex real-world scenarios.\", \"baseline_comparisons\": \"While SATE outperforms other performance prediction methods (Figure 4), this result is not unexpected, as the baselines are not tailored for subpopulation shift and do not leverage training set attributes. The paper misses the opportunity to compare against more intuitive baselines, such as learning per-group clusters on the training set, training a debiased group predictor, or directly modeling the prediction errors of the original model.\\n\\nThese limitations suggest that while SATE offers an interesting approach, its broader significance and utility remain unconvincing, particularly in scenarios beyond the constrained setup considered in the paper.\", \"additional_comments_on_reviewer_discussion\": \"The authors acknowledge that their method is relatively simple, despite being the first to introduce the concept of group proportion estimation within the context of performance prediction. The improved performance prediction achieved by leveraging group-wise accuracy estimation is fairly straightforward. Additionally, the authors recognize that the problem setup is more restrictive, as it requires group annotations, which limits the method's broader applicability.\\n\\nFollowing the rebuttal, the reviewers did not express significant support for the paper.\"}", "{\"title\": \"Response to Reviewer 9P9W\", \"comment\": \"Dear Reviewer 9P9W,\\n\\nWe thank you for your valuable feedback and constructive suggestions. Below, we address each of your comments in detail:\\n\\n> **W1**: Requires knowledge of group annotations.\\n\\n**R1**: We acknowledge this limitation. **We have updated the paper and include this limitation in the Limitations Section.** However, in practical scenarios, group annotations can often be feasible in certain contexts, such as when datasets are curated with domain knowledge. Additionally, group labels can be a feature artificially selected from $X$, which users may identify as being responsible for subpopulation shifts. Thus, we believe the need for this additional knowledge is reasonable in subpopulation shift contexts.\\n\\n> **W2**: Scalability: May struggle when number of subgroups is large.\\n\\n**R2**: We agree that when the number of subgroups is large and the number of samples per subgroup is small, the law of large numbers (LLN) assumption may break, potentially affecting our method\\u2019s performance. However, commonly used subpopulation shift benchmarks do not exhibit a large number of subgroups. Experiments have shown our method\\u2019s effectiveness on the CheXpert dataset which has up to 12 subgroups.\\n \\n> **W3**: Linear decomposition assumption may break.\\n\\n**R3**: In the paper, we make two key assumptions to ensure the validity of the linear decomposition: Assumption 1 (only subpopulation shift occurs) and Assumption 2 (the embedding matrix is column full rank). 
In our experiments, **these assumptions hold well, as evidenced by the fact that the test embeddings can be linearly expressed with very high $R^2$ values (>0.99) among all datasets**.\\n\\nWe acknowledge the possibility of unknown or extreme cases where these assumptions may fail. To address this, we have included a discussion of such potential limitations in the revised paper's Limitations section.\\n\\n> **W4**: No clear discussion of failure modes or performance under noisy/incomplete attribute annotations.\\n\\n**R4**: We recognize that noisy or incomplete attribute annotations may hurt our method's performance and will mention this limitation in the Limitations Section in the revised paper. Addressing these scenarios is beyond the primary focus of this work.\\n\\n> **Q1**: How sensitive is the method to violations of the linear decomposition assumption for test set embeddings?\\n\\n**RQ1**: Please refer to our response in R3. \\n\\n> **Q2**: What are the specific conditions required for the theoretical guarantees to hold?\\n\\n**RQ2**: As mentioned in R3, the theoretical guarantees rely on two key conditions: 1.Other kinds of distribution shift between the train and test splits should be mild. 2. The embedding dimensionality should be significantly larger than the number of subgroups to ensure the embedding matrix is column full rank. Commonly used subpopulation shift benchmarks, such as Waterbirds and CelebA, satisfy these conditions, providing practical examples where our method performs effectively.\\n\\n> **Q3**: What is the memory requirement for storing subgroup embeddings?\\n\\n**RQ3**: The memory requirement is minimal. Our method only requires **storing the average embedding for each subgroup**. For $k$ subgroups with $d$-dimensional embeddings, the storage cost is $O(kd)$.\\n\\n> **Q4**: How robust is the linear equation-solving step when subgroup embeddings are nearly collinear? What happens when some subgroups have very few training samples?\\n\\n**RQ4**: To evaluate the robustness of the linear equation-solving step against collinearity in subgroup embeddings, we compute the Variance Inflation Factor (VIF) for each embedding: $\\\\text{VIF}_i = \\\\frac{1}{1 - R_i^2}$, where $R_i^2$ is the coefficient of determination when regressing embedding $i$ on all other embeddings. A higher VIF indicates stronger collinearity.\\n\\nThe table below summarizes the average VIF values for subgroup embeddings across datasets. Based on these results, we highlight the following observations:\\n\\n1. Several datasets in our experiments exhibit moderate collinearity among subgroup embeddings (VIF > 10). Despite this, our linear decomposition approach demonstrates robustness to moderate levels of collinearity.\\n2. None of the datasets show very strong collinearity (VIF > 100), alleviating concerns about perfect collinearity in practical scenarios.\\n3. Higher VIF values are associated with increased errors in estimating subgroup proportions. For instance, the Wasserstein distance between predicted and actual subgroup proportions is higher for the Waterbirds dataset compared to CelebA.\\n\\n| |Waterbirds_vit|CelebA_vit|MultiNLI_bert|SNLI_bert|\\n|-|-|-|-|-|\\n|Average VIF|29.9|6.0|22.0|30.5|\"}", "{\"comment\": \"Thank you for your response. I will keep my score.\"}", "{\"title\": \"Response to Reviewer qi9w (part 1/2)\", \"comment\": \"Dear Reviewer qi9w,\\n\\nWe thank you for your valuable feedback and constructive suggestions. 
Below, we address each of your comments in detail:\\n\\n> **W1**: The rational for predicting average accuracy is not clear.\\n\\n**R1**: We address this in the \\u201cSubpopulation\\u201d paragraph of Section 2 but would like to further emphasize the rationale here:\\n\\n- Known Test Distribution: Unlike group robustness studies that focus on worst-group accuracy (WGA) to ensure uniform performance across unknown test distributions, our context is performance prediction where the test distribution is known. In such cases, sample average accuracy aligned with the test environment distribution is a more straightforward metric.\\n- Broader Implications of Subpopulation Shift: While unfairness (e.g., low worst-group performance) is a critical concern, subpopulation shifts can also cause significant degradation in overall performance. Addressing this issue is equally important, motivating our focus on predicting sample average accuracy.\\n\\n> **W2**: Does assuming access $S_i'$ appear to be an overly strong assumption?\\n\\n**R2**:$S_i'$ is the augmented training data from subgroup $i$ if validation data is not available. **This does not need additional information other than training data and an augmenting method.** We do not view this as an assumption, as no existing performance prediction method can operate without access to training data.\\n\\n> **W3**: Is the training dataset also composed of corrupted data?\\n\\n**R3**: No, the training dataset is not corrupted. We\\u2019ve mentioned \\u201cadd perturbations to test sets\\u201d in the \\u201cReal-World Shift\\u201d paragraph of Section 5.3. Corrupting both training and test sets is against the goal of evaluating performance prediction methods under covariate shifts.\\n\\n> **W4**: How does this method address unseen subgroups?\\n\\n**R4**: Here we develop a lightweight method to **detect unseen subgroups** after the first step of SATE. We use Mean Square Error (MSE) of the linear decomposition as the indicator of the existence of unseen subgroups. Larger MSE indicates higher probability that test set contains unseen subgroup.\\n\\nWe conducted experiments on the NICO++ [1], a commonly used domain generalization benchmark, to evaluate our detection method. The experimental setup and findings are as follows:\\n\\n- Benchmark Setup: We utilized the NICO++ dataset, focusing on $y \\\\in \\\\{0, 1, 2, 3, 4, 5\\\\}$ and $a \\\\in \\\\{0, 1, 2, 3, 4, 5\\\\}$, resulting in 36 subgroups in total. The training data followed the original split, where subgroup (5,4) was absent. While all 36 subgroups were present in the original test split.\\n- Test Sets: To simulate various conditions, we created 50 test sets, each comprising $k$ randomly selected subgroups from the original test set.\\n- Evaluation: We evaluate the effectiveness of detection by the Area Under the Curve (AUC) between the existence of unseen subgroup and the MSE of linear decomposition.\\n\\n|k|5|10|20|\\n|-|-|-|-|\\n|AUC| 0.950| 0.895| 0.869|\\n\\nOur results demonstrate that while using linear decomposition to estimate subgroup proportions, MSE is a reliable metric for detecting unseen domains. It consistently performs well when the number of subgroups in the test set becomes large ($k=10, 20$), further extending the applicability of our method to domain generalization scenarios.\\n\\n[1] https://arxiv.org/abs/2204.08040\"}", "{\"comment\": \"Thank you very much for reconsidering and increasing your score. We really appreciate the discussion and your valuable feedback. 
We would be happy to engage in further discussion if needed.\"}", "{\"comment\": \"Thank you for the response. The new experiments have addressed some of my concerns, and I have raised my score to a 5 as a result. However, I believe that W1.1-W1.3 are still fundamental weaknesses of the paper that limit its significance.\"}", "{\"title\": \"Response to Reviewer qi9w (part 2/2)\", \"comment\": \"> **W5**: Requiring a labeled training set for performance prediction appears to set up an unrealistic scenario.\\n\\n**R5**: The requirement for access to training set labels ($y$) is **unavoidable for most performance prediction methods**, such as ATC and NI. Some model-agreement-based methods even need to retrain the model. Similarly, DoC indirectly relies on training set labels as it assumes access to the model\\u2019s accuracy on the training set. This reliance on training labels is thus a common requirement across existing approaches.\\n\\n> **W6**: How would the approach perform if evaluated using a retrieval-based method?\\n\\n**R6**: Based on our understanding, \\\"KNN with $S_i'$\\\" refers to a approach for estimating test set accuracy. This method identifies the k nearest training subgroups $S_i'$ of the test set $T$ in the embedding space and uses the average accuracy of these k neighbors as the estimated accuracy.\\n\\nHowever, this approach may not perform well, as illustrated by the following example: Suppose there are 4 subgroups in total. Test set $T_1$ consists of 20% of subgroup 1 and 80% of subgroup 2, while test set $T_2$ consists of 50% of subgroup 1 and 50% of subgroup 2. For both $T_1$ and $T_2$, the two nearest neighbors will be $S_1'$ and $S_2'$. Consequently, they would produce identical accuracy estimates, despite having different subgroup distributions. This outcome is clearly not reasonable.\\n\\nIf this interpretation does not address your concern, we would appreciate further clarification to ensure an accurate response.\\n\\n> **W7**: Some terms appear in formulas without clear definitions ($P_\\\\text{T-emb}$, $P_\\\\text{g-emb}$, $H_s$)\\n\\n**R7**: $P_\\\\text{T-emb}$ is **already defined in the \\u201cEstimating Subgroup Proportion\\u201d paragraph of Section 4.2** as the probability distribution of $h_T$, where $h_T$ represents the embedding of a sample from the test set $T$. Similarly, $P_\\\\text{g-emb}$ refers to the embedding distribution for a specific subgroup $g$. \\n\\nAs for $H_s$, it is a temporary variable **defined in Line 1 of Algorithm 1**, it is a $d \\\\times (c \\\\cdot m)$ matrix, where each column corresponds to the average embedding of a specific subgroup. **We have updated the paper and mentioned where $H_s$ is defined when referenced later.**\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
BWU6Xl1nD3
UniG: Modelling Unitary 3D Gaussians for View-consistent 3D Reconstruction
[ "Jiamin WU", "Kenkun Liu", "Yukai Shi", "Xiaoke Jiang", "Yuan Yao", "Lei Zhang" ]
In this work, we present UniG, a view-consistent 3D reconstruction and novel view synthesis model that generates a high-fidelity representation of 3D Gaussians from sparse images. Existing 3D Gaussians-based methods usually regress Gaussians per-pixel of each view, create 3D Gaussians per view separately, and merge them through point concatenation. Such a view-independent reconstruction approach often results in a view inconsistency issue, where the predicted positions of the same 3D point from different views may have discrepancies. To address this problem, we develop a DETR (DEtection TRansformer)-like framework, which treats 3D Gaussians as decoder queries and updates their parameters layer by layer by performing multi-view cross-attention (MVDFA) over multiple input images. In this way, multiple views naturally contribute to modeling a unitary representation of 3D Gaussians, thereby making 3D reconstruction more view-consistent. Moreover, as the number of 3D Gaussians used as decoder queries is irrespective of the number of input views, our method allows an arbitrary number of input images without causing memory explosion. Extensive experiments validate the advantages of our approach, showcasing superior performance over existing methods quantitatively (improving PSNR by 4.2 dB when trained on Objaverse and tested on the GSO benchmark) and qualitatively.
[ "3D reconstruction", "Gaussian Splatting", "Novel view synthesis", "deformbale Transformer" ]
Reject
https://openreview.net/pdf?id=BWU6Xl1nD3
https://openreview.net/forum?id=BWU6Xl1nD3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ziLGOC1Q1n", "yBg3xbcisj", "ukiyaYF7b0", "u0pO5hReQN", "th94VAfFrf", "tglHBV7UHI", "sxai90oDKk", "s1ACfIBKAi", "qsCsFka0bt", "qQFlSODMUG", "oc3SPsrMHZ", "nGs0il74Fq", "mF1cfi5cqy", "kRdH886PHJ", "guk5oKpru4", "g7jTvVUkDa", "fylumaKfot", "fyD19BvItl", "eiA3Aa2uNN", "bcoz4vlDx6", "Pzp42HWyq1", "OxC7M1utuc", "OIHLoRgpMA", "OBv1bdAraz", "NLJuE1wZqc", "MQuWpkpnOM", "KQ6lRUGsqi", "J2NzlVclon", "HhnFpL9DK2", "H26zbSECQq", "D0V0r7gUG9", "CbcNnWpXM4", "AhL3Gwmuv2", "8FeIvEYDKY", "8F1g8faWpV", "7rmJZWeC7v", "75wh8Hjr8p", "48Y1HgdNU8", "2hwWb4uDlk", "0CalVyJjPG" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733209747895, 1732801926233, 1733144464470, 1732269107403, 1732545561305, 1732869427310, 1732118273823, 1732653142331, 1732550137207, 1732118285294, 1732118327187, 1732118408730, 1732118436154, 1732942663261, 1730674391171, 1732502683965, 1730626068881, 1732550158193, 1733129565701, 1730588989425, 1733144488362, 1732118311583, 1733129542162, 1730719999714, 1733132604131, 1733209918554, 1732550068603, 1733129588018, 1732700598547, 1734420894951, 1733294358165, 1732549201694, 1737523529934, 1732492537557, 1733129606867, 1732700391177, 1733209702479, 1732118414780, 1733134038990, 1732801943711 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_UhUJ" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_9662" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_t736" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_9662" ], [ "ICLR.cc/2025/Conference/Submission2760/Area_Chair_c4RT" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_UhUJ" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_t736" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_4FKX" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_t736" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Area_Chair_c4RT" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_t736" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ], [ "ICLR.cc/2025/Conference/Submission2760/Reviewer_UhUJ" ], [ "ICLR.cc/2025/Conference/Submission2760/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 9662,\\n\\nThank you for dedicating your time and providing feedback on our work. We have presented new experiments and more explanations to address your concerns or questions. As it approaches the end of the discussion period, we really want to know do you still have other concerns or questions so that we can put efforts in the last day to solve them. We do not wish for you to have negative attitude to our paper due to any misunderstanding. Your reply is very important for us, and we are looking forward to it.\"}", "{\"comment\": \"Similar to LGM, both pixelSplat [1] and MVSplat [2] follow a workflow that regress Gaussians from each view within the respective camera spaces and subsequently merge them in the world space. In pixelSplat, the integration of cross-view-aware features is through an epipolar Transformer, and it still suffers from inaccurate depth estimation. MVSplat adopts a design that incorporates a cost volume storing cross-view feature similarities for all possible depth and makes a more accurate depth prediction. However, they assign each pixel with a 3D Gaussian and thereby generates a planar representation rather than the object itself. In addition, MVSplat tends to obscure object details due to the occlusion by 3D Gaussians from other viewpoints, resulting in suboptimal outcomes. To address this issue, we mask the 3D Gaussians on background pixels to help it focus on rendering 3D Gaussians contributing to the object itself, noted as 'MVSplat (masked)' in the results.\\n\\nWe present the comparison to pixelSplat [1] and MVSplat [2] in Appendix A.5 and the quantitative results on the GSO-random dataset is shown in Table 12. From the table, we can see that their results is significantly worse than ours. It is probably due to the fact that they have only been trained on the scene reconstruction dataset RealEstate10 [3], which only contains small camera difference among views. The cameras of object reconstruction dataset GSO-random has larger variations, so we observe more severe misaligned 3D Gaussians (view inconsistency) from different input views for MVSplat, as shown in the visualized results in Figure 21 (we also add the corresponding videos and ply files in the MVSplat\\\\_results folder of supplementary materials). And we find that MVSplat cannot correctly predict the back side of the object. It is not a big issue for scene reconstruction as their camera only moves a little, but would lead to incomplete reconstruction of objects. In the figure, we present the centers of 3D Gaussians generated from different views with different colors and the novel views are rendered from the 3D Gaussians from all views. 
As for pixelSplat, it almost cannot output reasonable results when use GSO-random dataset for testing, so we have not presented their visualized results. We provide the content of Table 12 as following:\\n\\n**Table: Comparison with MVSplat and pixelSplat on the GSO-random dataset**\\n\\n| Method | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 | Inference time \\u2193 | Rendering time \\u2193 |\\n|---------------------------|--------|--------|---------|----------------|----------------|\\n| MVSplat | 12.92 | 0.80 | 0.30 | 0.112 | 0.0090 |\\n| MVSplat (masked) | 16.52 | 0.80 | 0.19 | 0.112 | 0.0045 |\\n| pixelSplat (2 views) | 12.00 | 0.80 | 0.28 | 1.088 | 0.0045 |\\n| pixelSplat (2 views masked)| 12.05 | 0.79 | 0.27 | 1.088 | 0.0023 |\\n| **Ours** | **26.30** | **0.93** | **0.08** | **0.694** | **0.0019** |\\n\\n[1] David Charatan, et al., pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction. CVPR2024.\\n\\n[2] Chen, MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images. ECCV2025\\n\\n[3] Zhou, et al., Stereo magnification: Learning view synthesis using multiplane images. ACM Trans. Graph. (Proc.\\nSIGGRAPH), 2018\"}", "{\"comment\": \"Thank you for your feedback and acknowledgment of our efforts. We are glad to hear that your major concerns have been mostly addressed.\\n\\nFor the number of 3D Gaussians to use in our method, we have plotted a figure in Figure 18 of the main paper. In the figure, we presented the ablation study results on the impact of the number of 3D Gaussians. With the increase of 3D Gaussians' number, the PSNR on the test set also increase almost linearly until it reach \\nthe number of around 19600. Thus, we chose this number to conduct the other experiments to achieve a balance between performance and computational efficiency. If we keep raising the 3D Gaussians' number, our method's PSNR on the test set continues to grow, albeit not as rapidly. Thus, we can generally conclude that our method can also benefit from the scale-up of the number of 3D Gaussians.\\n\\nFor the experiments on the pixelSplat and MVSplat, we regret what we have done is not as you expect. We will remove this part or add the results of them after training on the Objaverse dataset, following your suggestion. We now have successfully run the training code of them on the Objaverse dataset and everything looks correct. We will keep checking the correctness of the codes and will present the quantitative results of them if time permits before the end of the discussion. In addition, we already added the visualized results of them on RealEstate10K on the supplimentary materials (i.e. mv\\\\_splat.mp4 and pixelSplat\\\\_scene\\\\_result.jpg), which also indicated their problems (including view inconsistency).\"}", "{\"title\": \"Add results of TrioSR on GSO dataset for single view situation\", \"comment\": \"# Results of TrioSR on GSO dataset for single view situation\\n\\nWe test TripoSR and Triplane-Gaussian on the single image reconstruction setting with the checkpoint they provide on github. As shown in the following table, our model surpass the previous methods on both the performance and the inference speed. In terms of rendering speed, both Triplane-Gaussian and our model employ Gaussian Splatting, known for its fast rendering speed. Conversely, TripoSR utilizes NeRF, a method slower in comparison to Gaussian Splatting.\\n\\n**Table 1:** Quantitative results trained on Objaverse LVIS and tested on GSO. 3D sup. 
means need 3D supervision.\\n| Method | PSNR $\\\\uparrow$ | SSIM $\\\\uparrow$ | LPIPS $\\\\downarrow$ | 3D sup. | Inference time | Rendering time |\\n|--------------------------------------|-----------------|-----------------|--------------------|------------|----------------|----------------|\\n| Triplane-Gaussian | 18.61 | 0.853 | 0.159 |\\u2714 | 1.906 | **0.0025** |\\n| TripoSR | 20.00 | 0.872 | 0.149 |\\u2718 | 3.291 | 22.7312 |\\n| Ours | **23.45** | **0.897** | **0.093** | \\u2718 | **0.476** | **0.0025** |\\n\\nWe have updated this table as Table 5 in the revised version of the PDF.\"}", "{\"comment\": \"Thank you for the answers. Should I expect results from MVSplat or pixelSplat within the rebuttal period?\"}", "{\"comment\": \"I have read all the rebuttals and am very grateful for your responses and the additional experimental results.\\n\\nI can understand the differences between this work and TriplaneGaussian and Instant3D. However, like Reviewer t736, as I mentioned in Weakness 2, there needs to be more elaboration and explanation regarding the severity of the inconsistency caused by per-pixel Gaussian prediction; it is not currently demonstrated that this issue is widespread and severe.\\n\\nIn addition, I still do not understand how the consistency between the coarse stage's GS and the refinement stage's GS can be guaranteed. The authors only said that the same structure was used in the coarse stage, but how can it be guaranteed that the results of the single-view prediction in the coarse stage will be consistent with the multi-view images input in the refinement stage? Or perhaps I have misunderstood something.\"}", "{\"title\": \"We update the revised PDF and add supplimentary\", \"comment\": \"Thank you very much for your thoughtful comments on our work. We greatly appreciate your feedback. We will address each of your concerns individually as outlined below. For those that require additional experiments, we will ensure to upload the results as soon as possible. Should you have any further questions or concerns, please do not hesitate to reach out to us. We also provide the revised PDF and some videos in the supplimentary.\"}", "{\"comment\": \"I appreciate the authors' efforts to address the concerns raised and improve the paper's positioning.\\n\\nAfter reviewing the current draft, many sections now look fine. However, I still find the description of GS-LRM and GRM in the related work section problematic, particularly regarding the claim of a \\\"theoretical problem.\\\" At a conference like ICLR, with a significant audience from the machine learning community, a theoretical claim is a very serious statement and typically requires a formal proof. Specifically, a proof that per-view methods theoretically lead to a higher lower bound of geometry error compared to the proposed method is needed if a theoretical claim must be made, which I don't feel you can really provide here. Currently, this discussion remains at an empirical level, not a theoretical one.\\n\\nRelevant to this, I\\u2019m not sure if the authors fully understand how GS-LRM and similar methods work. The paper and rebuttal repeatedly state that those methods predict depth independently, whereas the proposed method uses cross-view information. However, GS-LRM, for instance, employs full attention across all views during prediction, inherently incorporating cross-view information as well, which cannot be trivially seen as an independent prediction. 
This design also could explain why GS-LRM does not exhibit the view inconsistency. In fact, several other cited methods in the paper also adopt similar designs, so I don't feel the claimed independent prediction and cross-view information are really unique to the paper.\\n\\nOverall, I have the remaining concerns:\\n\\n1. Paper positioning about pre-view vs unitary GS prediction as mentioned above. \\n\\n2. Result Quality: While I understand that absolute fair comparisons are difficult due to the lack of code release, I do not find the results in the paper to surpass prior state-of-the-art methods like GS-LRM. I have compared many examples from the submission with results on the GS-LRM/GRM websites. Since all methods provide GSO results, it is easy to find the same or similar objects. I can easily see results from GS-LRM have sharper details than yours. I note your previous rely says \\\"However, only from their presented good examples to totally negate our analysis is unfair and not reasonable as it also cannot be concluded that they do not have the problem.\\\" But in fact, I found more than 30 GSO results with the interactive viewer and downloadable plys from GS-LRM, which is even more than the total number of results shown in your paper and also overlaps with one or two of your examples. So I don't see potential concerns about cherry-picking here. On the other hand, the paper has few video/3d demonstrations, almost only showing a single result video that combines multiple results at a low resolution. If the authors can provide separate videos or viewable plys, it will be much easier for me and others to view and justify the quality.\\n\\nHowever, while I still have concerns about the quality, I can see the paper does show better results, especially under the fixed-view setting, than other baselines like LGM and MVGamba that are also recently published works. Therefore I won't see the quality issue as a strong blocker here. Since the paper also contributes a new reconstruction pipeline that is different enough from all previous ones, I overall feel the paper is on the bar of acceptance. So I would not object if other reviewers advocate for its acceptance. However, I am not ready to raise my score, as the writing/positioning issues have not been fully addressed. And I do not expect the authors to resolve my concerns about the quality since GS-LRM\\u2019s results are obviously superior qualitatively...\"}", "{\"comment\": \"Dear Reviewer 4FKX,\\n\\nWe want to express our sincere gratitude for your insightful suggestions which are instrumental in enhancing the quality of our work. We hope that our proposed modifications would have addressed your concerns about the clarity of our presentation. We would really appreciate it if you could let us know if there are any further questions or aspects of the paper that require additional clarification.\\nThank you once again for your time and consideration.\"}", "{\"comment\": \"Thank you very much for your thoughtful comments on our work. We greatly appreciate your feedback. We will address each of your concerns individually as outlined below. For those that require additional experiments, we will ensure to upload the results as soon as possible. Should you have any further questions or concerns, please do not hesitate to reach out to us.\\n\\n# For Weaknesses1: \\nThanks for pointing this out. 
The blurry are mainly caused by the automatic zoom-in of the PDF editing software as our model is trained with the resolution of 128, so as the rendered output image. Now, we are working on the result with the resolution of 512 to solve this problem.\\n\\n# For Weaknesses2: \\nApart from PSNR, other metrics SSIM $\\\\uparrow$ (increased by 0.3) and LPIPS $\\\\downarrow$ (reduced by 0.1739) are also significantly improved. We provide the visualization on Splatter Image and our method in Figure 15 in the revised version. Although the improvement for PSNR is not significant, the visualization of our model is much better than Splatter Image. (The visualizations for other methods are in Figure 10).\\nMoreover, previous novel view synthesis papers like MVGamba[1], Splatter Image[2], Instantmesh[3], pixelsplat[4] also provide PSNR $\\\\uparrow$ improvement around 0.5dB, so it is not a marginal increase.\\n\\n# For Questions1:\\nWe give more view result in Appendix Figure 17 in the revised PDF.\\nOur model is positioned on the 'sparse view' setting, which indicates the number of views less then 10, so we only reports the performance of views from 2 to 8. With the increase of input views, information from similar views becomes redundant, so the gain for our model has become plateaued while other methods suffer from performance drop as they cannot handle too many input views due to the view inconsistent problem. As we keep increasing the number of input views larger than 8, our method can still benefit from more input views (as shown in Appendix Figure 17) while others meet the CUDA-out-of-memory problem.\\n\\n# For Question2:\\nThanks a lot for pointing this out. The setting of random input view is obvious a more challenging task than the setting of fixed input view, thus our method also inevitably suffers from a performance drop but still performs better than other state-of-the-art methods. As for Splatter Image [2], it also meets a significant performance drop when random input views are used as its SSIM $\\\\uparrow$ decreased from 0.9151 to 0.8932 and LPIPS $\\\\downarrow$ increased from 0.1517 to 0.2575 despite its PSNR $\\\\uparrow$ has a slight increase. We visualize the results of the two settings to show the difference in Figure 14 in the revised PDF. Therefore, it does not mean Splatter Image [2] demonstrates superior generalization regarding input view pose distribution, but it appears that the PSNR $\\\\uparrow$ of Splatter Image [2] does not increase when the setting is switched from random input view to fixed input view, which might be caused by some its inherent problems. We add the analysis in our Appendix in the revised version.\\n\\n[1] Xuanyu Yi, et al., MVGamba: Unify 3D Content Generation as State Space Sequence Modeling. arXiv2024.\\n[2] Stanislaw Szymanowicz, et al., Splatter Image: Ultra-Fast Single-View 3D Reconstruction. CVPR2024.\\n[3] Jiale Xu, et al., InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Mod-\\nels. arXiv2404.\\n[4] David Charatan, et al., pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction. CVPR2024.\\n[5] Jiaxiang Tang, et al., LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV2024.\"}", "{\"comment\": \"# For Questions1:\\nThank you for pointing it out. 
Actually, the feature extractor for both stages will output feature maps first, then for the coarse stage, a convolution layer (omitted in the figure) is used to regress pixel-aligned 3D Gaussians as coarse initialization. We have modified the figure to make it clear in the revised PDF.\\n\\n# For Questions2:\\nThe function of coarse initialization is mainly to avoid out-of-boundary projected points, thus minor variations will not make a big difference for the refinement stage. In the coarse stage, no matter which view is selected, the 3D Gaussians are first reconstructed in its camera space, and then transformed to the world space (the camera space of the first view) using camera pose parameters. Therefore, the concept of \\\"front view\\\" does not exist in this context. No matter which view is selected to be the input of the coarse stage, it will not make a big difference. We also add the ablation study on the number of images used during the coarse stage. As shown in Table 9 of the revised PDF, the number of images used during the coarse stage does not influence the final result. We also show the content of Table 9 here.\\n\\n**Table: Ablation study results of different views and different numbers of views for the coarse stage (with 4 views in the refinement stage)**\\n\\n| Number of views in coarse stage | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---------------------------------|--------|--------|---------|\\n| 1 | 30.2312| 0.9608 | 0.0413 |\\n| 2 | 30.4245| 0.9614 | 0.0422 |\\n| 3 | 30.3442| 0.9618 | 0.0419 |\\n| 4 | 30.4521| 0.9620 | 0.0412 |\\n\\n# For Questions3:\\nThe input images (1 or 2 views) for the coarse stage are sampled from the all multi-view input images, and the 3D Gaussians are first reconstructed in the camera space and then transformed to the world space (the camera space of the first input view). The 3D Gaussians in the same world space will be the initialization of the refinement stage, which are already coarsely aligned with the other input views. Their final positions and other parameters will be iteratively refined to align all input views in the refinement stage.\\n\\n[1] Jiaxiang Tang, et al., LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV2024.\\n\\n[2] Charles R Qi, et al., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. CVPR2017a.\\n\\n[3] Stanislaw Szymanowicz, et al., Splatter Image: Ultra-Fast Single-View 3D Reconstruction. CVPR2024.\"}", "{\"comment\": \"Thank you very much for your thoughtful comments on our work. We greatly appreciate your feedback. We will address each of your concerns individually as outlined below. For those that require additional experiments, we will ensure to upload the results as soon as possible. Should you have any further questions or concerns, please do not hesitate to reach out to us.\\n\\n# For Weaknesses1:\\nThanks a lot for your so detailed writting comment.\\nSorry for causing the bad reading experience, we explained the confusing sentences in the following and revised our paper to reduce unclear expressions.\\n\\n## L040 \\\"view inconsistency\\\":\\nView inconsistency means that 3D reconstructions from various input views are misaligned because of the inaccurate depth prediction from single view separately, which can be clearly illustrated by the attached video (inconsistentpc.mp4). Another example is that, in Figure 1, the handle of the pot generated from input image 1 appears at a different position compared to the handle from input image 2. 
This difference arises also due to inaccurate depth predictions of each view, leading to spatial variations and resulting in multiple handles being rendered in the views. We have added this explanation in our paper.\\n\\n## L044 view-specific camera space:\\nIn the corresponding sentence, we want to point out that, for methods like Splatter Image [1] and LGM [3], the 3D Gaussians for each view are first reconstructed in the camera space of the corresponding input view, then they are merged in the world space after the camera-to-world space transformation. Therefore, in this context, \\\"view-specific\\\" is to stress that they predict 3D Gaussians in a view-independent manner. We have modified this statement in the revised version. \\n\\n## L045 \\\"These Gaussians are then converted to world space\\\":\\nFor methods Splatter Image [1] and LGM [2], the 3D Gaussians are first reconstructed in the camera space of each input view, then they are merged in the world space after the camera-to-world space transformation. Therefore, the step of transforming 3D Gaussians from the camera space to the world space cannot be ignored, and it is the main cause of the view inconsistent problem.\\n\\n## L049 The authors should make sure that the general concepts and ideas are understandable just from reading the text they provide:\\nThank you for your suggestions, we will modify these in the revised version. DETR-like models link object bounding box as queries and treat image tokens as keys and values in Transformer, which have made a great success. We borrow the similar philosophy that links each 3D Gaussian with queries and also treat image tokens as keys and values, then refine 3D Gaussians iteratively. % We will use more widely accepted paper and explain DETR briefly.\\n\\n## L070 camera modulation:\\nSorry for the bad reading experience again, we have modified the paper following your suggestions. To be clear, camera modulation means we linearly transform image features of each view, say $F' = WF + b$, where $F$ is the original image feature, $W$ and $b$ are weights and bias regressed by an MLP with camera parameters as input. Such operation gives each view its corresponding camera pose information. We have added a brief explanation in the introduction.\\n\\n## L074 \\\"multi-view distinctions:\\nThe \\\"multi-view distinction\\\" means that we use camera modulation to distinct queries before projecting them to each view to retrieve image features, so as that queries can be aware of different camera poses.\\n\\n## L140 \\\"In total\\\" This is not needed:\\nThanks for pointing it out, we will delete it.\\n\\n## L151 unitary 3D Gaussians representation: Also, typo: \\\"3D Gaussians representation\\\" -> \\\"3D Gaussian representation\\\".\\n\\nUnitary 3D Gaussian representation means we define a unique set of 3D Gaussians in the world space no matter how many input views are given. By contrast, previous methods predict one set of 3D Gaussians in camera space for each input view, so there will be multiple sets of 3D Gaussians given multiple input views, and then merge them together in the world space to get the final output 3D Gaussians. 
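As a concrete picture of the per-view pipeline being contrasted here, a minimal sketch of the camera-to-world merge is given below (illustrative only; the function and variable names are hypothetical and are not taken from LGM, Splatter Image, or any released codebase):

```python
# Illustrative sketch of how per-view methods assemble their output:
# each view predicts Gaussian centers in its own camera space, the centers
# are mapped to world space with the camera-to-world pose, and the per-view
# sets are simply concatenated. Any per-view depth error therefore survives
# the merge as misaligned duplicates in world space.
import numpy as np

def merge_per_view_gaussians(centers_cam: list, cam_to_world: list) -> np.ndarray:
    merged = []
    for xyz, T in zip(centers_cam, cam_to_world):      # xyz: (N, 3), T: (4, 4)
        xyz_h = np.concatenate([xyz, np.ones((len(xyz), 1))], axis=1)  # homogeneous coords
        merged.append((xyz_h @ T.T)[:, :3])            # per-view points in world space
    return np.concatenate(merged, axis=0)              # naive point concatenation
```

Because each `xyz` block comes from an independent per-view depth prediction, nothing in this merge forces the blocks to agree, which is the misalignment visualized in the attached point-cloud videos.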
We will fix the typo.\\n\\n## L158 \\\"Spatially efficient self-attention\\\":\\nThanks for pointing this out, we miss the reference to Section 3.2.3 and will add it in the revised version.\\nSpatially efficient self-attention means to do self-attention in a memory efficient way by sampling part of 3D Gaussians as keys and values, and we detailed this part in the Section 3.2.3.\\n\\n[1] Stanislaw Szymanowicz, et al., Splatter Image: Ultra-Fast Single-View 3D Reconstruction.CVPR2024.\\n\\n[2] Zi-Xin Zou, et al., Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers. CVPR2024.\\n\\n[3] Jiaxiang Tang, et al., LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV2024.\"}", "{\"comment\": \"Thank you very much for your thoughtful comments on our work. We greatly appreciate your feedback. We summaries the main concerns in weaknesses1 and 2 and response them in the following.\\nWe agree that these related methods should be mentioned in our paper to position our paper in the field of 3D reconstruction. We discuss each method below and have added them in the related work part in the revised PDF. \\nWe will add the quantitative comparison with those that open-sourced codes are available as soon as possible. Should you have any further questions or concerns, please do not hesitate to reach out to us.\\n\\n# Weakness\\nWe summaries the main concerns in weaknesses1 and 2 and response them in the following. For GS-LRM and GRM, by merely reading their papers, both of them claim that they use a scalable large transformer with the similar pixel-aligned structure and are trained with many A100 GPUs but it is not clear what are the key components that has made their model work well. \\nMoreover, the view inconsistent problem theoretically exists in all those methods that first predict 3D Gaussians in each camera space and then naively merge them in world space as the depth of the predicted 3D Gaussians in each view would always have errors, which will inevitably lead to the misaligned 3D Gaussians merged in the world space. The problem is clearly illustrated in the attached video inconsistentpc.mp4.\\n\\nHowever, we are not able to fairly compare our method with them as they have not released the codes. And their quantitative results are computed not following the same dataset or settings with previous methods, making them hard to be fairly compared quantitatively. For MVSplat and pixelSplat, we are now working on comparing them and will release the comparison results as soon as possible. For One-2-3-45++, it is a generative model, suffering longer inference time that take 20 seconds to 1 minute for one generation while fast-forward methods only takes around 1 second. As for Mesh-LRM and MeshFormer, they use representations like NeRF and voxel, which have disadvantages in the aspect of computational efficiency, especially for rendering. Moreover, MeshFormer requires 3D supervision while Mesh-LRM utilize triplane, who compresses 3D space, leading to a lack of detailed information in the 3D structure and imposing a rigid grid alignment that limits flexibility ([1][2]). TripoSR is a single image 3D reconstruction model, which uses an encoder-decoder structure with triplane decoder and triplane-based NeRF, and it also has the limitations of NeRF and Triplane as discussed earlier. 
Its inference time is 3.29s for a single forward process (without mesh extracting) and rendering time for 22.73s, which is much slower than 3D GS-based models. We added the discussion of the mentioned methods in the related work part of the revised version and will add the comparison with MVSplat and pixelSplat as soon as possible.\\n\\n# Questions1:\\nThe main contribution of our method is proposing a new unitary 3D Gaussian modeling approach for multi-view 3D reconstruction that can avoid the view inconsistent problem. This new design can also be adopted by other mentioned 3D GS-based methods. We will discuss more about the mentioned prior works in the revised PDF.\\n\\n[1] Jiaxiang Tang, et al., LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV2024.\\n\\n[2] Charles R Qi, et al., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR 2017a\"}", "{\"comment\": \"Thank you for your reply. We offer additional clarification addressing your concerns as follows:\\n\\n# For the concern about more elaboration and explanation regarding the severity of the inconsistency\\nTo present the severity of the inconsistency caused by per-pixel Gaussian prediction, we have added more visualized results in the supplementary materials (Figure 16, Figure 21). We can observe that 3D Gaussians from different views (in different colors) are misaligned. On the other hand, we have tested most of the high impact open-sourced per-pixel methods (including LGM, Splatter Image, PixelSplat, MVSplat) on the GSO datasets and visualized their 3D Gaussian centers in different colors for each view. As shown in the video inconsistentpc.mp4, masked\\\\_point\\\\_cloud.mp4, and the videos in MVSplat\\\\_results folder, it is evident that the Gaussians depicted from different views in different colors do not align when representing the same object segment. This misalignment leads to blurred final rendering outcomes and can even produce 'ghosting' artifacts, as demonstrated by the presence of two shoes in the rendered novel views in Figure 16. This inconsistency, characterized by such 'ghosting' artifacts, is widespread and notably observed in Figures 4 and 21 as well. As for GRM and GS-LRM, we have not found obvious problems from the results in their websites, but we think they should be considered more as concurrent works and they have not released their codes for fair comparison. Therefore, we may generally conclude that almost all existing per-pixel methods have the inconsistency issue.\\n\\n# For the concern about the consistency between the coarse stage and the refinement stage\\nFor the concern about the consistency between the coarse stage and the refinement stage, we use an example to illustrate. For example, we have 4 input views, named $V_1, V_2, V_3, V_4$ and their corresponding camera parameters $\\\\pi_1, \\\\pi_2, \\\\pi_3, \\\\pi_4$. Without loss of generality, we use $V_1$ as the input of the coarse stage and use its camera space as the world space. We then transform all camera parameters to the world space and get the new camera parameters $\\\\pi_{1}', \\\\pi_{2}', \\\\pi_{3}', \\\\pi_{4}'$. Here, $\\\\pi_{1}'$ is the identity matrix because we define the camera space of $V_1$ as world space. Then, we generate $N$ 3D Gaussians $G_{init}$ from $V_1$ under the world space from the coarse stage and $G_{init}$ as the initialization of our refinement stage. 
After that, we project $G_{init}$ onto all the 4 views with the new camera parameters $\\\\pi_{1}', \\\\pi_{2}', \\\\pi_{3}', \\\\pi_{4}'$ and gather the information to update $G_{init}$ layer by layer. Throughout the entire process, a single set of 3D Gaussians is defined in world space, utilized in both the coarse and refinement stages. All the input views contribute to this singular set of 3D Gaussians in both stages. Consequently, since the same set of 3D Gaussians is maintained across both the coarse and refinement stages, i.e., we do not have separate predictions for the coarse stage and refinement stage, the inherent consistency of this shared representation precludes the possibility of introducing any inconsistencies.\"}", "{\"summary\": \"This paper introduces UniG, a novel 3D reconstruction and novel view synthesis model that creates high-fidelity 3D Gaussian Splatting from sparse images while maintaining view consistency. To tackle the view inconsistency issue in traditional 3D Gaussian-based methods which directly regressing Gaussians per-pixel for each view, the authors proposed to employ a DETR-like framework that uses 3D Gaussians as decoder queries, refining their parameters through multi-view cross-attention (MVDFA) across input images. This design allows for an arbitrary number of input images without causing a memory explosion, as the number of 3D Gaussians used as queries is independent of the input views. Comprehensive experiments demonstrate UniG's superiority over existing methods in terms of quantitatively and qualitatively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. he pipeline in this submission is technically sound and is clearly written and well organized.\\n\\n2. The authors, drawing inspiration from DETR, propose a Gaussian-based 3D reconstruction and novel view synthesis approach which can achieve SOTA performance. Extensive experiments have validated the model's effectiveness and outstanding performance.\\n\\n3. For the comparison, the numerical results show a significant performance improvement over the baseline method in GSO data. And for the ablation study, the authors show the importance of some designs like the coarse stage initialization and refinement.\", \"weaknesses\": \"1. Overall, the structure of this paper resembles a multi-view version of a combination between TriplaneGaussian and Instant3D and the importance of the MVDFA module and the two stages is not very convincing.\\n\\n2. Although the authors have proposed the MVDFA module to integrate coarse and refine information, attempting to address the inconsistency issue of LGM when predicting from each perspective. However, aside from Figure 1, there are no more images demonstrating the severity of this inconsistency. Additionally, what would be the result by using a simple mask on the LGM or Splatter Image prediction.\\n\\n3. A minor thing is that in Table 2, Splatter Image appears to show promising performance, similar to the results of the coarse stage proposed in the paper, but there is a lack of visual comparison with it.\\n\\n[1] Zi-Xin Zou, et al., Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers, CVPR 2024\\n\\n[2] Jiahao Li, et al., Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model, ICLR 2024\", \"questions\": \"1. 
In Figure 2, why does the feature extractor of the coarse network output 3D GS, while the same module in the refinement network outputs multiview feature maps?\\n\\n2. In line 152, the paper states that \\\"during the coarse stage, one or more images are randomly selected.\\\" This raises questions about whether a frontal view image is necessary at the coarse stage, or if an arbitrary view would suffice? If an arbitrary view is acceptable, how is the correctness of the coarse output ensured? Furthermore, the paper lacks an ablation study on the number of images used during the coarse stage.\\n\\n3. In the refinement stage, the 3D GS output by the coarse network are used as input for the MVDFA module. However, since the coarse model only uses a single view image as input, the 3D GS generated during the coarse stage may not align with the structure of the four-view input in the refinement stage. This raises the question of whether this discrepancy could impact the MVDFA model? In other words, how can we address the consistency issue between the 3D GS from the coarse stage and the multi-view images in the refinement stage?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"1. In Figure 2, why does the feature extractor of the coarse network output 3D GS, while the same module in the refinement network outputs multiview feature maps?\\n\\n2. In line 152, the paper states that \\\"during the coarse stage, one or more images are randomly selected.\\\" This raises questions about whether a frontal view image is necessary at the coarse stage, or if an arbitrary view would suffice? If an arbitrary view is acceptable, how is the correctness of the coarse output ensured? Furthermore, the paper lacks an ablation study on the number of images used during the coarse stage.\\n\\n3. In the refinement stage, the 3D GS output by the coarse network are used as input for the MVDFA module. However, since the coarse model only uses a single view image as input, the 3D GS generated during the coarse stage may not align with the structure of the four-view input in the refinement stage. This raises the question of whether this discrepancy could impact the MVDFA model? In other words, how can we address the consistency issue between the 3D GS from the coarse stage and the multi-view images in the refinement stage?\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\\n\\nMany thanks for your reviews on submissions of ICLR 2025. Could you please read the authors' rebuttals and give your replies?\\n\\nBest wishes,\\n\\nAC\"}", "{\"summary\": \"The paper proposes a method for 3D object reconstruction (with 3D Gaussians) by employing a novel encoder-decoder framework. The method operates in a two-stage process: (a) A coarse initialization provides initial 3D Gaussians leveraging only a small subset of the input images. (b) In the refinement stage, the Gaussians are optimized using multi-view deformable attention and spatially efficient self-modules. The final 3D Gaussians are obtained by updating initial estimates based on the multi-view features.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The experimental results seem particularly strong with large improvements over the baselines.\"], \"weaknesses\": [\"The introduction and the first part of the paper, in general, are badly written. 
It gives the feeling that it was blindly polished by an LLM that had no idea about the topic. There are many unexplained concepts or ones that seemingly mean nothing. It uses fancy words and expressions that ultimately mean nothing and only make it harder to understand what is going on in the paper. I detail this in the minor comments section.\", \"The title is misleading. What the authors do is 3D Object Reconstruction and not general 3D Reconstruction. I also have a hard time accepting 3DGS as a reconstruction method since, to me, reconstruction would involve estimating the camera parameters as well (which is considered given here). However, this is only my concern, and I am fine if the authors go with \\\"3D Object Reconstruction\\\".\", \"Missing comparison to other sparse-view methods, e.g., MVSplat and pixelSplat. The authors propose a sparse-view pipeline so it would be fair to compare with other similar methods. I know that MVSplat and pixelSplat reconstruct the entire scene while the proposed method only has an object. However, they should still be compared as I see no fundamental limitation that would prevent MVSplat/pixelSplat from being applied here.\"], \"the_most_important_comment_here_from_my_side_is\": \"(a) There are missing comparisons that should be added. (b) The introduction and beginning of the paper should be rewritten. (c) The title and narrative should be changed a bit.\", \"minor_comments\": [\"L040 \\\"view inconsistency\\\" I am a bit unsure what this means here. I suggest the authors explain clearly here what this issue is as they build the rest of their introduction on this.\", \"L044 \\\"view-specific camera space\\\" What is a view-specific camera space?\", \"L045 \\\"These Gaussians are then converted to world space\\\". This sentence makes no sense.\", \"L049 This paragraph is full of abbreviations without any explanation for them. The authors state that they use a DETR-like Transformer and they are inspired by DIG3D and TAPTR, but this says nothing without having to read all these papers. This is not a good way of writing an introduction. The authors should make sure that the general concepts and ideas are understandable just from reading the text they provide.\", \"L070 \\\"More specifically, MVDFA utilize camera modulation techniques (Karras et al., 2019; Hong et al., 2024) to diversify queries based on views.\\\" Similary as before, I have no idea what camera modulation techniques are, the authors don't even provide a single example. I had to open the cited papers and read about it, which really does break the flow of reading the introduction.\", \"L074 \\\"our model prioritizes multi-view distinctions to achieve a more precise 3D representation.\\\" - Again, what does \\\"multi-view distinction\\\" mean? This entire thing sounds like it was polished by an LLM that had no idea what really happens.\", \"L140 \\\"In total\\\" This is not needed.\", \"L151 The authors say that they use a \\\"unitary 3D Gaussians representation\\\" but they never explain what such a representation is. 
Also, typo: \\\"3D Gaussians representation\\\" -> \\\"3D Gaussian representation\\\"\", \"L158 \\\"Spatially efficient self-attention\\\" -> What is a spatially efficient self-attention?\"], \"questions\": \"See in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 9662,\\n\\nWe want to express our sincere gratitude for your insightful suggestions which are instrumental in enhancing the quality of our work. We hope that our proposed modifications would have addressed your concerns about the clarity of our presentation. We would really appreciate it if you could let us know if there are any further questions or aspects of the paper that require additional clarification. Thank you once again for your time and consideration.\"}", "{\"comment\": \"Dear Reviewer 9662,\\n\\nThank you for dedicating your time and providing feedback on our work. We have tried our best to address the concerns you previously raised by providing additional explanations or conducting further experiments, and we have rectified writing issues.\\nWe kindly seek your thoughtful reconsideration for a potential score increase, taking into account of the revisions made based on your invaluable feedback if you have no further concerns. Your time and insights are sincerely valued. If you have further questions, we are also pleased to answer you in the rest discussion period.\"}", "{\"summary\": \"The paper introduces a feed-forward method for 3D object reconstruction from sparse input views, utilizing 3D Gaussian Splatting (3D GS) as its scene representation. This method predicts a unified set of 3D Gaussians from multiple input images rather than generating per-view, pixel-aligned Gaussians as in previous approaches. This approach utilizes a DETR-like transformer framework, treating 3D Gaussians as decoder queries and updating their parameters through multi-view cross-attention layers. The results on several benchmark datasets demonstrate promising quality and better than some previous pixel-aligned approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"A key strength of this paper lies in its application of a DETR-like transformer framework to predict an independent unitary set of 3D Gaussians for 3D reconstruction. This method shows promise in applying such a network architecture to solve the sparse-view reconstruction problem, leading to promising results.\", \"weaknesses\": [\"1. There are multiple highly relevant prior works that are neither cited nor discussed in the paper. 
This includes works on per-pixel Gaussian splatting prediction, such as\", \"GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting\", \"GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation\", \"pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction\", \"MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images\", \"Additionally, other works on sparse-view 3D reconstruction and generation are also absent, including\", \"Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model\", \"One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion\", \"MeshLRM: Large Reconstruction Model for High-Quality Mesh\", \"MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model\", \"TripoSR: Fast 3D Object Reconstruction from a Single Image.\", \"Without proper citation and discussion of these papers, I am highly concerned about the positioning of the proposed approach.\", \"2. The paper criticizes the design of per-pixel Gaussian prediction in previous works (LGM, MVGamba, etc), attributing it to the low quality and view inconsistency issues. However, this argument overlooks the success of other per-pixel methods listed above, such as GS-LRM and GRM, which demonstrate high-quality, view-consistent results, which, to me, look even more visually realistic than the results shown in the paper. The paper compares primarily against weaker baselines like LGM (or others with similar or lower quality), showing improvements over LGM. However, I've seen many existing papers that have demonstrated significantly better quality than LGM, including GS-LRM, GRM, Mesh-LRM, MeshFormer, as listed above. In general, the paper fails to compare with or even discuss these stronger, state-of-the-art methods. In particular, GS-LRM and GRM also employ per-pixel strategies yet seem to achieve even greater improvements when seeing their results in their paper and website. This kind of suggests that the design choice of per-pixel prediction may not be the main issue with the baselines (like LGM) discussed in the paper, and that there could be the other architectural factors in those models that led to the lower quality. As a result, the paper's argument for its method being inherently superior to per-pixel techniques is less convincing.\"], \"questions\": \"While I think the unitary Gaussian prediction technique in the paper is promising, I am really concerned about the paper's positioning due to the absence of numerous relevant prior works. I understand that direct comparisons can be challenging, especially given that many of these previous works have not released code. However, at minimum, a thorough discussion is needed to place this work in context, and any feasible comparisons would significantly strengthen the paper. I feel, at least, the paper needs to tone down and moderate its claim by incorporating and discussing all these relevant works properly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you agian for your positive feedback and kind support!\\nWe sincerely appreciate your recognition of our efforts and contribution.\"}", "{\"comment\": \"Thank you very much for your thoughtful comments on our work. We greatly appreciate your feedback. We will address each of your concerns individually as outlined below. 
For those that require additional experiments, we will upload the results as soon as possible. Should you have any further questions or concerns, please do not hesitate to reach out to us.\n\n# For Weaknesses1:\nOur method differs from Triplane-Gaussian in the following aspects. First, Triplane-Gaussian is a single-view reconstruction method, so it does not need to consider the multi-view information fusion problem, while our method targets reconstructing a unitary set of 3D Gaussians from an arbitrary number of input views, resolving the view inconsistency problem that only arises in the multi-view input setting. Second, Triplane-Gaussian adopts the triplane representation, which leads to a lack of detailed information in the 3D structure and imposes a rigid grid alignment that limits flexibility (LGM [1], PointNet [2]). Third, Triplane-Gaussian requires 3D supervision to achieve good performance, while our method only requires multi-view 2D images. Even so, our method still outperforms Triplane-Gaussian given a single-view image as input, as shown in Table 5 in the revised version. We also provide the table here.\n\n**Table: Quantitative results trained on Objaverse LVIS and tested on GSO. 3D sup. means 3D supervision is required.**\n\n| Method | PSNR \u2191 | SSIM \u2191 | LPIPS \u2193 | 3D sup. | Inference time |\n|----------------------------------|----------|--------|---------|---------|----------------|\n| Triplane-Gaussian | 18.61 | 0.853 | 0.159 | \u2714 | 1.906 |\n| Ours | **23.45**| **0.897** | **0.093** | \u2718 | **0.476** |\n\nAs for Instant3D, it also utilizes the triplane and applies NeRF to represent the reconstructed object. The NeRF representation is fundamentally different from 3D Gaussians and cannot be naively replaced. Therefore, our method is essentially different from the mentioned methods. MVDFA is designed to save memory usage and training and inference time, so that we can use a larger number of 3D Gaussians to represent an object. The two-stage framework is also important because we empirically found that a good initialization for the 3D Gaussians makes a big difference, as shown in Table 4 and Table 10 in our paper.\n\n# For Weaknesses2:\nAs shown in Figure 7 in the revised PDF, we visualize the predicted 3D Gaussian centers of LGM and Splatter Image and paint those from different views with different colors (a minimal sketch of this per-view coloring check is given below). From the figure, the misalignment of 3D Gaussians from different views can be clearly seen, which is what we call \"view inconsistency\", while our method shares a unitary set of 3D Gaussians across all views, so there is no such severe view inconsistency problem. Moreover, Figure 4 also shows a view inconsistency example where LGM predicts misaligned objects from different views.\n\nWhen masks are applied to remove backgrounds, as shown in Figure 16 and the attached video (masked\\_point\\_cloud.mp4), the misaligned 3D Gaussians can be alleviated to some extent, but the problem still exists. The corresponding quantitative results are presented in Table 6. Removing background points leads to fewer outliers and better rendering results, but obvious artifacts can still be observed. 
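(Illustrative sketch only, not the script actually used for Figure 7: one way to produce such a per-view-colored point cloud, assuming the predicted Gaussian centers are available as one (N, 3) array per input view, already expressed in world space. The function and file names here are hypothetical.)

```python
# Hypothetical helper: color 3D Gaussian centers by the input view they were
# predicted from and export them as a single point cloud, so cross-view
# misalignment becomes visible when inspected in a viewer such as MeshLab.
import numpy as np

def export_colored_centers(centers_per_view, out_path="centers_by_view.ply"):
    # centers_per_view: list of (N_i, 3) arrays, one per input view, assumed in world space.
    palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0]], dtype=np.uint8)
    points, colors = [], []
    for i, pts in enumerate(centers_per_view):
        pts = np.asarray(pts, dtype=np.float32)
        points.append(pts)
        colors.append(np.tile(palette[i % len(palette)], (len(pts), 1)))  # one color per view
    points = np.concatenate(points, axis=0)
    colors = np.concatenate(colors, axis=0)
    with open(out_path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for p, c in zip(points, colors):
            f.write(f"{p[0]} {p[1]} {p[2]} {c[0]} {c[1]} {c[2]}\n")
```

If the per-view predictions are well aligned, differently colored points should overlap on the same surfaces; misalignment shows up as offset, differently colored copies of the same geometry.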
The table is also given here: \n\n**Table: Comparison between masked and original pixel-aligned methods**\n\n| Method | PSNR \u2191 | SSIM \u2191 | LPIPS \u2193 |\n|-----------------------------|--------|--------|---------|\n| LGM | 17.4810| 0.7829 | 0.2180 |\n| LGM (masked) | 21.6008| 0.8608 | 0.1232 |\n| Splatter Image | 25.6241| 0.9151 | 0.1517 |\n| Splatter Image (masked) | 25.0648| 0.9147 | 0.1684 |\n\n# For Weaknesses3:\nThe coarse stage shares a similar structure with the previous method (Splatter Image [3]); its function is to provide a coarse initialization for the refinement stage and to avoid convergence difficulties. The results in Table 2 and Table 4 use different validation datasets, so their numerical values cannot be directly compared. The visualized comparison of Splatter Image and the coarse stage of our model is shown in Figure 15. From the figure, we can see that the output of the coarse stage is not as good as Splatter Image, but with the refinement stage, the final output outperforms Splatter Image.\n\n[1] Jiaxiang Tang, et al., LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV2024.\n\n[2] Charles R Qi, et al., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. CVPR2017.\n\n[3] Stanislaw Szymanowicz, et al., Splatter Image: Ultra-Fast Single-View 3D Reconstruction. CVPR2024.\"}", "{\"summary\": \"The paper introduces UniG, a new 3D reconstruction and novel view synthesis model leveraging unitary 3D Gaussians for view-consistent 3D scene representation from sparse posed image inputs. Existing 3D Gaussian-based methods usually regress per-pixel 3D Gaussians for each view, creating 3D Gaussians for each view separately and merging them through point concatenation. Such view-independent reconstruction often results in a view inconsistency issue. UniG addresses view inconsistency in existing methods with a DETR (DEtection TRansformer)-like framework, treating 3D Gaussians as decoder queries updated layer by layer\\nby performing multi-view cross-attention over multiple input images. This design allows UniG to maintain a single 3D Gaussian set, supporting arbitrary input views without memory expansion and ensuring consistency across views.\\nExperiments validate UniG's superior performance in 3D reconstruction on the Objaverse and GSO datasets, achieving better results than the selected baselines qualitatively and quantitatively.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The motivation is sound and clear.\\n2. The proposed methods demonstrate improved performance in 3D reconstruction on the Objaverse and GSO datasets, achieving better results both qualitatively and quantitatively compared to selected baselines.\\n3. 
The proposed methods exhibit scalability with arbitrary views: despite being trained on a fixed number of views, UniG can handle an arbitrary number of input views without a significant increase in memory usage.\n4. The paper presents adequate experiments and provides a thorough ablation of the design choices in the methods.\", \"weaknesses\": \"1. The visual results are blurry and have obvious artifacts. The resolution is low (no larger than 512).\n2. In some cases, the improvements are not obvious compared with previous methods; as shown in Table 2, the PSNR improvement is only ~0.5dB.\", \"questions\": \"1. In Fig. 5, the authors demonstrate that PSNR performance improves as the number of input views increases (from 2 to 8). I am curious about the effect of continuing to increase the number of input views beyond 8. Will performance decline after reaching this point? We can observe that the performance gain diminishes when the input view count increases from 6 to 8, suggesting a potential decline in performance if this number exceeds 8.\n\n2. In line 374, the authors state that \"previous methods rely on fixed views as input,\" which leads to a performance drop when random input views are used. By comparing Tables 1 and 2, it appears that this method also experiences a notable performance decline (a reduction of approximately 4 dB in PSNR) with random inputs. Interestingly, however, the baseline method Splatter Image does not show this performance drop (its PSNR increases slightly from 25.6 to 25.8). This suggests that Splatter Image demonstrates superior generalization regarding input view pose distribution compared to this method. I am interested in the authors\u2019 explanation for this difference.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' continued efforts to address concerns regarding the writing and positioning of the paper.\n\nWhile I still have some disagreements with the current positioning, I think that my major concerns in this regard have been mostly addressed. It is also helpful that the authors clarified the use of substantially fewer Gaussians compared to prior work. While this design choice benefits efficiency, it likely contributes to the visual quality being somewhat inferior to GS-LRM. In my view, if the current method cannot scale to support significantly more Gaussians, this could represent a limitation, particularly for its potential extension to scene-level reconstructions requiring a larger number of Gaussians. That said, I consider this a minor point.\n\nOn the other hand, I find the newly posted quantitative results disappointing and problematic. When comparing your method with pixelSplat or MVSplat, I would expect you to either train your model on their training dataset or re-train their models on your data to ensure a fair comparison, but it looks like this is not what is done here. Objaverse and RealEstate10K are fundamentally different datasets\u2014one focusing on object assets and the other on real-world scenes\u2014so it is unsurprising that models trained on one dataset fail to generalize well to the other. As a result, the new experiment does not provide meaningful insights. 
I suggest either removing the experiment or redoing it properly (though the latter may not be feasible within the current timeline).\n\nI usually hate raising new concerns at the last moment, and I should have checked your response earlier. But I really feel including these results in their current state would be misleading and reduce the overall quality of the paper.\"}", "{\"comment\": \"Dear Reviewer t736,\n\nThank you for dedicating your time and providing feedback on our work. We have modified our paper according to your new suggestions and will update them in the camera-ready version if the paper is accepted. The experiment of training MVSplat on the Objaverse dataset is still running, and we will present the results as soon as possible (within the discussion period) once it converges. As we approach the end of the discussion period, we would really like to know whether you still have other concerns or questions, so that we can use the remaining time to address them. Your reply is very important to us, and we are looking forward to it.\"}", "{\"comment\": \"Thanks for your feedback and we really appreciate it.\n\nWe also checked the results of GRM and GS-LRM provided on their websites, and did not see obvious view-inconsistent reconstructions either. Thus, we agree that using the word ``all\" may not be appropriate and we will avoid using it in our main paper. However, negating our analysis based only on the good examples they present is not reasonable either, as it cannot be concluded from those examples that the problem does not exist. We are not able to analyze more cases (especially bad cases) of these methods because they have not released their code, but we tested other open-sourced per-view Gaussian methods, including pixelSplat [3] and MVSplat [4], where we also observe the obvious view inconsistency problem, as shown in the supplementary video mv\\_splat.mp4. (Due to the supplementary size limitation, we only present one example, while we actually observed other examples with the same problem.) Furthermore, we are confident that the simple 3D Gaussian merging of per-view methods is very likely to lead to misaligned 3D Gaussians (i.e., view inconsistency), because predicting the z-axis (depth) of 3D Gaussians in each view is an ill-posed problem whose error cannot be totally avoided. The problem may be alleviated to some extent with proper gradient-based training, but it can never be fully avoided as long as the simple 3D Gaussian merging is still there. In some cases, the issue may not be observed that severely, but it may still exist. In contrast, although our method is also not perfect, our proposed unitary modeling method avoids the ill-posed single-view z-axis (depth) prediction and the step of simple 3D Gaussian merging by design. Moreover, our method has cross-view information aggregation to comprehensively determine the update of the unitary 3D Gaussians in the refinement stage. Both qualitative and quantitative results validate the superiority of our method.\n\nIn the single-image setting, our method already achieves a notable PSNR of 21.74 at the coarse stage. Following multi-layer refinement in the subsequent stage, the PSNR surpasses 23. We add visualization results in the revised PDF, showcased in Figure 19, illustrating input views and 360-degree renderings.\n\nThe train and test details are as follows. For both training and testing phases, we conduct experiments at a resolution of 128. 
Consistent with our initial paper, we train on the Objaverse LVIS dataset and test on the GSO dataset. Notably, our model is trained on a fixed number of views and can take any number of views as input during inference. Testing aligns with the strategy outlined in Figure 5, where the model trains on 4 views and is subsequently tested with a single view, evaluating against the remaining 24 views. We use a 4-layer decoder in the refinement stage, with each layer regressing a refinement of the Gaussian parameters.\n\nSingle-image approaches like InstantMesh [1] and MeshFormer [2] require existing diffusion models to generate multi-view images from a single image, then derive the final reconstruction results in a multi-view reconstruction manner. This pipeline can yield visually appealing visualizations, but the PSNR values tend to be lower due to distortions introduced by generative models. Our presented results do not follow this setting. The results following this setting are shown in Figure 11 and Table 8 of the main paper, where the PSNR of our method is 22.35.\n\n[1] Xu, et al., InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models. arXiv2404\n[2] Liu, et al., MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model. NeurIPS 2024\n[3] Charatan, et al., pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction. CVPR2024\n[4] Chen, MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images. ECCV2025\"}", "{\"comment\": \"Dear Reviewer UhUJ,\n\nThank you for dedicating your time and providing feedback on our work. We have tried our best to address the concerns you previously raised by providing additional explanations or conducting further experiments, and we have rectified the writing issues.\nWe kindly seek your thoughtful reconsideration for a potential score increase, taking into account the revisions made based on your invaluable feedback, if you have no further concerns. Your time and insights are sincerely valued. If you have further questions, we are also pleased to answer them in the remaining discussion period.\"}", "{\"comment\": \"Thanks for your feedback and for recognizing our efforts.\n\nAfter reading your comments, we agree that we are not able to strictly prove the existence of the issue, although we may logically infer that it probably exists, so we modified our paper to avoid the expression 'theoretical problem'.\n\nWe know that other methods also incorporate multi-view information, but they do this mainly in the early feature-extraction stage by simply concatenating all image tokens for self-attention, as in GS-LRM. In this way, the geometric relationships among input views are mainly learned in a black-box manner, which may increase the learning difficulty (GS-LRM trained for 2 days on 64 A100 GPUs, while in our case the model trained for 3 days on 8 A100 GPUs). \nAfter that, they predict 3D Gaussians in each view's camera space separately, without further cross-view interaction, and finally naively merge the multiple sets of 3D Gaussians in the world space (a minimal sketch of this merging step is given below). Thus, our statement 'predict depth independently' was meant to underscore that they separately predict multiple sets of 3D Gaussians in each input view's camera space rather than a unitary set. We have modified the corresponding statements in our main paper to avoid misunderstanding. 
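(Illustrative sketch, not taken from any of the papers under discussion: the "predict per view, then simply merge" step described above, assuming each view's Gaussian centers and a camera-to-world extrinsic matrix are given. Names and shapes are hypothetical.)

```python
# Minimal sketch of the per-view merging pipeline being discussed: Gaussians are
# predicted in each view's camera frame, moved to world space with that view's
# camera-to-world extrinsics, and concatenated. Nothing in this step reconciles
# depth across views, so any per-view depth error survives the merge and can
# appear as misaligned duplicates of the same surface.
import numpy as np

def merge_per_view_gaussians(centers_cam, cam_to_world):
    # centers_cam: list of (N_i, 3) Gaussian centers, one array per view, in camera coordinates.
    # cam_to_world: list of (4, 4) camera-to-world extrinsic matrices, one per view.
    merged = []
    for pts, T in zip(centers_cam, cam_to_world):
        pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # homogeneous coordinates
        merged.append((pts_h @ T.T)[:, :3])                            # rotate + translate to world space
    return np.concatenate(merged, axis=0)  # naive union of all per-view predictions
```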
In this point, the main difference of our method comparing to previous methods is that, in our method, all views collectively contribute to a unitary set of 3D Gaussians, instead of predicting multiple sets separately and then merging them. By projecting the unitary 3D Gaussian centers onto each view, the cross-view interaction module of our method incorporates the explicit geometry relationship between 3D Gaussians and 2D images to facilitate the learning of multi-view feature fusion. We have made the adjustment in our related work section and present the revised version in the following.\\n\\n# Revised version in the related work:\\n\\\"Various techniques such as SplatterImage, LGM, pixelSplat, and MVSplat have extended the application of 3D Gaussian Splatting to multi-view scenarios. \\nIn these approaches, each input view is processed to estimate 3D Gaussians specific to the view, followed by a simple concatenation of the resulting 3D Gaussian assets from all views. GS-LRM and GRM exhibit a model structure similar to LGM, resulting in notable accomplishments through enhanced training processes and consequently more precise depth regression. Nevertheless, these models adhere to the pipeline of predicting 3D Gaussians separately for each view, they demands substantial computational resources, particularly as the number of views grows, the number of Gaussians scales linearly with the number of views. Furthermore, these methods are unable to accommodate an arbitrary number of views as input.\\\"\\n\\n# More results:\\nAs for the concern of the quality, we presented more separate videos for you to check (Supplementary separate\\\\_videos folder). We also provide a new visualization with resolution 512 in the revised PDF Appendix Figure 20. Compared with the results provided by GS-LRM in their website, our results are not that good but comparable, and are better than other baseline methods. We will try to figure out the behind reason of GS-LRM's good results and add the corresponding analysis as they release their codes. Given the description in the paper of GS-LRM, supposing they use the image resolution of $512\\\\times512$, then there will be $512\\\\times512\\\\times4=1048576$ 3D Gaussians to reconstruct a single object in the 4-view setting while our method only use a fix number of $19600$ 3D Gaussians in consideration of the balance of computational resources for training and inference.\"}", "{\"metareview\": \"The paper presents a view-consistent 3D reconstruction and novel view synthesis model using 3D Gaussians representation from sparse images. The main task is to solve view inconsistency from multiple images.\\n\\nIt must be that directly merging 3D Gaussians through point concatenation is not good. The general way to make 3D reconstruction from multiple images is to transform different camera coordinate systems into one coordinate system and then to make optimizations. No people directly make concatenation of different 3D results by each separate image. Therefore, the motivation is not a good idea. The paper needs rewriting to give importance for the proposed method. \\n\\nMoreover, in spite of no comparison with GS-LRM and GRM, the comparisons with PixelSplat and MVSplat are not reasonable. PixelSplat and MVSplat are both scene reconstruction methods, not object-level reconstruction methods. Therefore, comparing them with object-level reconstruction methods would be unfair, even if trained on object-level datasets. 
Thus, outperforming them on object-level datasets does not necessarily demonstrate the superiority of the method.\\n\\n Also, the method can only support a relatively small number of Gaussians. This may restrict its scalability.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer 4FKX raised visual result problem and unobvious improvements compared with previous methods. The authors made new visualization by new resolution of 512 and gave explanation on PSNR. Reviewer 4FKX had no responses.\\n\\nReviewer 9662 raised the importance of the MVDFA module and the two stages are not convincing, inconsistency motivation is not important, there lacks visual comparison. The authors gave the differences with Triplane Gaussian and Instant3D. For the inconsistency, they also used rigid transformations to unify different images. They also gave some visualization results. The reviewer understood the differences between this work and Triplane Gaussian and Instant3D. But, the reviewer still had a question on the inconsistency. \\n\\nReviewer t736 proposed multiple highly relevant prior references are not cited. In particular, it lacks the comparisons with GS-LRM and GRM. Furthermore, Reviewer t736 had questions about the paper positioning and result quality. The authors discussed and added some experiments that addressed some concerns of Reviewer t736. Reviewer t736 thought this discussion remained at an empirical level, not a theoretical one. The authors agreed that they were not able to prove the existence of the issue and modified the paper.\\n\\nIn the final discussions, the reviewers and AC thought there are still the problems: the result quality is not groundbreaking to be state-of-the-art, the positioning and motivation need reinvented, it is unclear to scale to larger scenes.\"}", "{\"comment\": \"We trained the MVSplat on the Objaverse dataset and tested it on the GSO dataset (consistent with the setting of ours). After around 100,000 iterations for training, it appeals to have converged (with little loss decrease and oscillating PSNR on the validation set). The quantitative results on the GSO dataset are shown in the below table, where it have inferior performance than ours no matter whether masking the 3D Gaussians corresponding the background pixels. We also visualized some results of 3D Gaussians centers and still observed obvious misaligned 3D Gaussians from different views (view inconsistency). We plan to update these results in Appendix A.5 and replace the Figure 21 and Table 12 with the new results. As we have modified our paper and added the new experiments following your suggestions, we sincerely hope you can reconsider your current rating on our work since your major concerns have been addressed and you also felt our work is on the bar for acceptance as you said in previous replies.\\n\\n## Comparison with MVSplat on the GSO-random dataset in the 4-view input setting\\n\\n| Method | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 | Rendering time |\\n|--------------------|----------|--------|---------|----------------|\\n| MVSplat | 23.06 | 0.90 | 0.13 | 0.0090 |\\n| MVSplat (masked) | 24.10 | 0.91 | 0.12 | 0.0045 |\\n| Ours | **26.30**| **0.93** | **0.08** | **0.0019** |\"}", "{\"comment\": \"Thank you for your reply, we think we can provide the results from MVSplat or pixelSplat within the rebuttal period. We are trying to test them on the GSO dataset to compare with our method. 
As they are initially designed for scene reconstruction, the datasets they used (RealEstate10K [1] and ACID [2]) has different camera system convention with the GSO dataset, so it may take some time to align them. We have successfully run their official codes on the dataset of RealEstate10K and visualized the centers of 3D Gaussians in each view, where we also observe the view inconsistent problem. The results are shown in the supplementary video mv\\\\_splat.mp4.\\n\\n[1] Zhou, et al., Stereo magnification: Learning view synthesis using multiplane images. ACM Trans. Graph. (Proc.\\nSIGGRAPH), 2018\\n\\n[2] Liu, et al., Infinite nature:\\nPerpetual view generation of natural scenes from a single image. ICCV 2021.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I appreciate the authors' response and the additional experiments provided.\\n\\nHowever, I'm not convinced that \\\"the view inconsistency problem theoretically exists\\\" in all per-pixel Gaussian prediction methods. While it is true that \\\"the depth of the predicted 3D Gaussians in each view would always have errors,\\\" the authors\\u2019 method will also introduce errors, albeit not view-aligned. In general, those per-view Gaussian methods are designed to learn to aggregate multi-view Gaussians in an end-to-end manner, which learns to fuse the depth and achieve consistency. Theoretically, as these methods minimize errors during training, their consistency and quality improve. In essence, both per-view Gaussian methods and the proposed unitary method rely on similar processes of network optimization and gradient descent to achieve consistency and rendering quality. Of course, the proposed method doesn't have a concept of \\\"view-\\\"consistent depth, but it has 3D errors of point locations, which might also lead to inconsistent renderings across novel viewpoints. So I cannot agree that this is a theoretical limitation. \\n\\nRegarding the inconsistencies shown in inconsistentpc.mp4 and other supplementary examples, I revisited the GRM and GS-LRM websites to verify their results. GS-LRM, for instance, provides a lot of ply files of their GS reconstruction shown in an interactive 3DGS viewer. I reviewed several examples in their viewer and also downloaded multiple ply files to inspect in Meshlab, finding their results to be of high quality and very consistent. I did not observe a similar level of inconsistencies shown in the LGS examples in the supp.\\n\\nOverall, I do not believe the inconsistency issue is a theoretical problem inherent to all per-view Gaussian methods. The evidence suggests that this issue may be unique to LGM or other baselines. To clarify, I do not hold a strong bias toward per-view Gaussian methods, and I am also disappointed about papers like GS-LRM not releasing their code to facilitate comparisons. I also really appreciate the authors' efforts to explore unitary GS prediction, but my main concern is the potential to mislead the community here. The paper currently is written like per-view Gaussian methods are inherently worse, which I do not see sufficient evidence to support. If the inconsistency problem is specific to LGM or other baselines, this should be clearly stated, and the issue should not be generalized to all per-view Gaussian methods.\\n\\n\\nOn the other hand, the new single-image reconstruction results are very impressive. However, I am surprised by the reported PSNR values, as they seem unusually high. 
To be more specific, I checked the numbers in the recent NeurIPS paper MeshFormer; the SOTA single-view reconstruction methods achieve PSNRs only around 21 but their quality already looks very good. Typically, the PSNR for single-image reconstruction is low because the unseen back side of an object introduces significant uncertainty, making it theoretically impossible to recover accurately. Deterministic models often produce blurry back-side renderings, while probabilistic models, such as diffusion models, tend to generate sharp but potentially mismatched back-side renderings. I personally even feel PSNR and other rendering metrics are not the best choice to evaluate single image reconstruction techniques because this is more of a generative task. But why can the proposed method lead to such a high PSNR over 23? Could the authors clarify how this experiment was conducted? Specifically, details on the resolution, training/testing view settings, and other relevant parameters would be helpful. Additionally, providing more visual examples, including input views and 360-degree renderings, would help illustrate what is happening in the reconstructions.\"}", "{\"comment\": \"Dear Reviewer t736,\\n\\nThank you for dedicating your time and providing feedback on our work. We have tried our best to address the concerns you previously raised by providing additional explanations or conducting further experiments, and we have rectified writing issues.\\nWe kindly seek your thoughtful reconsideration for a potential score increase, taking into account of the revisions made based on your invaluable feedback if you have no further concerns. Your time and insights are sincerely valued. If you have further questions, we are also pleased to answer you in the rest discussion period.\"}", "{\"comment\": \"Dear Reviewer 4FKX,\\n\\nFor weakness1, we now provide the updated visualization with resolution 512 in the revised PDF Appendix Figure 20. Should you have any further questions or concerns, please do not hesitate to reach out to us.\"}", "{\"comment\": \"Dear Reviewer 4FKX,\\n\\nThank you for dedicating your time and providing feedback on our work. We have presented new experiments and more explanations to address your concerns or questions. As it approaches the end of the discussion period, we really want to know do you still have other concerns or questions so that we can put efforts in the last day to solve them. Your reply is very important for us, and we are looking forward to it.\"}", "{\"comment\": \"# For Weaknesses2:\\nWe agree that \\\"3D recontruction\\\" in earlier literature usually include estimating camera parameters, but here we use \\\"3D reconstruction\\\" to denote the reconstruction of 3D Gaussians following recent 3D GS papers Splatter Image[1], Triplane-Gaussian[2]. We also discussed the camera pose problem in the limitation part in our paper.\\n\\n# For Weaknesses3:\\nThank you for pointing this out.\\nWe will add the comparison as soon as possible.\\n\\n[1] Stanislaw Szymanowicz, et al., Splatter Image: Ultra-Fast Single-View 3D Reconstruction.CVPR2024.\\n\\n[2] Zi-Xin Zou, et al., Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers. CVPR2024.\\n\\n[3] Jiaxiang Tang, et al., LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV2024.\"}", "{\"comment\": \"Thanks for the answers and additional experiments. 
I am happy with the paper.\"}", "{\"comment\": \"Similar to LGM, both pixelSplat [1] and MVSplat [2] follow a workflow that regress Gaussians from each view within the respective camera spaces and subsequently merge them in the world space. In pixelSplat, the integration of cross-view-aware features is through an epipolar Transformer, and it still suffers from inaccurate depth estimation. MVSplat adopts a design that incorporates a cost volume storing cross-view feature similarities for all possible depth and makes a more accurate depth prediction. However, they assign each pixel with a 3D Gaussian and thereby generates a planar representation rather than the object itself. In addition, MVSplat tends to obscure object details due to the occlusion by 3D Gaussians from other viewpoints, resulting in suboptimal outcomes. To address this issue, we mask the 3D Gaussians on background pixels to help it focus on rendering 3D Gaussians contributing to the object itself, noted as 'MVSplat (masked)' in the results.\\n\\nWe present the comparison to pixelSplat [1] and MVSplat [2] in Appendix A.5 and the quantitative results on the GSO-random dataset is shown in Table 12. From the table, we can see that their results is significantly worse than ours. It is probably due to the fact that they have only been trained on the scene reconstruction dataset RealEstate10 [3], which only contains small camera difference among views. The cameras of object reconstruction dataset GSO-random has larger variations, so we observe more severe misaligned 3D Gaussians (view inconsistency) from different input views for MVSplat, as shown in the visualized results in Figure 21 (we also add the corresponding videos and ply files in the MVSplat\\\\_results folder of supplementary materials). And we find that MVSplat cannot correctly predict the back side of the object. It is not a big issue for scene reconstruction as their camera only moves a little, but would lead to incomplete reconstruction of objects. In the figure, we present the centers of 3D Gaussians generated from different views with different colors and the novel views are rendered from the 3D Gaussians from all views. As for pixelSplat, it almost cannot output reasonable results when use GSO-random dataset for testing, so we have not presented their visualized results. We provide the content of Table 12 as following:\\n\\n**Table: Comparison with MVSplat and pixelSplat on the GSO-random dataset**\\n\\n| Method | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 | Inference time \\u2193 | Rendering time \\u2193 |\\n|---------------------------|--------|--------|---------|----------------|----------------|\\n| MVSplat | 12.92 | 0.80 | 0.30 | 0.112 | 0.0090 |\\n| MVSplat (masked) | 16.52 | 0.80 | 0.19 | 0.112 | 0.0045 |\\n| pixelSplat (2 views) | 12.00 | 0.80 | 0.28 | 1.088 | 0.0045 |\\n| pixelSplat (2 views masked)| 12.05 | 0.79 | 0.27 | 1.088 | 0.0023 |\\n| **Ours** | **26.30** | **0.93** | **0.08** | **0.694** | **0.0019** |\\n\\n[1] David Charatan, et al., pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction. CVPR2024.\\n\\n[2] Chen, MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images. ECCV2025\\n\\n[3] Zhou, et al., Stereo magnification: Learning view synthesis using multiplane images. ACM Trans. Graph. (Proc.\\nSIGGRAPH), 2018\"}" ] }
BWS5gVjgeY
Number Cookbook: Number Understanding of Language Models and How to Improve It
[ "Haotong Yang", "Yi Hu", "Shijia Kang", "Zhouchen Lin", "Muhan Zhang" ]
Large language models (LLMs) can solve an increasing number of complex reasoning tasks while making surprising mistakes in basic numerical understanding and processing (such as $9.11 > 9.9$). The latter ability is essential for tackling complex arithmetic and mathematical problems and serves as a foundation for most reasoning tasks, but previous work paid little attention to it or only discussed several restricted tasks (like integer addition). In this paper, we comprehensively investigate the numerical understanding and processing ability (NUPA) of LLMs. Firstly, we introduce a benchmark covering four common numerical representations and 17 distinct numerical tasks in four major categories, resulting in 41 meaningful combinations in total. These tasks are derived from primary and secondary education curricula, encompassing nearly all everyday numerical understanding and processing scenarios, and the rules of these tasks are very simple and clear. Through the benchmark, we find that current LLMs fail frequently in many of the tasks. To study the problem, we train small models with existing and potential techniques for enhancing NUPA (such as tokenizers, PEs, and number formats), comprehensively evaluating their effectiveness using our testbed. We also finetune practical-scale LLMs on our proposed NUPA tasks and find that 1) naive finetuning can improve NUPA a lot on many but not all tasks, and 2) surprisingly, techniques designed to enhance NUPA prove ineffective for finetuning pretrained models. We further explore the impact of chain-of-thought techniques on NUPA. Our work provides a more detailed and comprehensive understanding of NUPA in LLMs.
[ "number understanding", "large language model", "reasoning" ]
Accept (Poster)
https://openreview.net/pdf?id=BWS5gVjgeY
https://openreview.net/forum?id=BWS5gVjgeY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xpeUbdbYan", "xfgl0x4ZQd", "wM6YQZ8FHf", "v5B1t7klGE", "seEarw0Cjg", "pyy1oD8NZr", "lHuUWH48l9", "k9Um9heKr3", "i7QoVo6KHn", "cEXgXAHzgS", "YU0QJBhN2s", "VyFnDEtyP2", "VJCtNRWJQH", "LYx3bdgoF2", "KbfDPUa9nb", "EDSYCmfOVJ", "CUqJ3BVZBR", "BsBa6kwOn0", "BPxnBIvgRh", "AYZVuipksk", "9OFsrIjOEB", "7EIHEiJaQa", "5lGc5VCtYd", "4uChDzva78", "4rS2cUfcTB", "202cVo85uB", "0nU6ufSEzC" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment" ], "note_created": [ 1729114746249, 1733137878481, 1733215446164, 1732583298611, 1732351063088, 1732385810769, 1732350868510, 1732337589283, 1732350916984, 1732584271880, 1730460519065, 1732337808333, 1732533678155, 1732445003147, 1732531935941, 1732449975851, 1732536115262, 1732346703397, 1734575360014, 1732532294977, 1732574890060, 1733216610367, 1730026142066, 1730636317934, 1732338493921, 1737523831643, 1732347356121 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_Z2vn" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_Z2vn" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_T1ZM" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_MwX6" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_m8mv" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Area_Chair_7RMb" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_T1ZM" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_m8mv" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_m8mv" ], [ "ICLR.cc/2025/Conference/Submission7313/Reviewer_MwX6" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7313/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work introduces a benchmark suite of \\\"numerical understanding and processing ability\\\" (NUPA) tasks for transformer-based LLMs. The task suite separates numeracy from potentially confounding logical reasoning problems, and presents a variety of numerical formats. Additionally, various tricks are explored to improve LLM performance in these numeracy tasks; no silver bullets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It is clear that some thought has gone into curating the benchmark tasks and justifying their inclusion. 
Even though I do not personally agree with some of the justifications and normative statements made, I believe that this sort of conscientious effort to justify a benchmark is commendable. I rate this work highly in terms of integrity, quality of presentation, and clarity.\\n\\nI particularly appreciated the authors' investigation of CoT, as it is generally interesting to understand whether \\\"prompt-level\\\" interventions have a lot of room for improving performance on complex tasks. A recent (contemporaneous, so not factoring into my decisions) paper by Apple on GSM8k qualifying LLM performance as memorisation seems to agree with the findings of this work that numeracy abilities do not appear to scale; potentially a way to frame and situate this work with respect to the broader discourse on generalisation and understanding in LLMs.\", \"weaknesses\": \"I have no issues with the benchmark and evaluation. I do not currently find the conceptual motivations and wrapping compelling; I could be convinced otherwise for each of these with the help of citations, or of better arguments, and I would be happy to raise my score accordingly.\\n\\n1. I am unconvinced that numeracy in LLMs is a problem in need of a solution. First, surely there is a citable source for LLM inadequacy for numeracy. Second, even if they were terrible at numeracy, the onus is on the authors to convince the reader that this a problem worth caring about, for at least two obvious reasons: 1) all of these tasks are already trivially done by a calculator or a python program, and 2) commercially available LLMs can probably do alright at numerical tasks indirectly via code-generation and execution. As it stands, it reads as if the authors are insisting that this is a problem deserving of attention --- I'm sure it could be, but this argument can be better made.\\n\\n2. I am unconvinced that numeracy in LLMs is a problem in search of deeper understanding. Consider that \\\"How many r's in strawberry\\\" is a meme that normal people know about; the idea that tokenization impairs the ability of transformers to reason at a letter-, digit-, or word-unit level is well-digested. So when the authors claim that the weakness of LLMs at digit understanding is surprising (line 298), I find this a bit sensationalist and not grounded with respect to a broader and basic level of discourse around the capabilities of LLMs.\\n\\n3. I am unconvinced of the normative rationales supporting the benchmarks, which seem ad-hoc. Here is an example: the authors claim that multiplication (not a particular algorithm) is O(L^2) in the length L counted in digits (of presumably both inputs; line 169), and take this as justification that models ought to find multiplication difficult. This doesn't hold up to armchair introspection: lookup is O(1), and we have all learned our times tables as children. The competent-human process of mental multiplication is mostly a sequence of lookups along with additive de/recomposition, so the real exponent is probably less than 2. This is a jarring oversight when contrasted with the effort with which the authors bring up cognitive and developmental psychology (lines 116 to 127), recasting what I assumed was a scholarly inclusion as a kind of checklisting exercise. 
There are other unsupported normative claims about what a good benchmark ought to encompass, such as \\\"NUPAs are necessary and expected to learn by LLMs [sic]\\\" (line 114), \\\"any student who has completed primary and secondary education should be able to accomplish them\\\" (line 160), and \\\"for most practical tasks involving numbers (like arithmetic or math tests), all we care about is whether the answer is right, and there is no significant difference between being almost right and being completely wrong\\\" (line 228). While I understand the importance of establishing normative standards to justify a benchmark with respect to, I find it hard to believe as a reader that the authors have the expertise and authority to do so.\\n\\n4. I am unconvinced that this is worth \\\"taking seriously\\\"; let us suppose that everyone agrees that NUPA in LLMs is a problem worth solving and that improving on this benchmark is the way to solve the problem. We know beforehand that transformers have problems with NUPA due to tokenization, and now the finding of this paper is that fine-tuning and CoT as tricks don't solve the problem. There are many graphs in this paper that must have taken a fair amount of compute to obtain; is it worth it if everyone decided to run similar tests, or fine-tune with respect to the NUPA suite? I don't mean this in an unnecessarily antagonistic way, but one concerned with scientific interest and downstream impacts; if LLMs are just bad at numeracy for architectural or structural reasons as the current evidence suggests, could the introduction of a benchmark be inviting high-compute low-innovation approaches to the problem, and does the problem even merit this kind of resource investment?\", \"questions\": \"1. Why is it important that LLMs be good at NUPA when we python, and even LLM code-generation that can write python to deal with NUPA? I.e. Why is this a worthwhile problem, and who says it is?\\n\\n2. To what extent do your findings suggest that the issue is architectural or to do with tokenization? To the degree that this is the case, to what extent do your findings suggest that the issue is paradigmatic rather than technical?\\n\\n3. Is it necessary to provide normative justifications for the benchmarking suite? If so, I would like to know why, because I think the paper could be stronger if the benchmarking tests were presented matter-of-factly, without detours.\\n\\n4. Experimentally, how much compute (as an estimate) was used throughout?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Acknowledgments and Summary of Revisions Based on Reviewer Feedback\", \"comment\": \"We greatly appreciate the reviewers for their careful review and valuable suggestions, which have significantly improved our paper. We have carefully considered all feedback, particularly the recommendations on related work, and incorporated them into the updated PDF. We are really grateful for the constructive discussions, which not only enhance our paper but also reflect a valuable and positive review process.\\n\\nIn summary, our paper focuses on **number understanding and processing abilities** (NUPA) of LLM, independent from other mathematical and reasoning tasks. We provide a comprehensive analysis of tasks involving various numerical representations and abilities, and propose a benchmark. 
Our tests show that while LLMs perform well on common tasks, their NUPA ability declines for less common numerical representations and tasks, or for longer sequences. We further explore pretraining techniques (tokenizers, PEs and data formats), fine-tuning, and CoT on NUPA tasks, concluding that the NUPA problem remains unsolved and future work should focus on improving number encoding and increasing task diversity.\n\nWe revised the paper based on the reviewers' suggestions: (1) reduced off-topic discussions when introducing representations and tasks; (2) added a related work section for clearer background; (3) refined conclusions for better accuracy; and (4) adjusted figure colors for readability. We also introduced an interactive website for better understanding of the experimental results. We thank the reviewers again for their efforts and are pleased that these revisions successfully address their concerns. We hope our work will contribute to LLM research, particularly in handling numerical tasks more effectively.\"}", "{\"title\": \"About Few-shot Learning\", \"comment\": \"Thank you for your suggestion; we have now finished our experiments with few-shot learning. Specifically, we test the open-source models with 5-shot examples from the same task and find that, although few-shot learning generally improves performance in most cases, the conclusions in our paper still hold.\n\nOur results are shown in [fig1](https://anonymous.4open.science/api/repo/NUPA_temp-3711/file/nupa_performance_exact_match_1.pdf?v=5f28507c) and [fig2](https://anonymous.4open.science/api/repo/NUPA_temp-3711/file/nupa_performance_exact_match_2.pdf?v=7bb49fc3). For example, performance still decreases significantly as the length increases or as the tasks and representations become less familiar (e.g., Add-Fraction, Add-Scientific or floordiv), and the performance on digit-related tasks is still unsatisfactory.\n\nThank you again for your reminder. We believe this is an important supplement, and we will include these results in the final version.\"}", "{\"title\": \"Thank You for Your Thoughtful Review and Suggestions\", \"comment\": \"Thank you again for your thoughtful review of our paper. Your constructive suggestions, especially the comprehensive references, have been incredibly helpful in improving our work. We hope this work can contribute to new insights and tools for understanding numerical concepts. Thank you again for your time and effort in reviewing our submission.\"}", "{\"title\": \"Official Rebuttal to Reviewer Z2vn (3/3)\", \"comment\": \"5. **Architecture or tokenization**\n \n We believe it\u2019s not necessary to prioritize either architecture or tokenizer exclusively. As our article demonstrates, both factors matter: architectural choices, such as position encoding (PE), are critical, and tokenizer behavior also impacts performance. Our aim is to provide guidance on both aspects rather than favoring one at the expense of the other.\n\n Additionally, we suspect that limited data diversity during training contributes to the challenges observed. Notably, even simple fine-tuning, without additional techniques, significantly improves model performance.\n\n In summary, we are confident that technical solutions exist, whether through better tokenizer and PE selection, or by developing new, more effective schemes. Our benchmark serves as a foundation for evaluating these approaches with clarity and comprehensiveness, supporting progress in this field toward more effective solutions.\n\n6. 
**Rationale**\\n\\n We believe it is essential to provide a clear rationale for the benchmark suite, given the diversity of number-related tasks and representations. When discussing model NUPA, the focus should be on tasks that are (1) genuinely important and common, and (2) appropriately challenging\\u2014neither too simple nor overly complex\\u2014to ensure the benchmark\\u2019s utility and relevance. Our justification aligns with these principles.\\n\\n That said, we recognize that some statements in the original version may have been unclear or detracted from the main message. In the updated version, we have streamlined the content, keeping essential scenarios where these representations are relevant and omitting unnecessary details, such as how they are introduced in mathematics. We hope these revisions improve readability.\\n\\nThank you again for your thoughtful feedback. If you have further suggestions, we are happy to consider them.\\n\\n\\n\\n[1] Xu et al. Conveyor: Efficient Tool-aware LLM Serving with Tool Partial Execution, 2024\\n\\n[2] Meta, Introducing Llama 3.1: Our most capable models to date, 2024\\n\\n[3] Yang et al. Qwen2 Technical Report, 2024\"}", "{\"comment\": \"These are better arguments and citations, and my concerns feel addressed. **I will revise my overall score to 6.**, though I reserve the right to modulate again depending on discussion with the other reviewers. I believe that some of the other reviews may have been unnecessarily harsh, and **I intend to advocate on the authors' behalf, using their arguments, in discussion with other reviewers.**\\n\\nNice complexity argument! Even if multiplication can be reduced to single-digit-times-table lookup, addition, and digit-shifts, you are right that it's the double loop that gives us quasipolynomial $\\\\mathcal{O}(mn)$ in the two digit lengths, which becomes $\\\\mathcal{O}(n^2)$ for $n$-by-$n$.\"}", "{\"title\": \"Official Rebuttal to Reviewer Z2vn (1/3)\", \"comment\": [\"We sincerely thank you for your thoughtful and constructive suggestions! We have revised the paper based on your feedback and hope the updates effectively address your concerns.\", \"1. **Numeracy in LLMs is a problem**\", \"Although it is widely recognized that LLMs struggle with numeracy, there is still an absence of detailed benchmarks to evaluate specific challenges. Questions like which numerical tasks, number ranges, and task complexities are most difficult for models remain unanswered. A clear, detailed task definition is critical, rather than relying on vague notions of commonsense.\", \"We believe that NUPA without relying on external tools like Python is essential for an AGI candidate. Number processing is a high-frequency task, and dependence on such tools introduces significant overhead, increases complexity, and reduces parallelism [1]. Therefore, robust intrinsic numeracy is, in our view, critical for achieving efficient performance.\", \"Math ability is a key focus for LLM evaluation, with most models reporting metrics like MATH or GSM8k. Numerical processing is integral to this ability, and poor performance in NUPA directly hinders overall math capability. Notably, these metrics are reported without external tools, emphasizing the importance of intrinsic numeracy [2,3].\", \"While it is reasonable to use external tools for particularly complex problems, such as those involving very large or intricate numbers, models should not be expected to rely on tools for every numerical task. 
Therefore, it is crucial to establish a reference that identifies tasks the model can handle independently with high accuracy and those that necessitate external tool support. This distinction underscores the importance and value of a comprehensive benchmark.\", \"2. **Tokenizer**\", \"While it is commonly suggested that tokenization affects digit-level performance, its impact remains underexplored. Numbers differ fundamentally from text, as noted in the revised Section 3.1. Further research is needed to understand tokenization's role in handling numbers. As highlighted in our paper, while the trend favors larger vocabularies, one-digit tokenization performs best for NUPA. This finding is novel and has not been adequately addressed by open-source model trainers, warranting consideration for future tokenization design.\", \"When we express surprise at LLMs\\u2019 struggle with digits, we mean that digits are foundational to arithmetic and math. If a model cannot reliably identify digits, its ability to solve complex math problems is questionable.\", \"From an architectural perspective, the poor performance on digits might not be surprising. However, considering the model's expected mathematical capabilities (as many models, such as GPT-4 and Qen2.5, claim to possess strong mathematical abilities), it does seem unexpected. We have revised the paper to clarify this point, but if you still find it unclear or misleading, we\\u2019re more than happy to provide further clarification.\"]}", "{\"title\": \"Official Rebuttal to Reviewer MwX6\", \"comment\": \"Thank you for your valuable review. We have revise the paper according to your suggestions and we hope the updated version and the following answers can address your concerns.\\n\\n1. **Related work** \\n\\n Thank you for pointing out these related works. A related work section has been included in the revision now as section 5, which we hope will help readers better understand our contributions and novelty. We have included two of your suggested papers in the reference list and we find the other two papers about compositionality appear to be less relevant to our study. If you have further suggestions or feedback, we would be happy to consider them.\", \"we_are_also_pleased_to_summarize_the_novelty_of_our_work_as_follows\": \"(1) Unlike previous benchmarks, where NUPA is intertwined with math, language, or commonsense abilities, our benchmark isolates NUPA as an independent focus, allowing for a targeted and detailed analysis.\\n\\n (2) While prior studies have highlighted the diversity of numerical representations and tasks, they often lack structured organization and comprehensive analysis of these aspects. In contrast, our work meticulously categorizes and analyzes numerical representations and tasks, offering a clear and thorough specification of their scope.\\n\\n2. **overly strong novelty claim** \\nThank you for your reminder. When we refer to our work as an initial step, we mean we first emphasize NUPA itself as an independent task, characterized by representation complexity and diversity, separate from math or reading comprehension. We acknowledge that some statements in the previous version may be unclear and overstated. We have revised them into more accurate ones like \\\"*Our work provides a more detailed and comprehensive understanding of NUPA in LLMs*\\\" in the updated submission.\\n\\n3. 
**Add explanations**\", \"we_include_the_following_explanation_in_the_updated_version\": [\"For the **random tokenizer**, we add a brief introduction at the beginning of the paragraph \\\"random tokenizer\\\" as follows:\", \">Introduced as ``sub-word regularization'' by [5,6], the random tokenizer splits words like \\\"Hello world\\\" into variable tokens such as \\\"He/llo/ world\\\" or \\\"Hell/o/ world\\\". Though not widely used in LLMs, [7] found that it enhances reasoning by introducing variability in generation path. Inspired by this, we apply this to the numbers, segmenting numbers into tokens with lengths randomly chosen between 1 and a predefined maximum, instead of using greedy left-to-right segmentation.\", \"We clarify the distinction between \\\"*length-related*\\\" and \\\"*length-agnostic*\\\" at the beginning of section 3.2 (Now in appendix A.4.3 due to the space limitation). Using integer addition as an example, its rules are length-agnostic since addition involves processing numbers digit by digit, unaffected by their length. However, during training, because the model is exposed to numbers within a limited length range (e.g., 1 to 8), it may develop a length-related rule that combines the original addition rules with an artificial constraint like \\\"the output length must be between 1 and 8.\\\"\", \"We add an *explanation of the PE* results at the end of Appendix A.4.3. In summary, we believe that RoPE allows models to learn positional and length information more effectively, which in turn encourages the model to adopt a length-related rule as a shortcut.\", \"The question of \\\"*why RoPE facilitates learning length information*\\\" is intriguing but beyond the scope of this study. This phenomenon is likely tied to the architecture of the PEs. For instance, RoPE encodes positional information using a d-dimensional vector, whereas Alibi relies on a scalar, and NoPE uses no positional encoding at all. While we provide some intuition, a detailed analysis of the mechanisms behind PEs is a substantial topic on its own, and we look forward to further research in this direction.\", \"We add a explanation about what are and why we choose RoPEs, NoPEs and Alibi.\", \">RoPE, widely used in Llama and its derivatives, is the most classic relative PE. Then alibi, another relative PE, is proposed to address RoPE's length overfitting issues. NoPE (transformers without PE, relying solely on the causal mask to encode the position information) offers a surprisingly easy way to achieve length generalization. Therefore, we compare these three typical PEs to evaluate the performance on NUPA.\"]}", "{\"title\": \"Official Rebuttal to Reviewer Z2vn (2/3)\", \"comment\": [\"3. **Absolute and misleading statements**\", \"We acknowledge that some statements in the earlier version of our paper may have appeared too absolute or misleading. These have been revised for greater accuracy and moderation in the updated submission. 
Examples include:\", \"\\\"Any student who has completed primary and secondary education should be able to accomplish them\\\" -> \\\"Because these tasks are extracted from the education curricula, students who have completed the stage of education are expected to solve them.\\\"\", \"\\\"for most practical tasks involving numbers (like arithmetic or math tests), all we care about is whether the answer is right, and there is no significant difference between being almost right and being completely wrong\\\" -> \\\"for most practical tasks involving numbers (like arithmetic or math tests), the correctness of the answer is the most important.\\\" In addition, this statement is a concession. We recognize and emphasize the importance of other metrics in just the next sentence \\\"But having a smoother ...\\\".\", \"We maintain that NUPAs are essential and should be expected to be learned by LLMs even when tools or code are available for assistance. As discussed earlier, relying on tools is not always an optimal solution for numerical processing. However, we have expanded and clarified our explanation in the updated version to make this point more reasonable and better aligned with practical considerations.\", \"The statement about $O(n^2)$ is not related to our conclusion and we are glad to remove it. However:\", \"1. In context, we discuss how RF-CoT handles multiplication by following an algorithm with $O(n^2)$ complexity. While table lookups and additions may simplify this, they still break the problem into a double loop structure inherent to multiplication.\", \"2. Even with shortcuts like a multiplication table, the complexity remains $O(n^2)$, as reducing constant terms (e.g., $O(n/2 * n/2) = O(n^2/4)=O(n^2)$) does not change the overall complexity.\", \"However, we want to emphasize that the scope of tasks and their representations were chosen carefully, with reference to the Chinese primary and secondary school curricula, ensuring that the tasks are both common and representative.\", \"4. **Concerning about high-compute low-innovation approaches**\", \"While tokenization is an important factor, its effects on number processing remain poorly understood. There is no consensus on the best tokenization strategy, as current open-source models employ diverse approaches. Our work provides a valuable reference point for future model design. This area of research is still in its infancy, with many techniques proposed but their effectiveness in practical, diverse settings largely unproven. In this case, providing a comprehensive and clear benchmark can contribute to more systematic and orderly progress in this field.\", \"Although our paper includes extensive experiments, future researchers need not replicate all of them. For fine-tuning a model to enhance NUPA, only the fine-tuning experiments are required\\u2014there\\u2019s no need to re-run tokenization, PE, or data format experiments. Similarly, testing a model on NUPA only requires inference on our test set, which is computationally lightweight.\", \"Our benchmark is computationally efficient. Training and inference focus on a small set of numbers, requiring minimal compute resources. For example, fine-tuning Llama on a single GPU takes approximately 5 hours, with inference requiring only 8 hours on the same setup. While some experiments from scratch (e.g., in Sections 3.1 and 3.2) are more demanding, even these remain manageable: 0.1B models take about 5 hours, 1B models about 20 hours, and 3B models about 40 hours on a single H800 GPU. 
Importantly, these more intensive experiments are not required for end users.\"]}", "{\"title\": \"Grateful for Your Invaluable Feedback\", \"comment\": \"Thank you for your detailed and invaluable feedback. Your sightful suggestions have significantly improved the quality of our paper. We agree with your suggestions about \\\"concept\\\", \\\"case-based learning\\\" and \\\"few-shot learning\\\".\\n\\n1. The use of term \\\"concept\\\" was indeed misleading. What we actually intended to highlight was that \\\"the model fails to solve a series of tasks related to digits\\\". we believe that the current revision will eliminate any misunderstandings.\\n\\n2. For case-based learning, the reference [1] actually use more experimental evidence to support the case-based learning, which we do not specifically investigate in our experiments. We acknowledge that drawing conclusions about case-based learning at this stage may be premature, but we see this as a potential avenue for future research.\\n\\n3. We agree with your perspective regarding the few-shot and we are now conducting further experiments to provide more conclusive results in final version.\\n\\nOnce again, we deeply appreciate your meticulous and thought-provoking review. Your input has been invaluable to us, and we are grateful for the time and effort you have dedicated to our paper. We hope that this work contributes meaningfully to the ongoing developments in the field of LLM, and we are excited about the potential it holds for advancing the capabilities of LLMs.\"}", "{\"summary\": \"This paper evaluates language models on tasks involving numerical and arithmetic reasoning. The authors introduce a test suite with 17 numerical tasks (e.g., addition, subtraction, modulo, max, min, and digit retrieval) across four numerical representations, using it to evaluate nine language models. The paper also analyzes the impact of tokenization, positional embedding type, and data format on a model\\u2019s ability to perform addition and multiplication. Finally, the authors assess whether fine-tuning, with and without chain-of-thought, can improve LLaMA-3 8B\\u2019s performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The evaluation procedure is well-structured and comprehensive.\", \"Some tasks in the evaluation suite offer useful insights into model failure modes.\"], \"weaknesses\": \"- Attribution and discussion of previous work is extremely poor:\\n - Notably, a discussion of the related work is missing. The the proposed evaluation suite, the design choices for it, and the results obtained should be discussed in light of previous studies that evaluated and analyzed the performance of language models on task involving numerical reasoning [e.g., 1,2,3,4,5,6]\\n - The authors fail to reference papers that introduce methods that they directly adopt: chain-of-thought prompting was introduced by Wei et al. [7], and LoRA was proposed by Hu et al. [8]. Both citations are missing.\\n- Although the authors study the impact of tokenization on the performance of a model that they train form scratch, a discussion of how the tokenization of the pre-trained models evaluated might affect the results is missing. 
Additionally, here again, any reference to previous work that studied the impact of tokenization on numerical reasoning [e.g., 9] is absent.\\n- Some presentation/clarity issues:\\n - Presenting the fine-tuning results in Table 2 is confusing, as they are discussed only much later in Section 4.\\n - Figure 2 can be improved, especially in the choice of color for the LLaMA models, which are quite hard to distinguish in the bar plot.\\n - Line 512.5: I understand that rule-following fine-tuning is introduced by Hu et al., but further details about the fine-tuning process (potentially in the appendix) would be helpful.\\n - The use of negative \\\\vspace is excessive, resulting in cramped spacing between tables/figures and text, especially on pages 7 and 9.\\n\\n---\\n[1] A Survey of Deep Learning for Mathematical Reasoning (Lu et al., ACL 2023) \\n[2] Do NLP Models Know Numbers? Probing Numeracy in Embeddings (Wallace et al., EMNLP-IJCNLP 2019) \\n[3] Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-Trained Language Models (Lin et al., EMNLP 2020) \\n[4] A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models (Stolfo et al., ACL 2023) \\n[5] Injecting Numerical Reasoning Skills into Language Models (Geva et al., ACL 2020) \\n[6] Impact of Pretraining Term Frequencies on Few-Shot Numerical Reasoning (Razeghi et al., Findings 2022) \\n[7] Wei, Jason, et al. \\\"Chain-of-thought prompting elicits reasoning in large language models.\\\" Advances in neural information processing systems 35 (2022): 24824-24837. \\n[8] Hu, Edward J., et al. \\\"Lora: Low-rank adaptation of large language models.\\\" arXiv preprint arXiv:2106.09685 (2021). \\n[9] Singh, A.K. and Strouse, D.J., 2024. Tokenization counts: the impact of tokenization on arithmetic in frontier llms. arXiv preprint arXiv:2402.14903.\", \"questions\": \"The lack of discussion on previous work is a significant drawback. I would be willing to reconsider my score if this is addressed in the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Rebuttal to Reviewer MwX6 (2/2)\", \"comment\": \"4. **Guidance for the selection of tasks and techniques**:\\n- The selection of tasks in our paper is carefully designed based on two criteria: (1) the task should be common and representative, which is why we reference (Chinese) primary and secondary school curricula and evaluate each candidate task for inclusion (2) the tasks should be of appropriate difficulty\\u2014not too easy or too hard\\u2014for the models to solve. We have added a detailed explanation of our task selection process. If you have suggestions for additional suitable tasks, we would be happy to consider and include them.\\n- Regarding the selection of techniques, we focused on those commonly used in numerical reasoning and length generalization research. We chose the most typical and representative techniques (tokenizer, PE and data format) to evaluate the performance of NUPA. We mainly aim to check the efficiency of these techniques on our newly proposed tasks. To make the paper readable, we have *moved some discussion to the Appendix*. We hope this revision can address your concern. If you have any further suggestions, please let us know and we are glad to further polish the paper.\\n\\n5. 
**Statistical reporting**: \\nWe repeat our mainly experiments three times and report the standard error (Figures & Tables) in the updated version. \\n\\n6. **Figure 2** \\nWe have updated the color scheme in Figure 2 to enhance readability. The finetuned model is added in the figure for direct comparison with other models, as we believe presenting it in a separate figure would be less optimal. We can add an explanation at the caption. \\nTo further improve clarity, we have provided an interactive performance report as an anonymous *HTML* page [here](https://huggingface.co/spaces/NUPA-Anonymous/Performance), where the readers can **interact with** the figure and select the models, tasks and metrics. \\n\\n[1] Eric Wallace, et al. Do NLP Models Know Numbers? Probing Numeracy in Embeddings, 2019.\\n\\n[2] Devin Johnson, et al. Probing for multilingual numberical understanding in transformer-based language models, 2020.\\n\\n[3] Bill Yuchen Lin, et al. Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-Trained Language Models, 2020.\\n\\n[4] Mubashara Akhtar, et al. Exploring the numerical reasoning capabilities of language models: A comprehensive\\nanalysis on tabular data, 2023.\\n\\n[5] Taku Kudo. Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates, 2018.\\n\\n[6] Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. BPE-Dropout: Simple and Effective Subword Regularization, 2020.\\n\\n[7] Ashutosh Sathe, Divyanshu Aggarwal, and Sunayana Sitaram. Improving self consistency in llms through probabilistic tokenization, 2024.\"}", "{\"title\": \"Looking Forward to Further Discussions\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period is nearing its end in two days, we would like to follow up to ensure that our revisions and responses have addressed your concerns.\\n\\nIn particular, we have made several updates to the paper.\\n1. We have added detailed information on the model and training process to provide greater clarity on our methodology. (Appendix A.4.1.)\\n2. Sections 2.1 and 2.2 have been streamlined to focus more on the relevance and importance of the content, removing less pertinent details.\\n3. We have improved the phrasing and clarity in the section on result interpretation. (Section 2.4)\\n4. We have added a section about related work. (Section 5)\\n\\nIf you have any further questions or suggestions, we would be eager to continue the discussion with you over the next few days. Your feedback is highly valued, and we are keen to ensure that the manuscript fully meets your expectations.\\n\\nThank you again for your thoughtful review.\\n\\nBest regards,\\nAuthors\"}", "{\"comment\": \"Thank you for the thorough clarifications and updates. I am raising my score.\\n\\nAnd great that you added standard errors to your plots and tables. They seem to still be missing in figure 2, however. For the future (and perhaps for that figure) I would recommend calculating error bars based on bootstrap sampling, eliminating the need to rerun experiments. https://en.wikipedia.org/wiki/Bootstrapping_(statistics)\"}", "{\"title\": \"Thank You for Your Thoughtful Review and Suggestions\", \"comment\": \"Thank your again for your thorough and insightful review of our work. Your constructive comments and detailed feedback have been invaluable in helping us improve our paper. Your recommendation to use the bootstrap method is appreciated, we will add an analysis based on bootstrap sampling in final version. 
And we will further refine the figure sizes to improve clarity and include this interactive leaderboard in the main text to make it more accessible for readers.\\n\\nWe truly value your expertise and input. If you have any further suggestions or additional feedback, we would be more than happy to consider them as we continue to improve our work.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you once again for your thoughtful review and recognition of our work. We hope this research will contribute to new insights and tools for understanding numerical concepts. We also welcome further discussions with you and would be glad to hear any additional suggestions you may have.\"}", "{\"title\": \"Thanks for the comprehensive response\", \"comment\": \"Dear authors,\\n\\nThank you for the comprehensive response. My main weaknesses have all been addressed, so I will increase my support for this submission. I just wanted to briefly respond to two things in your response that I disagree with, but I leave it up to the authors to do anything with this or not.\\n\\nFirstly, claiming the model does not understand the concept of a digit because it cannot do tasks requiring returning a digit well. This can be explained by many different things besides the concept of a digit the model has. As your paper also highlights, there can be other things going on that does not allow the model to return a specific digit (like tokenizer problems), which can co-exist with the model having a good \\\"concept of a digit\\\". I respectfully disagree that the ability to consistently solve digit-related tasks directly probes the concept of a digit the model has. In any case, it seems like you already changed the wording around this topic in the submission.\\n\\nSecondly, even though case-based is the opposite of rule-based, a lack of rule-based behaviour does not imply case-based reasoning. It can imply imperfect applications of rule-based behaviour. Again, seems like you already changed this in the submission, but I just wanted to highlight this.\\n\\nFinally, although I agree that zero-shot improvements are important, it doesn't mean few-shot does not need to be tested. Even if a model fails zero-shot, it might very well just be because we are \\\"using it wrong\\\" (i.e. it just needs few-shot examples to properly respond). This is supported by results finding that things like fine-tuning stages on LLMs don't teach the model new capabilities, but enhance existing ones, and that techniques like best-of-N can sometimes be as good as fine-tuning; the capabilities are often already there in the model, we just need to properly use them to get them out. This is totally fair for a generalist model like an LLM, and often there is a simple few-shot prompt that improves performance across a lot of related tasks simply because it clarifies the requested output format. Again, zero-shot improvements are important, but if a model cannot do something zero-shot, few-shot capabilities should always be tested in a paper like this one that aims to map a models capabilities.\\n\\nI think this paper represents a strong submission now because it proposes a comprehensive benchmark and does a lot of experiments trying to understand when and why the numerical capabilities of LLMs are lacking.\"}", "{\"title\": \"Official Rebuttal to Reviewer m8mv (1/2)\", \"comment\": \"We sincerely thank you for your constructive feedback and valuable suggestions. 
We have revised the paper based on your suggestions and hope the updates address your concerns.\\n\\n1. **Training details** \\n We train our models using an autoregressive Transformer architecture based on Llama-3.1 (unless stated otherwise). Sections 3.1 and 3.2 cover training from scratch with all hyperparameters, except model size, aligned with the original Llama setup. The AdamW optimizer is used with a learning rate of 5e-5, weight decay of 0.01, and batch sizes of 256, 64, and 32 for 0.1B, 0.9B, and 3B models, respectively, following default settings in the Transformers library. \\n\\n Each model is trained on a consolidated dataset, comprising $10^7$ samples for each length (where feasible). Models are trained for one epoch using a cosine decay learning rate scheduler, and the best checkpoint on validation data is reported. \\n\\n While experiments weren\\u2019t initially replicated at submission, the updated version includes three replicates for key experiments, reporting means and standard errors in figures and tables.\\n\\n Our experiments were conducted on a cluster with Nvidia H800 GPUs (80GB memory). Training a 100M model from scratch takes 5\\u20138 hours, a 1B model about 1 day, and a 3B model approximately 2 days on a single H800 GPU. Fine-tuning a pretrained model typically requires around 5 hours.\\n\\n These details are included in the updated version (Appendix A.4.1) for improved reproducibility.\\n\\n2. **In-domain and Out-of-domain**\\n The terms \\\"in-domain\\\" and \\\"out-of-domain\\\" refer to the **length** of numbers. As described in Section 3.1, the model is trained on numbers of lengths 1 to 8 (20) and tested on numbers of lengths 1 to 20 (100). Thus, lengths 1 to 8 (20) are considered in-domain, while lengths 9 (21) to 20 (100) are out-of-domain.\\n\\n3. **Section 2.1 and 2.2** \\n The primary goal of our work is to propose a comprehensive benchmark to formalize NUPA. We believe it\\u2019s important to provide clear reasoning behind the choice of representations and tasks. To enhance clarity, we have significantly streamlined this section in the updated submission. In Section 2.1, we have retained the essential scenarios where these representations are relevant while omitting unnecessary details, such as their mathematical introductions. Likewise, Section 2.2 has been reorganized and condensed for improved coherence. We appreciate your suggestion to enhance the paper\\u2019s readability and invite you to review the updated version for further details.\\n\\n4. **Figure 2** \\n Figure 2 has been updated for better readability. Additionally, to further enhance clarity, we have provided an interactive performance report as an anonymous *HTML* page [here](https://huggingface.co/spaces/NUPA-Anonymous/Performance). This allows readers to **interact with** the figure by selecting models, tasks, and metrics for a more detailed exploration.\\n\\n5. **Interpretation about the results** \\n We recognize that some explanations in our initial submission may have led to misunderstandings due to less precise wording.\\n\\n - For instance, when we state that \\\"*the model does not understand the concept of digit*,\\\" we mean that the model cannot consistently solve digit-related tasks. Here, \\\"concept\\\" refers to a \\\"set of digit-centered abilities\\\" as defined in Section 2.2. 
We clarify that our paper acknowledges models can comprehend task instructions: \\\"*models can at least comprehend the task instruction.*\\\" This statement has been rephrased for clarity in the updated submission. Thank you for pointing this out.\\n - Regarding \\\"*case-based reasoning*\\\", we intended it as the opposite of rule-based reasoning, where the latter implies that models learn the **rules** of tasks to solve them consistently. (See Reference [1] for more details.) However, case-based reasoning does not contradict dependence on architecture. As noted in Section 3, architectures like RoPE might enable case-based reasoning, potentially offering shortcuts based on sequence length. Recognizing the term\\u2019s potential to mislead, we have removed it in the updated version.\\n\\n - We agree that architecture plays a role and have added this to the listing. However, we maintain that the training approach is more critical. For instance, within the same architecture (e.g., LLaMA), fine-tuned versions with fewer parameters demonstrate significantly better performance than the pretrained versions, as highlighted in Section 3.\"}", "{\"metareview\": \"The authors present a new benchmark for numerical reasoning and several experiments studying the effect of various manipulations on benchmark performance. This addresses an important problem, broadens the scope of previous benchmarks, and raises insights about model failures -- overall presents a clear and useful new contribution.\", \"additional_comments_on_reviewer_discussion\": \"Clarity and contextualization issues resolved significantly. Overall strong engagement from reviewers and authors.\"}", "{\"title\": \"Looking Forward to Further Discussions\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period is set to conclude in two days, we would like to kindly follow up to ensure that our responses have addressed your concerns.\\n\\nSpecifically, we have **revised and refined the presentation** further, with careful attention to **adding detailed content regarding related work**. The references you provided, along with other relevant paper, have now been thoroughly added into the paper.\\n\\nIf you have any additional questions or suggestions, we would be delighted to engage in further discussions with you over the next few days. Your feedback is invaluable to us, and we are eager to ensure that the paper meets the highest standards.\\n\\nBest regards,\\nAuthors\"}", "{\"comment\": \"Thank you for your response and for the revisions to the paper. The discussion of related works contextualizes the contribution and the additional analyses included make the paper a stronger submission. I also appreciated the interactive HTML visualization of the results that you provided. I will raise my score to reflect these improvements.\"}", "{\"comment\": \"That's great! Thanks for your engagement with my suggestions\"}", "{\"summary\": \"Designs a comprehensive benchmark for number understanding, covering four different number presentations (e.g. integers) and 17 tasks (such as multiplication). The benchmarks allows for testing a range of different numerical understanding skills, at different levels of difficulty (through longer numbers in terms of digits for example). 
The authors do a comprehensive range of experiments on the benchmarks, evaluating off-the-shelf LLMs on it, as well as training their own models to understand the effects of aspects of common LLMs (such as tokenizers) and existing methods for improving numerical understanding (such as representing in reverse order). The findings are the many models still struggle in elementary understanding tasks, especially for larger numbers. Moreover, the existing tricks to deal with this don't fully mitigate the problems.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This paper has the potential to be an important contribution to the field, for two main reasons:\", \"Great benchmark proposed with a comprehensive range of tasks and difficulty levels, and good metrics for evaluating performance\", \"Excellent coverage of experiments. Testing comprehensively many aspects relevant to this problem, like different types of generalisation, different architectural changes, different existing models, effects of scale, existing methods for improving numeric understanding, prompting, etc. I would be very interested in these results, if the weaknesses below could be addressed.\"], \"weaknesses\": [\"Unfortunately, the presentation is lacking and it's very hard to interpret the results because of missing details on how models are trained.\", \"**Main weaknesses**\", \"There is no information at all in the paper about how the models are trained for the experiments in section 3.1 and 3.2. How much examples do you train on? How long? Optimizers, hyperparameters, repeats, etc.? How do you define in and out-of-domain? It's very difficult to interpret the results of this experiment without knowing more.\", \"The paper needs substantial rewrites, and will probably benefit from corrections by someone whose first language is English or by using an LLM to revise and point out mistakes.\", \"There is a long discussion of things like the relevance of integers and how fractions arise (section 2.1 and 2.2), but this seems unnecessary and the space could be used instead to represent the results better. For example, Figure 2 presenting the main results in the text is very small and almost impossible to read without zooming in 200%. The colors used are also hard to distinguish. I would suggest shortening section 2.2 and almost entirely removing 2.1 and use the space to represent the results better. Especially because this is the great part of this paper, you have so many results but you do not discuss the details of the experiments and present the results in a way that are hard to follow.\", \"Some results are interpreted in ways that are not substantiated by the evidence found. For example, line 298 to 309; I disagree that this result means the model does not understand the concept of digit. The fact that it becomes harder to return the right digit for longer numbers actually hints at something else going on that might be more related to the model's inherent difficulties with retrieving something from a position in longer sequences, which has nothing to do with it's understanding of the concept digit. I also disagree that this points to case-based reasoning, because again it might be due to some aspects of its architecture. 
Additionally, the fact that models of different sizes show the same performance on a task does not necessarily indicate the performance depends on the training approach over size, it might also depend on architecture.\", \"There is no section on related work, and the authors say in the beginning of section 2 that they will show limitations of prior benchmarks while discussing the coverage of their own, but this does not happen. This makes it difficult for the reader to place this contribution w.r.t. existing literature.\", \"**Minor points**\", \"Would be great to already get some more concrete information in the abstract (what probability of error? which 3 factors influence it?). Or at least in the intro some more concrete info.\", \"Figure 4 and 5: it's unclear what's on the X-axis (though one can assume it's training steps), and also unclear what D6 to D10 refers to. It's also unclear what is in-domain and what is OOD without reading the text.\", \"*\\\"These types of errors are a major cause of hallucinations when dealing with math, reasoning, and data analysis tasks, as the model presents seemingly correct problem solving approaches but ultimately produces incorrect results\\\"* -> this statement requires a citation\", \"line 117 to 119; a discussion on the innateness of integer understanding seems somewhat out of scope for this work and not too relevant to the contribution.\", \"Cite / reference openai chatgpt when using it line 135\", \"One approach that seems missing is few-shot examples, which might significantly boost performance for all models. Not just because they might learn from the examples, but because they get primed on the output format required.\"], \"questions\": \"Main questions is can you give details on how the experiments that use training of fine-tuning are done, and what in-domain and out-of-domain refers to (what examples do you hold out for this)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the empirical abilities of LLMs to solve numerical reasoning tasks of varying complexity. It proposes a benchmark, called NUPA, incorporating several different number representations (integers, fractions, etc) and reasoning tasks (arithmetic, comparisons, etc). Its experiments show that several well-known LLMs are prone to errors on this benchmark, particularly as the digits get larger. The paper also analyzes how factors like tokenization, data formats, and finetuning affect performance on NUPA.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper does a good job of broadening the evaluation of numerical reasoning from integers\\u2014which is often the focus in these kinds of studies\\u2014to more general classes of numbers. In general, there are a lot of experiments here, including several tasks and language models. The claim that models struggle with basic numerical understanding is well supported by the results.\\n2. The discussion on tokenization is quite interesting, surprisingly few studies that I am aware of consider how tokenization affects numerical reasoning. Although not very surprising, it is nice to have empirical evidence that 1-digit tokenization yields stronger generalization to more complex examples than those in the training set. \\n3. 
The paper goes beyond being purely \\u201cdescriptive\\u201d, it also investigates methods that could *improve* numerical reasoning.\", \"weaknesses\": \"1. Numerical reasoning in LLMs is well-studied by now. Yet, the paper lacks a related work section and has very few references to similar studies overall. It is therefore unclear to what extent the insights here are novel. Some of the claims of novelty also come off as overly strong, e.g., the last sentence of the abstract \\u201cour work takes an initial step towards understanding and improving NUPA [numerical understanding and processing ability] of LLMs.\\u201d I include a short list of relevant references below; however, I would suggest the authors to perform a more comprehensive literature review so that they can properly situate their work.\\n2. The paper is somewhat poorly written. First, there are many grammatical errors in this text (e.g., l62-3, l123). Second, many parts lack explanation or justification. For instance, it is not explained how the random tokenizer works. The technique appears to be adapted from Sathe et al. (2024), but that is a recent and probably not very well-known paper. On another note, random tokenizers were not introduced by that paper as suggested in the text; see for instance Kudo (2018). Another part that is unclear to me is the section on positional encodings. It is not explained what is meant by length-agnostic and length-related rules. It also doesn\\u2019t contextualize the methods studied\\u2014RoPE and Alibi. What are they and why are they studied? Is there some plausible explanation for why you observe the results you do?\\n3. The paper is experiment-driven; there is no underlying theory guiding the selection of tasks or techniques for improvement. The scope is also very broad\\u2014the paper attempts to do many things at once. While that may be seen as a strength, I feel that it comes at the sacrifice of depth of analysis and clarity. I would suggest the authors to construct a more focused narrative and provide justifications that are grounded either in previous work or theory. \\n4. The paper lacks important statistical reporting like confidence intervals or p-values. \\n5. As a minor additional note, Figure 2 (and the similar figures in appendix) is quite hard to parse. There is a lot of information and it is difficult to distinguish the models since some of the colors are rather similar. It is also confusing that the results for the finetuned model are here since finetuning is discussed much later in the paper. \\n\\n\\n----\\n\\nDziri et al. 2023. Faith and Fate: Limits of Transformers on Compositionality.\\n\\nKudo et al. 2023. Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?\\n\\nRazeghi et al. 2022. Impact of Pretraining Term Frequencies on Few-Shot Numerical Reasoning.\\n\\nZhang et al. 2024. Interpreting and Improving Large Language Models in Arithmetic Calculation.\", \"questions\": \"How do you get ground truth reasoning traces for RFFT?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Rebuttal to Reviewer T1ZM\", \"comment\": \"Thank you for your insightful review. We have revised the paper based on your suggestions, and we hope the updated version and our responses below address your concerns effectively.\\n\\n1. 
**Related work** \\n We acknowledge that, due to space constraints, some discussions of relevant literature were omitted in the initial version. We recognize their importance and have added these discussions in a newly created section. Specifically, we have incorporated your suggested papers, along with others, into this section. Additionally, the paper [9] has been cited in the original section 3.1 and we have talked about the results. In fact, we adapt the setting of the paper right-to-left tokenization in our experiments (we also emphasize this point in updated submission). However, our findings indicate that the one-digit tokenizer performs best, while left-to-right and right-to-left tokenizers yield equivalent results.\\n\\n2. **tokenizer** \\n\\n 1. Because the tokenization is bounded to the model, it is impossible to isolate the influence of tokenization in pre-trained models without introducing other confounding factors.\\n 2. In the updated submission, we include a comparison of fine-tuning pre-trained models with modified tokenization in Section 3.3 (see Table 17: Row \\\"RoPE+1d,\\\" which represents the fine-tuned Llama model with a one-digit tokenizer while keeping other settings unchanged). Similar to other fine-tuned models with modifications, the performance of the fine-tuned model with the one-digit tokenizer is worse than both the vanilla fine-tuned model and the original pre-trained model. This aligns with our conclusion: this kind of modification should be directly applied in the pretraining stage, instead of an ad-hoc during finetuning.\\n\\n3. **presentation**\\n\\n 1. The colors in Figure 2 have been updated to enhance readability. The finetuned model is added in the figure for direct comparison with other models, as we believe presenting it in a separate figure would be less optimal. We can add an explanation at the caption. To further improve clarity, we have provided an interactive performance report as an anonymous *HTML* page [here](https://huggingface.co/spaces/NUPA-Anonymous/Performance), where the readers can **interact with** the figure and select the models, tasks and metrics. \\n 2. A brief introduction to RFFT has been added at the beginning of Section 4 in the main paper, along with a detailed explanation and an example in Appendix A.5.1. Furthermore, all prompts and code are included in the supplementary materials for reference. \\n 3. The paper has been reorganized to avoid the excessive use of negative vspace, improving its overall structure and readability. \\n\\nThank you once again for your detailed and thoughtful feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Official Rebuttal to Reviewer m8mv (2/2)\", \"comment\": \"6. **Related work**\\n Thank you for your suggestion. In the updated submission, we have added a related work section that now includes relevant datasets and benchmarks. To summarize, our work differs from previous benchmarks by treating NUPA as an independent task, distinct from math, language, or commonsense abilities. Additionally, we provide a comprehensive and detailed analysis of diverse numerical representations and tasks, which sets our approach apart.\\n\\n7. **For these minor points:** \\n\\n - The abstract has been revised and the vague terms have been removed.\\n\\n - More explanations have been added in the caption of figures. 
Specifically, the X-axis is the seen samples, Dn is the number length (digit).\\n\\n - The citation about the calculation hallucination has been added.\\n\\n - The discussion on innateness has been removed.\\n\\n - The citation of gpt-4o has been added. The original occurrence of \\\"chatgpt\\\" has been removed due to the space limitation.\\n\\n - Regarding few-shot learning: In practical applications of number processing, such as financial reporting or solving arithmetic problems, we believe models should be capable of handling numbers without reliance on few-shot examples. Moreover, we view zero-shot reasoning ability as a critical direction for future model development.\\n\\n Regarding the output format, we have reviewed the models\\u2019 outputs and found no issues, as our tasks require only simple numerical outputs. We have also tested some models with the results presented in Appendix A.3.1, and the conclusions remain consistent. Thank you for your valuable feedback!\\n\\nAdditionally, we have thoroughly revised and polished the paper to improve its readability and clarity. We hope the updated version is now more accessible and understandable. We are happy to make further improvements if you have any additional suggestions. Thank you for your constructive feedback!\\n\\n[1] Hu et al. Case-Based or Rule-Based: How Do Transformers Do the Math?, 2024.\"}" ] }
BWMZKHTA9M
Suppressing recency bias through implicit task in task-agnostic continual adaptation for foundation language models
[ "Jae-Hong Lee", "Chae-Won Lee", "Ji-Hun Kang", "Joon-Hyuk Chang" ]
Foundation language models have significantly advanced natural language processing, but they face challenges such as catastrophic forgetting when adapting to dynamic environments with diverse tasks. Among continual learning (CL) methods for these models, architecture-expansion approaches have recently drawn attention owing to the growth of parameter-efficient fine-tuning (PEFT). However, such methods must store a past PEFT adapter for each task and require task identifiers (task IDs) to distinguish tasks, limiting their applicability in task-agnostic settings. They also overlook recency bias, whereby models focus excessively on the current task at the expense of past knowledge. To address these issues, we propose suppressing recency bias (SRB) using the concept of implicit tasks. SRB assigns a single fixed-size adapter to an implicit task and recursively stores historical knowledge through arithmetic operations with the current adapter at every time step, without relying on task IDs. This arithmetic mitigates recency bias by integrating the non-overlapping information between the historical and current adapters. Because the update requires only simple arithmetic without backpropagation, the additional computation is minimal, and the single fixed-size adapter keeps memory requirements low. We evaluate SRB on CL benchmarks for foundation language models. Experimental results demonstrate that SRB outperforms state-of-the-art methods, achieving superior generalization across various task sequences and models by effectively mitigating recency bias.
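To make the adapter-arithmetic idea in the abstract concrete, below is a minimal NumPy sketch of one possible merge step. It is an illustration only, not the paper's actual update rule: the task-vector definition tau_t = w_t - w_0 and the existence of tunable coefficients such as (a, b, c) come from the reviews and rebuttals that follow, while the specific orthogonal decomposition, the coefficient values, and the function name `srb_merge` are assumptions made here for illustration.

```python
import numpy as np

def srb_merge(tau_hist, w_t, w_0, a=0.9, b=0.5):
    """Hypothetical SRB-style merge of the current adapter into the implicit task.

    tau_hist : implicit-task vector accumulating historical knowledge
    w_t      : adapter weights after adapting to the current mini-batch
    w_0      : adapter initialization, so the current task vector is tau_t = w_t - w_0
    a, b     : illustrative mixing coefficients (stand-ins for the paper's tuned hyperparameters)
    """
    tau_t = w_t - w_0
    # Split tau_t into the part that overlaps with the historical vector and the
    # orthogonal remainder ("non-overlapping information" in the abstract's wording).
    denom = float(np.dot(tau_hist, tau_hist)) + 1e-12
    overlap = (np.dot(tau_t, tau_hist) / denom) * tau_hist
    novel = tau_t - overlap
    # Recursive arithmetic update: keep history and fold in only the novel component.
    # This step needs no backpropagation and no per-task adapter storage.
    return a * tau_hist + b * novel

# Toy usage on flattened adapter weights.
rng = np.random.default_rng(0)
w_0 = rng.normal(size=16)            # adapter initialization
tau_hist = np.zeros(16)              # implicit-task vector starts empty
for _ in range(3):                   # three incoming mini-batches / time steps
    w_t = w_0 + rng.normal(scale=0.1, size=16)   # stand-in for a freshly adapted adapter
    tau_hist = srb_merge(tau_hist, w_t, w_0)
```

Per the rebuttals below, the full method also applies a distance-based regularizer between the implicit and current task vectors during adaptation (governed by a third coefficient, c); that part is omitted from this sketch.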
[ "continual learning", "lifelong learning", "transfer learning", "foundation language models" ]
Reject
https://openreview.net/pdf?id=BWMZKHTA9M
https://openreview.net/forum?id=BWMZKHTA9M
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y4HSjkFLhy", "x7OfQwuRDZ", "uSlndDjfTk", "tP5itVw5KG", "sqEUPv4pN8", "rvyLiSfUMT", "p6ad3v7VUA", "jh7vx1oM5u", "jZuVdEvujA", "iOmD3wTOkg", "gKQ1Ceq8bh", "fJcdvF5dma", "d2V6AONYO3", "coWqvNEi1h", "ZRMgk9A4b4", "ZAzAWBdSdY", "OzuyRpL05P", "GRwflshl59", "G5EuzREgZx", "CZeLnAbrKr", "ApenbUwPOJ", "5ga0xmsWVH", "4BlArN5A5n", "3Gibs7PJcq", "0gj0WS4ReC" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732615429930, 1732157906438, 1730523364823, 1732157768717, 1732157470698, 1732792689570, 1732157561591, 1732158388144, 1732157310479, 1732199872579, 1732158554077, 1734533889672, 1730649663578, 1730695793660, 1732690354978, 1732781685815, 1737523644367, 1732157447220, 1730577110187, 1732158466782, 1732157794951, 1730522978452, 1730619006596, 1732158697154, 1732690535283 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_9Jp6" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_9Jp6" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_xDBY" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Area_Chair_N9CP" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_xDBY" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_jTKg" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_xDBY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_CMCi" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_MLub" ], [ "ICLR.cc/2025/Conference/Submission4505/Reviewer_X9bG" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ], [ "ICLR.cc/2025/Conference/Submission4505/Authors" ] ], "structured_content_str": [ "{\"title\": \"Keep rating at 6\", \"comment\": \"I've read the authors' rebuttal and other reviewers' comments, especially those who gave low ratings. In general, the method seems incremental and the comparison is not complete (pointed out by jTKg, and the authors admitted it). Given all of these, I choose to keep the rating of 6, but I will lower my confidence further.\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"**Q1. Hyperparameter Sensitivity: The approach relies on specific hyperparameters for performance, which might affect its generalizability without tuning.**\\n\\n**A1.** Thank you for pointing out the potential concern regarding hyperparameter sensitivity. 
While our approach relies on specific hyperparameters $(a, b, c)$, we have demonstrated in Appendix E that these parameters yield robust performance across diverse task sequences without extensive tuning. **Moreover, under fixed hyperparameters, SRB consistently outperforms the SOTA method on a variety of benchmarks and models without the extensive optimizations in Section 4.3.** As mentioned in Section 6, exploring adaptive or meta-learned hyperparameter strategies is a promising direction for future work to improve the practicality and robustness of SRB.\\n\\n**Q2. Comparison Scope: While the paper benchmarks against key methods, additional comparisons with more diverse baseline approaches, such as advanced replay-based strategies, could strengthen its conclusions.**\\n\\n**A2.** Thank you for highlighting the importance of broader comparisons. Our primary focus was to demonstrate the efficacy of SRB in **task-agnostic continual learning** settings. Unlike traditional methods, SRB does not rely on task identifiers and instead employs **simple vector arithmetic** to suppress recency bias and preserve historical knowledge effectively. To emphasize SRB\u2019s strengths in task-agnostic scenarios, we selected **IncLoRA** and **O-IncLoRA** as key baselines, as these methods represent state-of-the-art performance in parameter-efficient fine-tuning and continual learning.\\n\\n**Q3. Task Transition Analysis: The paper could benefit from deeper analysis of how SRB handles transitions between tasks, especially in complex sequences involving highly dissimilar tasks.**\\n\\n**A3.** Thank you for highlighting this point. A deeper analysis of task transitions, particularly across highly dissimilar tasks, would provide valuable insights. Section 5.1 discusses the ability of SRB to mitigate recency bias and preserve historical knowledge during task transitions.\"}", "{\"summary\": \"This paper investigates task-agnostic continual learning using foundational language models. To tackle the issues associated with previous PEFT methods, the authors introduce a novel approach called Suppressing Recency Bias (SRB), which enables the model to adapt to current data while retaining historical knowledge. By leveraging the design of implicit task vectors, SRB-based models can be trained to adapt without the need for task IDs. Experimental results demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of employing a fixed-size adapter to recursively store the historical knowledge of an implicit task appears to be interesting and novel. The proposed method can adapt to task-agnostic datasets even without task IDs.\\n2. The authors have conducted a thorough series of experiments to validate their proposed method.\\n3. The results regarding information retention indicate that SRB significantly outperforms other models, showcasing its ability to adapt to new tasks while preserving performance on previous ones with minimal degradation.\", \"weaknesses\": \"1. The paper lacks a detailed explanation of the motivation behind the model design, particularly concerning the implicit task vector and the regularization term.\\n2. The authors should apply the proposed method to additional foundational models (e.g., BERT) to further validate its effectiveness, similar to previous studies.\\n3. The explanation of how to calculate the task vector is somewhat unclear. 
For instance, as stated in line 288, the task vector for the current task is defined as $\\tau_t=w_t-w_0$. However, the process for calculating the task vector for the next task is not elaborated upon.\", \"questions\": \"1. Why does the implicit task vector act as a low-pass filter, limiting diversity?\\n2. What are the results of LLaMA3 and LLaMA3-chat on Orders 4, 5, and 6?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"**1. The SRB method introduces several hyperparameters (such as a, b, and c for controlling the influence and regularization of task vectors). The paper notes that hyperparameter tuning is essential for SRB\u2019s performance. This reliance might limit the model\u2019s robustness, as it may require fine-tuning for different tasks or models, reducing its practicality in real-world, dynamic settings where such tuning isn\u2019t feasible.**\\n\\n**A1.** Thank you for pointing out the dependency on hyperparameters. Reducing the reliance on manual tuning is important, especially in dynamic real-world environments. **However, under fixed hyperparameters, SRB consistently outperforms the SOTA method on a variety of benchmarks and models without the extensive optimizations in Section 4.3.** As mentioned in Section 6, exploring adaptive or meta-learned hyperparameter strategies is a promising direction for future work to improve the practicality and robustness of SRB.\\n\\n**2. Although SRB is designed for task-agnostic settings, the scalability to larger or longer task sequences is not thoroughly explored. For instance, the implicit task mechanism might become less efficient or struggle to represent historical knowledge accurately when handling an extensive range of tasks. A larger-scale experiment would provide insights into how SRB performs with extensive, varied task sequences.**\\n\\n**A2.** While our experiments included long sequences with up to 15 tasks (shown in Section 4.2), testing SRB on much larger and more diverse task sequences would provide deeper insights into scalability. **However, as mentioned in Section 4.1, the benchmarks used in our study follow standard scenarios that have been studied extensively in prior work on LMs and LLMs.**\\n\\n**3. Since SRB relies on arithmetic operations to balance historical and current knowledge, it may struggle in environments where task characteristics change quickly or drastically. The implicit task representation could fail to adapt promptly in such settings, potentially limiting SRB\u2019s performance on tasks that require quick, context-sensitive adaptation.**\\n\\n**A3.** We appreciate the suggestion of extended experiments to validate our work; extending the existing standard benchmarks (Section 4.1) to cover such rapidly shifting settings is itself a worthwhile direction for future study. Nevertheless, we believe SRB is designed for exactly this case: its regularization reflects the distance between the implicit task and the current task (Section 3.4), which is intended to keep adaptation effective when task characteristics change quickly, consistent with the robust performance SRB shows across diverse environments.\"}", "{\"title\": \"Continued\", \"comment\": \"**Q3. The approach conceptually shares a similar idea with regularization-based CL approaches like L2 regularization (which pushes parameter updates back toward their initial states before fine-tuning). But in Table 2, L2 regularization performs very poorly. What could be the reason? 
Please provide a more in-depth analysis of why the proposed approach outperforms L2 regularization.**\\n\\n**A3.** The SRB approach and L2 regularization share a conceptual similarity in constraining parameter updates, but their mechanisms differ fundamentally, leading to the observed performance gap:\\n\\n1. Selective Regularization:\\n - L2 regularization pushes all parameters back to their initial states, which can over-constrain updates and hinder adaptation to new tasks, especially when task characteristics differ significantly.\\n - SRB, in contrast, applies selective regularization by projecting updates orthogonally to the implicit task vector, ensuring that only parameters irrelevant to previous tasks are adjusted for the new task.\\n2. Task Representation:\\n - L2 regularization lacks a mechanism to explicitly represent past task knowledge, treating all updates uniformly.\\n - SRB leverages implicit task vectors to capture the core knowledge of prior tasks, allowing it to selectively retain relevant historical information.\\n3. Mitigation of Recency Bias:\\n - L2 regularization does not address recency bias directly, often leading to the overwriting of historical knowledge.\\n - SRB explicitly targets recency bias through its orthogonal projection mechanism, maintaining a balance between historical retention and current task adaptation.\\n\\nThese differences explain why SRB significantly outperforms L2 regularization, as evidenced in Table 2, by preserving task-specific knowledge while maintaining flexibility for new tasks.\"}", "{\"title\": \"General Response to All Reviewers\", \"comment\": \"We have carefully reviewed the insightful questions and comments raised by the reviewers and used this opportunity to enhance our paper. We sincerely appreciate the significant effort and time committed by each reviewer in offering constructive feedback. We would like to provide a general response regarding the clarifications of the main contributions and how we addressed the common concerns.\\n\\n# [Main Contributions of the Paper]\\n\\n- **Introduction of Suppressing Recency Bias (SRB):** We propose SRB, a novel method for task-agnostic continual learning in language models that does not rely on explicit task identifiers or boundaries. SRB leverages implicit task vectors and simple vector arithmetic to dynamically preserve knowledge from previous tasks while adapting to new ones, effectively mitigating recency bias and catastrophic forgetting.\\n- **Dynamic Knowledge Integration without Increased Overhead:** SRB operates under fixed memory and computational requirements by using a single implicit task adapter. Unlike methods that require storing multiple adapters or task-specific parameters, SRB maintains scalability and efficiency even as the number of tasks grows.\\n- **Extensive Experimental Validation:** We conducted extensive experiments demonstrating that SRB outperforms state-of-the-art methods in task-agnostic continual learning settings across various benchmarks and models. Our results show that SRB effectively balances historical knowledge preservation with new knowledge acquisition, highlighting its robustness and effectiveness.\\n\\n# [Updates in the Revised Draft]\\n\\n- **Clarification of Continual Learning Setting:** We have clarified that our approach operates in a task-agnostic continual learning setting that does not rely on explicit task identifiers or boundaries (Section 2, Appendix B). 
This emphasizes that direct comparisons with existing methods, which assume task boundaries, may not be appropriate.\\n- **Expanded Related Work Section:** We have revised the related work section in Appendix B to provide a more comprehensive comparison between SRB and existing methods, including Orthogonal LoRA and other state-of-the-art continual learning approaches suggested by the reviewers.\\n- **Inclusion of Upper Bound Results:** To address the reviewers' suggestions, we have included Multi-Task Learning (MTL) and per-task fine-tuning as additional comparison baselines (Table 2 of Section 4). By comparing SRB's performance to these idealized scenarios, we provide clear context for evaluating our method's effectiveness.\\n\\n# [Final Authors' Note]\\n\\nIn the revised draft, we have prioritized clarity and addressed the highlighted concerns. To clarify the justification of the comparisons in the new proposed research scope, we emphasize that our method is a new task-agnostic continuous learning paradigm. Moreover, we have addressed the reviewers' comments by including MTL and per-task fine-tuning as additional baselines for comparison. By comparing SRB's performance to these ideal scenarios, readers can better understand our contributions and the effectiveness of our method.\\n\\nOur research is a pioneering effort to address recency bias and catastrophic forgetting in task-agnostic continual learning for language models. By leveraging implicit task vectors and dynamic regularization, SRB offers a novel methodology that balances historical knowledge preservation and new knowledge acquisition without relying on explicit task identifiers.\\n\\nWe thank the reviewers for their invaluable feedback and believe that our revisions have significantly improved the quality and clarity of our paper.\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"**Q1. The paper lacks a clear explanation of the division and training situation for $(D1,\\u2026,DT)$\\u00a0\\u00a0in the SRB setting. Does each\\u00a0$D_t$\\u00a0correspond to the data contained in a default training batch, or is it a manually divided portion of the dataset?**\\n\\n**A1.** Thank you for raising this important question about how datasets $D_t$ are divided and trained within the SRB framework. In our work, $D_t$ corresponds to mini-batches sequentially sampled during training, as described in Section 3.1. These mini-batches are not manually divided but are naturally derived from the data stream, ensuring that SRB operates in a truly task-agnostic manner without leaking task ID information.\\n\\n**Q2. If it corresponds to data in a single batch, how many times will this data be updated? What is the impact of different batch sizes on the relevant hyperparameters?**\\n\\n**A2.** In our experiments, each batch is processed **exactly once** during training. To examine the impact of different batch sizes on SRB\\u2019s performance, we conducted additional experiments by varying the batch size while keeping all other settings constant. The results are summarized below:\\n\\n| Batch size | Order 1 | Order 2 | Order3 | Avg. |\\n| --- | --- | --- | --- | --- |\\n| 8 | 77.0 | 77.8 | 77.0 | 77.3 |\\n| 16 | 78.7 | 78.5 | 78.1 | 78.4 |\\n| 64 | 78.1 | 78.2 | 77.5 | 77.9 |\\n\\n| Batch size | Order 4 | Order 5 | Order 6 | Avg. 
|\\n| --- | --- | --- | --- | --- |\\n| 8 | 73.8 | 70.3 | 72.5 | 72.2 |\\n| 16 | 70.5 | 71.4 | 73.3 | 71.7 |\\n| 64 | 70.5 | 71.4 | 73.3 | 71.7 |\\n\\nAs these results show, SRB achieves robust performance across different batch sizes, demonstrating consistent results regardless of the granularity of updates.\\n\\n**Q3. The results section lacks methods such as multi-task learning or task experts as performance upper bounds for reference. Adding this reference would help readers better understand the improvements and limitations of the proposed method. Additionally, Figure 3(a) in Section 5.1 could also provide the performance of task experts as a reference.**\\n\\n**A3.** Thank you for this valuable suggestion. To provide an upper bound for performance in the continual learning problem, we included **per-task finetuning** as a reference in our experiments. Per-task finetuning assumes access to task identifiers and independently finetunes the model for each task, representing an idealized scenario without task interference. The results for per-task finetuning were sourced from the work by Xiao Wang et al. (2023) [a-1].\", \"the_comparison_results_are_detailed_below\": \"| | | Order | | |\\n| --- | --- | --- | --- | --- |\\n| | 1 | 2 | 3 | avg |\\n| Per-task Finetune | 70.0 | 70.0 | 70.0 | 70.0 |\\n| SRB | 78.1 | 78.2 | 77.5 | 77.9 |\\n| O-IncLoRA | 77.1 | 76.2 | 76.6 | 76.6 |\\n\\n| | | Order | | |\\n| --- | --- | --- | --- | --- |\\n| | 4 | 5 | 6 | avg |\\n| Per-task Finetune | 78.1 | 78.1 | 78.1 | 78.1 |\\n| SRB | 70.5 | 71.4 | 73.3 | 71.7 |\\n| O-IncLoRA | 68.4 | 68.8 | 71.4 | 69.5 |\\n\\nAs shown, SRB outperforms the upper bound (per-task finetuning) on Orders 1, 2, and 3, demonstrating its capability to generalize effectively without task identifiers. For Orders 4, 5, and 6, while SRB\\u2019s performance falls slightly below the upper bound, this is expected given the increased complexity and longer task sequences. Even in these cases, SRB surpasses O-IncLoRA, a task-ID-dependent method, further showcasing its adaptability and strength in task-agnostic continual learning.\\n\\n[a-1] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023.\"}", "{\"title\": \"Continued\", \"comment\": \"**Q5. Can the authors elaborate on how the method performs under highly diverse task sequences, such as tasks involving distinct domains (e.g., code, legal text, medical literature)?**\\n\\n**A5.** Our experiments (e.g., Section 4.2 and Table 2) demonstrate SRB's robustness across benchmarks involving diverse datasets such as AG News, DBpedia, and tasks from GLUE and SuperGLUE. While these tasks vary in domain and complexity, incorporating even more diverse domains (e.g., code, legal text, medical literature) could further validate SRB\\u2019s adaptability. This is an area we plan to explore in future work by expanding the benchmark to include tasks with broader domain-specific challenges.\\n\\n**Q6. Overhead of Hyperparameter Tuning: Although SRB shows robustness, the paper notes fixed hyperparameters, implying that different task sequences might require adjustment for optimal performance.Q6. What strategies can be used to fine-tune the hyperparameters $(a, b, c)$ without extensive trial and error?**\\n\\n**A6.** Thank you for this insightful question. 
To fine-tune the hyperparameters $(a, b, c)$ without extensive trial and error, we recommend the following approach:\n\nThe hyperparameter $a$ plays a critical role in controlling the balance between historical knowledge and current information. Since SRB aims to suppress recency bias effectively, $a$ should be set close to 1 to emphasize the **low-pass filter** property. This ensures that the implicit task vector retains a significant portion of historical knowledge, preventing the model from overly adapting to recent tasks at the expense of prior information.\n\nIn our experiments, we observed that $a=0.99$ consistently performed well across various task sequences (Section 4.3). This suggests that fine-tuning $a$ does not require extensive adjustments, as its optimal range is relatively stable. Similarly, the parameters $b$ and $c$ can be adjusted within smaller ranges (e.g., 0.01\u20130.1) based on the task sequence dynamics.\n\nDespite the need for some hyperparameter tuning, as shown in Sections 4.2 and 4.3, SRB demonstrated **consistent performance improvements** across multiple models and diverse dataset scenarios. This consistency highlights SRB\u2019s robustness, as also noted in the conclusion. We will include this guidance in the revised paper to help practitioners adopt SRB with minimal tuning effort.\n\n**Q7. How does SRB handle tasks that might involve conflicting objectives (e.g., creative writing vs. technical report summarization)?**\n\n**A7.** Thank you for this thoughtful question. As shown in Table 2, SRB demonstrated consistent performance improvements regardless of the task order, indicating its robustness in handling a variety of task sequences. However, the observed performance differences across orders highlight the challenge posed by tasks with conflicting objectives, such as creative writing and technical summarization. This reflects the impact of task characteristics and order on the implicit task representation, as you have pointed out.\n\nAddressing these differences is an important area for further exploration. Despite this, the results illustrate the potential of SRB in managing conflicting objectives within a task-agnostic continual learning framework. We plan to investigate such extreme scenarios more thoroughly in future work, focusing on refining task representations to better handle rapidly changing or highly distinct objectives.\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"**1. The paper lacks a detailed explanation of the motivation behind the model design, particularly concerning the implicit task vector and the regularization term.**\n\n**A1.** We appreciate you pointing out this area for clarification. The motivation behind SRB\u2019s design, particularly the implicit task vector and the regularization term, is to address **recency bias** and **catastrophic forgetting** in task-agnostic continual learning. As mentioned in Section 3.3, the value of $b$ plays a crucial role: when set close to 0, it causes the model to respond slowly to rapidly changing information, effectively acting as a **low-pass filter**. This ensures that SRB prioritizes stable, long-term knowledge over short-term fluctuations, mitigating the risk of overfitting to recent tasks.\n\nThe implicit task vector $\tau_u$ aggregates historical knowledge and allows SRB to retain relevant information from past tasks while minimizing interference from noisy or task-specific updates. The regularization term Eq.
(13) complements this filtering mechanism by constraining parameter updates orthogonally to the implicit task vector. This dynamic adjustment ensures a balance between retaining historical knowledge and adapting to new tasks.\\n\\n**2. The authors should apply the proposed method to additional foundational models (e.g., BERT) to further validate its effectiveness, similar to previous studies.**\\n\\n**A2.** Thank you for this thoughtful suggestion. Evaluating SRB on BERT or other foundational models is indeed a valuable direction for further validation. In our current work, we focused on two large language models (LLMs) from the LLaMA family to demonstrate SRB\\u2019s robustness and effectiveness across diverse and challenging task-agnostic continual learning scenarios. This choice was motivated by the increasing adoption of LLMs in real-world applications, making them highly relevant for this study.\\nOur results showed consistent performance improvements across these LLMs, as detailed in Tables 2 and 3, highlighting SRB\\u2019s generalizability.\\n\\n**3. The explanation of how to calculate the task vector is somewhat unclear. For instance, as stated in line 288, the task vector for the current task is defined as\\u00a0$\\u03c4_t=w_t\\u2212w_0$. However, the process for calculating the task vector for the next task is not elaborated upon.**\\n\\n**A3.** The task vector $\\\\tau_t$ is calculated as the difference between the current task\\u2019s weight vector $w_t$ and the initial weight vector $w_0$ of the foundation model. When transitioning to the next task $t+1$, the task vector is updated incrementally to capture the cumulative effect of historical and new task knowledge. In line 332-334, this incremental update occurs as:\\n\\n$\\\\tau_{t+1} = w_{t+1} - w_0$.\\n\\n**Q1. Why does the implicit task vector act as a low-pass filter, limiting diversity?**\\n\\n**Q1-A.** Thank you for this insightful question. As mentioned in **Section 3.3** of the paper, the implicit task vector $\\\\tau_u$ acts as a low-pass filter because it is designed to prioritize stable, long-term information over rapidly changing, task-specific updates. This behavior is primarily influenced by the hyperparameter $b$, which, when set close to 0, ensures that the model responds slowly to new information, suppressing noisy or transient updates.\\n\\nThis low-pass filtering property is intentional and helps mitigate recency bias, a common issue in task-agnostic continual learning, where the model might overly adapt to recent tasks at the expense of earlier ones. While this mechanism may limit diversity, it ensures robust retention of essential historical knowledge, as demonstrated in Tables 2 and 3, where SRB outperforms other baselines in maintaining performance across diverse task sequences.\"}", "{\"comment\": [\"Thank you for the author's response. My question about data splitting has been resolved, and I have improved my score.\", \"For the upper-bound section, could the authors explain why they chose to use pre-task fine-tuning instead of multi-task learning as the baseline? Is the poorer performance of pre-task fine-tuning due to the limited amount of data in a single dataset and the less optimal training configuration? The reviewer believes that comparing with multi-task learning better highlights how SRB utilizes positive transfer while avoiding forgetting in continuous learning.\"]}", "{\"title\": \"Continued\", \"comment\": \"**Q4. It is not true that architecture approaches add one adaptor for each task. 
Also see [1, 2, 3, 4, 5, 6, 7]. Most of the existing methods do not need to task-id either, assuming that you are solving the TIL problem.**\\n\\n**A4.** Techniques such as EWC (Elastic Weight Consolidation), Progressive Networks, and Replay Buffer in continual learning rely on task boundaries to prevent task interference and mitigate catastrophic forgetting. In large language models (LLMs) and general language models, where the same parameters are shared across all tasks, the absence of task boundaries exacerbates issues like learning interference and recency bias, leading to overfitting on specific tasks or forgetting prior knowledge (Section 1, Appendix B).\\n\\nTask boundaries are crucial for the effective functioning of forgetting mitigation techniques (e.g., importance calculation in EWC or data sampling in Replay Buffers). Without these boundaries, the model treats all data as a single task, resulting in degraded performance and difficulties in both knowledge transfer and bias mitigation across tasks. Therefore, task boundaries play an essential role in ensuring interference management, facilitating knowledge transfer, and maintaining learning efficiency in LLM training.\\n\\n**Q5. What is the difference between recency bias and catastrophic forgetting? Catastrophic forgetting is about focusing on the present and forgetting the past, which is the recency bias.**\\n\\n**A5.** Recency Bias and Catastrophic Forgetting are closely related concepts in continual learning, but they differ in their definitions and mechanisms. Recency Bias refers to the tendency of a model to overly focus on tasks or data it has learned recently, causing the outputs to be excessively biased toward the most recent tasks. In contrast, Catastrophic Forgetting describes the phenomenon where knowledge of previous tasks is lost as new tasks are learned. This includes a decline in performance on previously learned tasks and goes beyond bias to involve the actual loss of knowledge.\\n\\nWhen examining the relationship between the two, Recency Bias can accelerate Catastrophic Forgetting. If the model places too much emphasis on recent data, it may not allocate sufficient parameters to older tasks, increasing the likelihood of forgetting them. However, Catastrophic Forgetting encompasses a broader scope, including not just bias toward recent tasks but also interference between tasks that leads to the loss of past knowledge.\\n\\nSRB (Suppressing Recency Bias) addresses Recency Bias by encouraging balanced learning, preventing the model from overfitting to recent tasks. This indirectly mitigates Catastrophic Forgetting by maintaining harmony in parameter updates between past and current tasks, thus avoiding performance degradation.\\n\\nWhile Recency Bias focuses on issues related to skewed outputs, Catastrophic Forgetting deals with the broader challenge of overall performance loss on previously learned tasks.\\n\\n**Q6. Regarding baselines, your non-LoRA baselines, EWC, Replay, and LwF, are very old and not the state of the art. Again, please check out [1, 2, 3, 4, 5, 6, 7].**\\n\\n**A6.** Thank you for recommending these excellent papers. We greatly appreciate your suggestions. Our primary objective is to demonstrate how SRB effectively preserves the foundation model's capabilities and adapts to new tasks in a task-agnostic setting. 
This is achieved through the use of two adapters throughout the process, requiring minimal additional memory and computation via simple vector arithmetic.\\nTo ensure a fair comparison, we selected O-IncLoRA as a baseline because it represents a state-of-the-art method that uses adapters and shares similar experimental settings with our approach.\"}", "{\"metareview\": [\"This paper introduces Suppressing Recency Bias (SRB), a task-agnostic method for continual learning in language models (LMs). SRB addresses the issue of catastrophic forgetting by integrating past knowledge while learning new tasks without relying on task IDs. The method achieves this by introducing an implicit task adapter that aggregates past knowledge using simple arithmetic operations, avoiding the need for backpropagation and significantly reducing computational overhead. Claimed key contributions of this paper include: 1) Task-Agnostic Continual Learning enables continual learning without task identifiers. This makes the work adaptable to real-world, task-agnostic scenarios. 2) Proposed Suppression of Recency Bias to mitigates recency bias by recursively integrating historical knowledge, and ensuring better retention of information from previous tasks. SRB is a common issue in continual learning where models prioritize recent tasks at the cost of past knowledge. 3) ablations and empirical results showed the efficacy of proposed method.\", \"Strength of this paper\", \"The method addresses the challenge of continual learning without task identifiers, which is important in real-world setting, and is effective in preserving historical knowledge while learning new tasks.\", \"Ablations and empirical results showed the efficacy of proposed method.\", \"The approach is computationally efficient and requires minimal memory, making it practical for large-scale deployment.\", \"Weakness of this paper\", \"Several reviewers raised few concerns/limitations of this paper. By addressing these limitations, the paper could strengthen its experiment and expand impact.\", \"Limited Generalizability: The method is tested on limited setups and foundational models. Broader validation on additional models and datasets is needed to demonstrate robustness. The method may struggle in dynamic environments with rapidly changing task characteristics, or realistic task-agnostic scenarios where tasks are not neatly separated.\", \"Experimental Design and Validation: Some of the chosen baselines (e.g., EWC, Replay, LwF) are outdated, and comparisons with state-of-the-art methods (e.g., Orthogonal LoRA, advanced replay strategies) are missing. The study does not explore upper-bound performance (e.g., multi-task learning or task-specific experts) for reference. Ablation study doesn't cover all the key components, such as the necessity of the proposed regularization, implicit task representation, or its design improvements over existing methods (e.g., Orthogonal LoRA). The scalability of the method to longer or more complex task sequences remains unexplored.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewers also raised some other weaknesses (e.g., ambiguity in problem scope, terminology, and methodology details, adding additional experiments or ablation study) and improvements. Although some of these weakness have been improved / somewhat addressed during rebuttal session (e.g., further explanation, more experiment results), overall review rating was not raised significantly to an acceptance level. 
I think the session is too short and I would like to see a more comprehensive modification to systematically working on these suggestions. Thus I recommend the authors to re-work on these weakness and re-submitting to future conferences.\"}", "{\"summary\": \"This paper introduces the Suppressing Recency Bias (SRB) method to mitigate catastrophic forgetting during the continuous learning process of language models (LMs) without task IDs. SRB introduces an additional implicit task adapter and designs an update mechanism to appropriately integrate knowledge learned from the current task into the implicit adapter. The updated implicit adapter is then used to initialize the learning of new data. The designed update mechanism reduces duplicated information in classical Model Architecture Expansion (MAE) methods while balancing the increase in adapter diversity and the reduction of recency bias. The method outperforms previous task-agnostic methods and MAE methods using task IDs on benchmark tasks. Ablation studies demonstrate the effects of hyperparameters on reducing recency bias and increasing diversity, and show that SRB is not sensitive to hyperparameters within a certain range.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is clearly written and easy to understand. Figure 1 effectively contrasts SRB with other methods, and Figure 2 helps in understanding the purpose of the update mechanisms in SRB.\", \"SRB achieves a balance between increasing adapter diversity and reducing recency bias through a carefully designed update mechanism, which is validated by detailed ablation studies.\", \"SRB is simple and easy to use, and it outperforms existing MAE methods in terms of computational and memory costs.\"], \"weaknesses\": [\"The paper lacks a clear explanation of the division and training situation for $(D_1, \\u2026, D_T)$ in the SRB setting. Does each $D_t$ correspond to the data contained in a default training batch, or is it a manually divided portion of the dataset?\", \"If it corresponds to data in a single batch, how many times will this data be updated? What is the impact of different batch sizes on the relevant hyperparameters?\", \"If it is a manually divided portion of the dataset, how is the division performed? If the division ensures that data from one task forms a single $D_i$, then SRB actually leaks task ID information. If data from each task is evenly divided into N parts, it still contains some task information. In a realistic task-agnostic scenario, it is difficult to ensure that data from different tasks are divided into different groups. The reviewer would like to see the performance of SRB when data from sequential tasks are included in the same $D_t$.\", \"The results section lacks methods such as multi-task learning or task experts as performance upper bounds for reference. Adding this reference would help readers better understand the improvements and limitations of the proposed method. Additionally, Figure 3(a) in Section 5.1 could also provide the performance of task experts as a reference.\"], \"questions\": [\"Please provide more detailed explanations regarding the division of $D_i$ as mentioned in the weaknesses section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposed a method for continual learning. 
But I am unsure what problem it is solving.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method is different.\", \"weaknesses\": \"w-1. Your writing at the beginning sounds like you are working on continual learning of language models and adapting them to different domains. But I believe you are really doing normal continual learning using LMs at feature extractors since your tasks are text classification tasks.\\n\\nw-2. The paper did not state what setting of continual learning it works on. I think that this method works on task incremental learning (TIL). However, the first three baselines are commonly used for class incremental learning. Please clarify. For task incremental learning, the problem of forgetting is largely solved. Please check out [1, 2, 3, 4, 5, 6, 7].\\n\\nw-3. Related to w-2. The paper says that the approach is task-agnostic, which means that no task-id is given, but which means it does class-incremental learning (CIL). But for CIL, by definition, there is no task-id information given. It is very confusing. Please make it clear which continual learning problem you are solving. If you are solving CIL, you should compare your method with another set of SOTA baselines. \\n\\nw-4. It is not true that architecture approaches add one adaptor for each task. Also see [1, 2, 3, 4, 5, 6, 7]. Most of the existing methods do not need to task-id either, assuming that you are solving the TIL problem. \\n\\nw-5. What is the difference between recency bias and catastrophic forgetting? Catastrophic forgetting is about focusing on the present and forgetting the past, which is the recency bias. \\n\\nw-6. Regarding baselines, your non-LoRA baselines, EWC, Replay, and LwF, are very old and not the state of the art. Again, please check out [1, 2, 3, 4, 5, 6, 7]. \\n\\nw-7. What is average accuracy? Please give the definition. In continual learning, there are at least two accuracy measures. \\n\\nw-8. Having a separate adaptor for each model is not in the spirit of continual learning, which aims to use the same network learning multiple tasks. Please give the memory requirement of your approach. \\n\\nw-9. Please give the performance upper bound for the continual learning problem that you are solving. \\n\\nw-10. The writing of the paper needs significant improvement. The paper is confusing. I am not even sure what problem you are solving. If you are solving CIL, how do you deal with documents that may belong to two different classes in two different tasks? For example, a topic-specific document may contain a positive sentiment and a review of a problem may be classified to its product category. \\n\\n [1]. Serra et al. Overcoming catastrophic forgetting with hard attention to the task. ICML-2018.\\n [2]. Wortsman et al. Supermasks in superposition. NeurIPS-2020.\\n [3]. Ke et al. Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning. NeurIPS-2021.\\n [4]. Lin et al. TRGP: Trust region gradient projection for continual learning. ICLR-2021.\\n [5]. Lin et al. Beyond not-forgetting: Continual learning with backward knowledge transfer. NeurIPS-2022. \\n [6]. Ke et al. Sub-network Discovery and Soft Masking for Continual Learning of Mixed Tasks. EMNLP-2023.\\n [7]. Dissecting learning and forgetting in language model finetuning. 
ICLR-2024.\", \"questions\": \"See the previous section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 9Jp6\", \"comment\": \"Thank you for your valuable and insightful review. Our work specifically addresses **task-agnostic continual learning**, expanding beyond the traditional perspective of incremental learning. Research in this area has been relatively sparse, which is why we conducted a comparison with the state-of-the-art method, **O-IncLoRA**, to highlight SRB\\u2019s performance.\\n\\nWhile we agree that applying a broader range of foundational continual learning methods could diversify research in the field of **continual learning for LMs**, we respectfully disagree with the notion that our comparisons lack completeness. Our study was carefully designed to provide a meaningful and robust evaluation of SRB within the task-agnostic continual learning context.\"}", "{\"comment\": \"Thanks for the authors' response. I believe that adding experiments related to MTL would further demonstrate the capability of continual information incorporation in SRB.\\n\\nAfter reading the opinions of other reviewers, I noticed that the experimental setting used by the authors has certain limitations. Continuously increasing the number of classification tasks does indeed make the setting closer to Continuous Incremental Learning. Perhaps applying SRB to a more general continuous pre-training setting and measuring ppl or relevant task metrics would better showcase the capabilities of SRB.\\n\\nBased on the above considerations, I will maintain my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"# Overall\\nWe sincerely appreciate reviewer X9bG's thoughtful questions and have carefully considered them to refine and clarify our work. In particular, we have added details to **Appendix B** to further elaborate on the differences between SRB and O-LoRA, ensuring a more comprehensive comparison that highlights the methodological distinctions and their implications for performance.\\n\\n# Response\\n**Q1. The intuition behind the performance improvement is not quite clear to me. In a high level, the approach finds a direction of model parameter update that represents previous tasks (termed as \\\"implicit tasks\\\" in this work), and apply regularization in weight updates while learning new tasks by \\\"pushing back\\\" the update a bit towards a direction orthogonal to the \\\"implicit tasks\\\".**\\n\\n**A1.** Thank you for raising these insightful points. The intuition behind SRB\\u2019s performance improvement lies in its ability to dynamically preserve knowledge from previous tasks without requiring explicit task identifiers or boundaries. By leveraging implicit task vectors, SRB effectively captures the core representation of past tasks. During the learning process for a new task, SRB applies regularization through orthogonal projections, as described in Section 3.4.\", \"this_regularization_ensures_that_updates_to_the_model_parameters_are_constrained_in_a_way_that\": [\"1. Minimizes interference with previously learned knowledge by \\\"pushing back\\\" updates orthogonally to the implicit task vector.\", \"2. 
Allows sufficient flexibility to adapt to the unique requirements of the new task.\", \"This mechanism balances the trade-off between retaining historical information and adapting to new tasks, enabling SRB to suppress recency bias and mitigate catastrophic forgetting, as demonstrated in Tables 2 and 3.\", \"**Q2. It seems Orthogonal LoRA by Wang et al. 2023 does a similar job. What is the design that makes the approach improve over Orthogonal LoRA? Please highlight the key methodological differences and explain how these differences contribute to the improved performance observed in the experiments.**\", \"**A2.** Thank you for raising this important question. While Orthogonal LoRA (O-LoRA) and SRB share the objective of preserving prior knowledge through orthogonal subspace learning, SRB demonstrates superior performance due to the following key design advantages:\", \"Task-Agnostic Setting:\", \"O-LoRA requires explicit task identifiers for constructing task-specific orthogonal projections. This dependency makes O-LoRA less effective in task-agnostic settings where such identifiers are unavailable.\", \"SRB, in contrast, is designed for task-agnostic continual learning. It uses implicit task vectors dynamically constructed from historical information, eliminating the need for task IDs and enabling SRB to generalize more effectively across diverse and sequential tasks.\", \"Dynamic Knowledge Integration:\", \"O-LoRA independently applies orthogonal constraints for each task, which can result in inefficiencies when tasks overlap or share commonalities.\", \"SRB leverages vector arithmetic to dynamically integrate knowledge from prior and current tasks. This ensures that updates reflect the nuanced relationships between tasks, enabling better adaptation to task sequences.\", \"Efficient Regularization:\", \"O-LoRA\\u2019s task-specific orthogonal projections may inadvertently over-constrain updates, particularly for ambiguous or overlapping tasks.\", \"SRB\\u2019s lightweight regularization mechanism (Equation 11) selectively constrains updates based on the implicit task vector, preserving critical parameters for prior tasks while allowing flexibility for current task learning.\", \"Reduced Computational Overhead:\", \"O-LoRA requires additional task-specific parameters for each task, which increases memory and computational demands as the number of tasks grows.\", \"SRB maintains a fixed computational footprint by relying on a single implicit task vector, making it scalable to longer task sequences.\"]}", "{\"summary\": \"The paper presents **Suppressing Recency Bias (SRB)**, a method designed for task-agnostic continual learning in foundation language models (LMs). SRB introduces the concept of an implicit task that integrates knowledge recursively, minimizing the reliance on task identifiers and addressing recency bias\\u2014an issue where models disproportionately prioritize current tasks at the expense of previous ones. This approach achieves low memory overhead by requiring only fixed-size adapters and using simple arithmetic operations for updating, without the need for backpropagation. 
The paper demonstrates that SRB outperforms existing continual learning (CL) methods in both standard and extended task sequences by maintaining superior performance across diverse tasks and reducing recency bias.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Task-Agnostic Adaptability**: SRB excels in task-agnostic settings, removing the dependency on task identifiers and preserving performance across tasks.\", \"**Computational Efficiency**: The method achieves this while maintaining minimal computational and memory overhead, only needing simple arithmetic operations.\", \"**Reduction of Recency Bias**: The introduction of implicit task vectors effectively mitigates recency bias, ensuring that historical information is preserved during adaptation.\", \"**Empirical Validation**: Experimental results on standard and long CL benchmarks show that SRB outperforms other state-of-the-art methods in accuracy and efficiency.\"], \"weaknesses\": [\"**Hyperparameter Sensitivity**: The approach relies on specific hyperparameters for performance, which might affect its generalizability without tuning.\", \"**Comparison Scope**: While the paper benchmarks against key methods, additional comparisons with more diverse baseline approaches, such as advanced replay-based strategies, could strengthen its conclusions.\", \"**Task Transition Analysis**: The paper could benefit from deeper analysis of how SRB handles transitions between tasks, especially in complex sequences involving highly dissimilar tasks.\", \"**Overhead of Hyperparameter Tuning**: Although SRB shows robustness, the paper notes fixed hyperparameters, implying that different task sequences might require adjustment for optimal performance.\"], \"questions\": [\"Can the authors elaborate on how the method performs under highly diverse task sequences, such as tasks involving distinct domains (e.g., code, legal text, medical literature)?\", \"What strategies can be used to fine-tune the hyperparameters (a, b, c) without extensive trial and error?\", \"How does SRB handle tasks that might involve conflicting objectives (e.g., creative writing vs. technical report summarization)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"# Overall\\nWe have carefully reviewed the questions raised by reviewer jTKg and used this opportunity to enhance our paper. Specifically, we revised the related work section in Appendix B and included the upper bound results for MTL and per-task finetuning in Table 2 of Section 4. These additions provide a clearer context and improve the comprehensiveness of our analysis. \\n\\n# Response\\n**A1.** Thank you for insightful comments. While it is true that the tasks in the continual learning benchmark are text classification tasks, they span a variety of subtasks such as Boolean QA, Sentiment Analysis, and Natural Language Inference (Appendix C.1). These subtasks inherently involve different linguistic properties and reasoning skills, effectively representing diverse domains within the text classification paradigm. Therefore, our benchmark setup allows us to evaluate performance changes across tasks with varying characteristics, which we believe is sufficient to observe the model's adaptability to different domains.\\n\\n**Q2. The paper did not state what setting of continual learning it works on. 
I think that this method works on task incremental learning (TIL). However, the first three baselines are commonly used for class incremental learning. Please clarify. For task incremental learning, the problem of forgetting is largely solved. Please check out [1, 2, 3, 4, 5, 6, 7].**\\n\\nA2. Continual learning has garnered significant attention as researchers seek to enable machine learning models to learn sequentially without catastrophic forgetting. The field is typically divided into Task Incremental Learning (TIL) and Class Incremental Learning (CIL), both of which rely on explicit task boundaries or labels to mitigate forgetting. In contrast, Task-Agnostic Continual Learning (TACL) represents a more flexible paradigm, as it does not assume predefined task identifiers or boundaries during training. Our proposed method, Suppressing Recency Bias (SRB), belongs to this category, aiming to address the Recency Bias problem\\u2014where a model overly adapts to recent tasks at the expense of prior knowledge.\\n\\nSRB draws from the Parameter-Efficient Fine-Tuning (PEFT) paradigm by employing fixed-size adapters to maintain computational and memory efficiency while utilizing implicit task vectors to reconcile past knowledge and adapt to new data. By minimizing redundant information across tasks and suppressing bias toward recent data, SRB introduces a novel approach that transcends the limitations of TIL and CIL, offering task independence while improving adaptability and generalization.\\n\\nThe foundation of SRB is inspired by prior studies, particularly the approach detailed in *Dissecting Learning and Forgetting in Language Model Fine-Tuning* [7], which investigates the learning and forgetting dynamics in large-scale language models. While both our method and [7] address fine-tuning biases, the key distinctions lie in their focal points and methodologies. The study in [7] emphasizes isolating the effects of fine-tuning on text elements such as topic, style, and factual knowledge, providing an analysis-driven perspective on model behavior. In contrast, SRB prioritizes mitigating Recency Bias within a broader task-agnostic continual learning framework, leveraging vector interpolation and PEFT-based techniques to enhance continual learning scenarios where task boundaries are undefined.\\n\\nBy incorporating diverse perspectives such as [7], this work situates itself within a robust continuum of research efforts that aim to balance learning stability and adaptability, ultimately advancing the state-of-the-art in continual learning and fine-tuning paradigms. Following this discussion, we have included the details of the study presented in [7] within the Related Works section.\\n\\n**Q3. Related to w-2. The paper says that the approach is task-agnostic, which means that no task-id is given, but which means it does CIL. But for CIL, by definition, there is no task-id information given. It is very confusing. Please make it clear which continual learning problem you are solving. If you are solving CIL, you should compare your method with another set of SOTA baselines.**\\n\\n**A3.** Our approach specifically addresses task-agnostic continual learning, which extends beyond the conventional definitions of CIL. While both task-agnostic learning and CIL do not rely on task IDs, our method is not strictly tied to the assumptions or baselines typically associated with CIL. 
Instead, task-agnostic continual learning focuses on enabling the model to adapt to sequential tasks without task-specific boundaries, leveraging vector arithmetic to balance historical knowledge preservation and new knowledge acquisition.\"}", "{\"title\": \"Continued\", \"comment\": \"**Q1. In the regularization process (Equation 13), how to determine the effectiveness of the orthogonal projection for suppressing recency bias? Were there other regularization techniques or projections you considered, and if so, what made this one preferable? In addition, if the implicit task vector is orthogonal to $\\\\tau_t$, then the projections becomes zero, is there any additional strategy to prevent this situation?**\\n\\n**Q1-A.** Thank you for raising this insightful question. The orthogonal projection suppresses recency bias by dynamically balancing the influence of historical and current task vectors. Its effectiveness was validated through experiments showing SRB's ability to retain performance on earlier tasks (Section 5.1). While we considered other regularization techniques, including L2-based penalties, we found the projection method preferable due to its ability to explicitly measure and adjust for alignment between task vectors.\\n\\nIn addition, hyperparameter $c$ in our approach is specifically designed to balance past and current information during the regularization process (Section 3.4). When the implicit task vector becomes orthogonal to $\\\\tau_t$, it indicates that past knowledge and the current task information do not interfere with each other. This scenario suggests that the information captured by these task vectors is independent, which aligns with the goal of preserving historical knowledge without redundancy.\\n\\n**Q2. In task-agnostic settings, are there specific application domains where SRB\\u2019s approach to suppressing recency bias is particularly valuable? Conversely, are there domains where the implicit task approach may struggle, such as with tasks that have low overlap or are highly distinct?**\\n\\n**Q2-A.** We appreciate your thoughtful question regarding SRB\\u2019s applicability in task-agnostic settings. When models are exposed to data from the same context or domain repeatedly, there is a risk of overfitting to the most recent tasks, which can diminish the foundation model's generalization capabilities. By mitigating recency bias, SRB ensures that the foundational abilities of the model are preserved, allowing it to perform robustly across diverse tasks and domains.\\nFor domains with low overlap or highly distinct tasks, the implicit task vector mechanism effectively addresses this challenge. As observed in domain transfer scenarios, the second term of Eq. (11) limits significant updates to the implicit task vector when there is little overlap between the tasks. This helps SRB maintain stability and prevents unnecessary adjustments, ensuring that the model adapts appropriately to new tasks without eroding previously learned knowledge.\\n\\n**Q3. You mention using average accuracy for performance evaluation, but did you also measure other indicators like backward and forward transfer? If so, how did SRB perform on these metrics, especially in preserving knowledge from earlier tasks?**\\n\\n**Q3-A.** While average accuracy is a primary evaluation metric, we also examined forward transfer and preservation of knowledge from earlier tasks, as discussed in Section 5.1. 
SRB demonstrated strong performance in forward transfer by leveraging implicit task vectors to retain generalizable features. Similarly, backward transfer showed that SRB effectively mitigates forgetting by suppressing recency bias, as evident in the nearly parallel performance trends over time (Figures 3(b)\\u20133(d)). These results confirm SRB's capacity to balance historical knowledge retention with new task adaptation.\"}", "{\"summary\": \"This paper addresses the problem of recency bias in foundation models, which causes models to prioritize recent tasks at the expense of past knowledge. The main contribution of this work is a novel method called \\\"Suppressing Recency Bias\\\"(SRB), which combines current and past knowledge using arithmetic operations without requiring additional back-propagation, thus ensuring minimal computational overhead. The key highlights of SRB that authors claim include:\\n1. SRB eliminates the need for task IDs by using an implicit task that aggregates past knowledge through simple arithmetic.\\n2. SRB maintains historical knowledge from past tasks while learning new tasks by integrating only unique information, thus reducing redundant learning.\\n3. SRB requires minimal additional memory space and computations to SOTA methods, outperforming methods like LoRA and IncLoRA effectively.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. It is a novel approach to handling one of the major challenges in continual learning: recency bias. By leveraging an implicit task to integrate historical knowledge, SRB addresses the common problem where models tend to overly focus on recent tasks at the expense of retaining knowledge from earlier ones. This approach could be beneficial across a wide range of continual learning applications.\\n\\n2. SRB operates without requiring task IDs, which is a significant advantage in real-world scenarios where tasks are not explicitly defined. Task-agnostic continual learning is particularly challenging, and the SRB method makes a good improvement by showing it\\u2019s possible to achieve effective continual adaptation without relying on task-specific information.\\n\\n3. The experimental results show SRB outperforms or is competitive with state-of-the-art methods like LoRA, IncLoRA, and O-IncLoRA on some standard CL benchmarks. SRB demonstrates superior generalization and knowledge retention across tasks, particularly in comparison to methods prone to catastrophic forgetting.\\n\\n4. The authors provide a clear description of the experimental setup, benchmark datasets, and hyperparameters settings. This transparency supports reproducibility of this work.\", \"weaknesses\": \"The paper presents a novel approach to addressing recency bias in task-agnostic continual learning. However, there are some potential weaknesses and limitations in this work.\\n\\n1. The SRB method introduces several hyperparameters (such as a, b, and c for controlling the influence and regularization of task vectors). The paper notes that hyperparameter tuning is essential for SRB\\u2019s performance. This reliance might limit the model\\u2019s robustness, as it may require fine-tuning for different tasks or models, reducing its practicality in real-world, dynamic settings where such tuning isn\\u2019t feasible.\\n\\n2. Although SRB is designed for task-agnostic settings, the scalability to larger or longer task sequences is not thoroughly explored. 
For instance, the implicit task mechanism might become less efficient or struggle to represent historical knowledge accurately when handling an extensive range of tasks. A larger-scale experiment would provide insights into how SRB performs with extensive, varied task sequences.\\n\\n3. Since SRB relies on arithmetic operations to balance historical and current knowledge, it may struggle in environments where task characteristics change quickly or drastically. The implicit task representation could fail to adapt promptly in such settings, potentially limiting SRB\\u2019s performance on tasks that require quick, context-sensitive adaptation.\", \"questions\": \"1. In the regularization process (Equation 13), how to determine the effectiveness of the orthogonal projection for suppressing recency bias? Were there other regularization techniques or projections you considered, and if so, what made this one preferable? In addition, if the implicit task vector is orthogonal to \\\\tau_t, then the projections becomes zero, is there any additional strategy to prevent this situation?\\n\\n2. In task-agnostic settings, are there specific application domains where SRB\\u2019s approach to suppressing recency bias is particularly valuable? Conversely, are there domains where the implicit task approach may struggle, such as with tasks that have low overlap or are highly distinct?\\n\\n3. You mention using average accuracy for performance evaluation, but did you also measure other indicators like backward and forward transfer? If so, how did SRB perform on these metrics, especially in preserving knowledge from earlier tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper propose a continual learning algorithm that projects LoRA parameter updates to a subspace defined by an \\\"implicit task\\\" to mitigate recency bias and mitigate forgetting. Experiments on CL benchmarks demonstrate performance improvements over baselines, especially those that also learn LoRA.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The performance improvement is clear compared to the baselines.\", \"The description of the proposed approach is clear\"], \"weaknesses\": \"1. The intuition behind the performance improvement is not quite clear to me.\\n\\nIn a high level, the approach finds a direction of model parameter update that represents previous tasks (termed as \\\"implicit tasks\\\" in this work), and apply regularization in weight updates while learning new tasks by \\\"pushing back\\\" the update a bit towards a direction orthogonal to the \\\"implicit tasks\\\".\\n\\n- It seems Orthogonal LoRA by Wang et al. 2023 does a similar job. What is the design that makes the approach improve over Orthogonal LoRA? Please highlight the key methodological differences and explain how these differences contribute to the improved performance observed in the experiments. \\n\\n- The approach conceptually shares similar idea to regularization based CL approaches like L2 regularization (which pushes back parameter updates to their initial states before fine-tuning). But in Table 2 L2 regularization performs very poorly. What could be the reason? 
Please provide a more in-depth analysis of why the proposed approach outperforms L2 regularization.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continued\", \"comment\": \"**Q7. What is average accuracy? Please give the definition. In continual learning, there are at least two accuracy measures.**\\n\\n**A7.** Average accuracy is a key evaluation metric in continual learning, used to measure the overall performance of a model after sequentially learning multiple tasks. According to Section 4.1 of our work, average accuracy is calculated as the mean performance across all tasks following the learning of the final task. This metric is critical for assessing how well the model retains performance on previous tasks while learning new ones.\\n\\nIn contrast, the forgetting measure quantifies the performance degradation on specific tasks. It is computed as the difference between a task\\u2019s performance after initial learning and its performance after the final training stage. While average accuracy evaluates the overall balance of performance, the forgetting measure focuses on determining the retention of past knowledge.\\n\\nIn our work, we highlight that SRB achieves higher average accuracy compared to previous SOTA methods by mitigating Recency Bias and effectively preserving knowledge. Average accuracy serves as a standardized metric in continual learning, enabling fair comparisons and assessing the balance between learning new tasks and retaining prior knowledge.\\n\\n**Q8. Having a separate adaptor for each model is not in the spirit of continual learning, which aims to use the same network learning multiple tasks. Please give the memory requirement of your approach.**\\n\\n**A8.** SRB uses a single implicit task adapter, which keeps memory requirements fixed and minimal, irrespective of the number of tasks. Unlike methods that require storing multiple adapters, SRB integrates historical knowledge using arithmetic operations without maintaining task-specific adapters as mentioned in Section 1 and 2.\\n\\n**Q9. Please give the performance upper bound for the continual learning problem that you are solving.**\\n\\n**A9.** Thank you for this important suggestion. To evaluate the performance upper bound for the continual learning problem, we used per-task finetuning as a reference, which assumes access to task identifiers and finetunes the model independently for each task. This represents the ideal scenario without any interference between tasks. The results for per-task finetuning were taken from the work by [a-1].\", \"the_comparison_results_are_as_follows\": \"| | | Order | | |\\n| --- | --- | --- | --- | --- |\\n| | 1 | 2 | 3 | avg |\\n| Per-task Finetune | 70.0 | 70.0 | 70.0 | 70.0 |\\n| SRB | 78.1 | 78.2 | 77.5 | 77.9 |\\n| O-IncLoRA | 77.1 | 76.2 | 76.6 | 76.6 |\\n\\n| | | Order | | |\\n| --- | --- | --- | --- | --- |\\n| | 4 | 5 | 6 | avg |\\n| Per-task Finetune | 78.1 | 78.1 | 78.1 | 78.1 |\\n| SRB | 70.5 | 71.4 | 73.3 | 71.7 |\\n| O-IncLoRA | 68.4 | 68.8 | 71.4 | 69.5 |\\n\\nAs shown, SRB outperforms the upper bound (per-task finetuning) on Orders 1, 2, and 3, demonstrating its ability to generalize effectively even without task identifiers. 
For Orders 4, 5, and 6, while SRB\\u2019s performance is slightly below the upper bound due to longer task sequences and increased complexity, it still surpasses the performance of O-IncLoRA, a method that requires task IDs for adaptation. This highlights SRB\\u2019s strength in task-agnostic continual learning, where task boundaries are not explicitly defined.\\n\\n[a-1] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023.\\n\\n**Q10. The writing of the paper needs significant improvement. The paper is confusing. I am not even sure what problem you are solving. If you are solving CIL, how do you deal with documents that may belong to two different classes in two different tasks? For example, a topic-specific document may contain a positive sentiment and a review of a problem may be classified to its product category.**\\n\\n**A10.** As mentioned in A.2, our work does not aim to solve CIL specifically. Instead, the primary focus of our paper is to address task-agnostic continual learning by suppressing recency bias through simple vector arithmetic. This allows the model to continuously adapt to diverse domains without relying on task identifiers, while effectively preserving historical knowledge and adapting to current tasks.\"}", "{\"title\": \"Clarifying the selection of upper bound baselines\", \"comment\": \"**Q. For the upper-bound section, could the authors explain why they chose to use pre-task fine-tuning instead of multi-task learning as the baseline? Is the poorer performance of pre-task fine-tuning due to the limited amount of data in a single dataset and the less optimal training configuration? The reviewer believes that comparing with multi-task learning better highlights how SRB utilizes positive transfer while avoiding forgetting in continuous learning.**\\n\\n**A.** We appreciate the reviewer for pointing out the critical aspect of\\u00a0*continual information incorporation*\\u00a0in our study. As noted, multi-task learning (MTL) offers an alternative perspective on upper bounds. We have conducted experiments to include MTL as a baseline, and the results are as follows:\\n\\n| | | **Order** | | |\\n| --- | --- | --- | --- | --- |\\n| | 1 | 2 | 3 | avg |\\n| **MTL** | 80.0 | 80.0 | 80.0 | 80.0 |\\n| **Per-task Fine-tune** | 70.0 | 70.0 | 70.0 | 70.0 |\\n| **SRB** | 78.1 | 78.2 | 77.5 | 77.9 |\\n| **O-IncLoRA** | 77.1 | 76.2 | 76.6 | 76.6 |\\n\\n| | | **Order** | | |\\n| --- | --- | --- | --- | --- |\\n| | 4 | 5 | 6 | avg |\\n| **MTL** | 76.3 | 76.3 | 76.3 | 76.3 |\\n| **Per-task Fine-tune** | 78.1 | 78.1 | 78.1 | 78.1 |\\n| **SRB** | 70.5 | 71.4 | 73.3 | 71.7 |\\n| **O-IncLoRA** | 68.4 | 68.8 | 71.4 | 69.5 |\\n\\nFrom the results, we observe that for\\u00a0Orders 4, 5, and 6, MTL underperforms compared to per-task fine-tuning. This can be attributed to the tendency of language models (LMs) to rely on task-specific representations, similar to in-context learning. Consequently, when dealing with long sequences, per-task fine-tuning demonstrates superior performance, making it a more suitable choice as the upper bound in these cases.\\n\\nConversely, for\\u00a0Orders 1, 2, and 3, where the number of domains is smaller, MTL achieves the best results. This occurs because shared knowledge among tasks retains greater value than the knowledge lost during sequential learning of tasks. 
In cases with fewer tasks, the transfer of shared knowledge between tasks through MTL becomes more impactful, establishing MTL as the most appropriate upper bound in these scenarios. Moreover, in\\u00a0Orders 1, 2, and 3, our proposed method, SRB, demonstrates performance that is closest to MTL. This highlights the effectiveness of SRB in leveraging shared knowledge across tasks while mitigating the forgetting observed in sequential learning settings.\"}" ] }
BW8O4wHgbo
Why Solving Multi-agent Path Finding with Large Language Models has not Succeeded Yet
[ "Weizhe Chen", "Sven Koenig", "Bistra Dilkina" ]
With the explosive influence caused by the success of large language models (LLMs), there has been an extensive amount of recent work showing that foundation models can be used to solve a large variety of tasks. However, there is very limited work that shares insights on multi-agent planning. Multi-agent planning differs from other domains in that it combines the difficulties of multi-agent coordination and planning, making it hard to leverage external tools to facilitate the reasoning needed. In this paper, we focus on the problem of multi-agent path finding (MAPF), which is also known as multi-robot route planning, and study the performance of solving MAPF with LLMs. We first show the motivating success of single-agent planning and multi-agent pathfinding in an empty room map without obstacles, then the failure to plan on the harder room map and maze map of the standard MAPF benchmark. We present our position on why directly solving MAPF with LLMs has not been successful yet, and we use various experiments to support our hypothesis. Based on our results, we discuss how researchers with different backgrounds could help with this problem from different perspectives.
[ "Large language models", "multi-agent path finding", "reasoning" ]
Reject
https://openreview.net/pdf?id=BW8O4wHgbo
https://openreview.net/forum?id=BW8O4wHgbo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tXAo5rzsqA", "qzNLXwvS4X", "e3SJsjFyWv", "dvTDrXHFWD", "b1Jzy55lbd", "Zbo4x0mAVM", "SipFgZnJzW", "NM3UEnCm3L", "MDYqmF5FaX", "KxXP4SNtkO", "KLPOm6DzvZ", "JWUr2G2nVb", "BNSNybA1gN", "AdxYcGRciZ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "meta_review", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732563004083, 1732213047756, 1732690782065, 1732577027144, 1737523660272, 1730408927364, 1734697775215, 1730498401160, 1732212942542, 1730667557712, 1730677796565, 1732212863939, 1732212897807, 1732687999792 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4760/Reviewer_g5y3" ], [ "ICLR.cc/2025/Conference/Submission4760/Authors" ], [ "ICLR.cc/2025/Conference/Submission4760/Authors" ], [ "ICLR.cc/2025/Conference/Submission4760/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4760/Reviewer_g5y3" ], [ "ICLR.cc/2025/Conference/Submission4760/Area_Chair_JsTP" ], [ "ICLR.cc/2025/Conference/Submission4760/Reviewer_wYHE" ], [ "ICLR.cc/2025/Conference/Submission4760/Authors" ], [ "ICLR.cc/2025/Conference/Submission4760/Reviewer_LEkg" ], [ "ICLR.cc/2025/Conference/Submission4760/Reviewer_kZvx" ], [ "ICLR.cc/2025/Conference/Submission4760/Authors" ], [ "ICLR.cc/2025/Conference/Submission4760/Authors" ], [ "ICLR.cc/2025/Conference/Submission4760/Reviewer_kZvx" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your responses to my questions. After reconsidering the paper including your responses, my overall assessment remains unchanged.\\n\\nWhile I appreciate the answers to my questions, some of the points made are not clear to me (e.g. giving step-wise information to the LLM is similar to Chain of Thought prompting), and several of the weaknesses I identified have not been fully addressed (e.g. weaknesses 3,4).\\n\\nI believe the weaknesses I identified would require more substantial revisions/improvements then those provided to meet the standards expected for publication in ICLR.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your reviews. Here we answer your question one by one:\\n1. As our study has broken down the failing reasons into three aspects, i.e., context length, understanding obstacles, and general reasoning capability related to pathfinding. All three aspects are known as LLM challenges, and all these challenges are also known to be not prompt-sensitive and can be solved with prompt engineering. Thus, we believe that introducing other baselines will definitely keep our conclusion the same and there is very little point in introducing them. \\n2. Thanks for pointing this out. These are papers that work on LLM for planning problems, [1,2,3]. We will add them to our paper.\\n3. The stepwise local information is changed between different designs. Given the page limit, we can only include the prompt and the examples in our appendix, as you pointed out in the weakness. However, we still invite you to look at Fig. 8, which gives an example of this.\\n4. In the current paper, we have primarily focused on the success rate because it is still too low, and the only successful scenarios are those that do not require many detours. This means that, even with single-step information, the solution quality (or global optimality, as you mentioned) is not significantly affected. 
Since the current successful scenarios stop at an agent count of 8 on maps like \\\"room,\\\" and the number of instances tested is limited to 5 per setting due to the high cost, metrics other than success rate that contribute to measuring solution quality exhibit a high standard deviation. Furthermore, as the LLM solver is currently unable to find a solution in many cases, we believe it is more critical at this stage to focus on achieving any solution rather than prioritizing a good solution while exploring the potential of LLMs as alternative solvers for MAPF.\\n5. While we do not have clear evidence and it is difficult to justify, there is a small possibility that breaking down the process could also help the LLM reason more thoroughly, similar to the Chain-of-Thought prompting [4], which enables LLMs to think step-by-step in solving difficult math problems. However, it is not possible to decouple this effect from the context window length issue in the MAPF problem, so we cannot verify this hypothesis. Thus, we can only conclude that SBS is better than OS when the success rate is the sole consideration. However, SBS could potentially have a downside in terms of solution quality\\u2014for instance, requiring a greater number of total steps\\u2014compared to OS in scenarios where both could generate solutions. Nonetheless, since OS fails to provide a solution in those cases, this cannot be verified.\\n6. Figure 4 shows the **output** from the LLM that we have not changed. In this context, the \\u201cvalidated\\\" one refers to no collision at the current step.\\n\\n\\n[1]. Kambhampati S, Valmeekam K, Guan L, et al. Position: LLMs Can\\u2019t Plan, But Can Help Planning in LLM-Modulo Frameworks[C]//Forty-first International Conference on Machine Learning.\\n\\n[2]. Kalyanpur A, Saravanakumar K K, Barres V, et al. Llm-arc: Enhancing llms with an automated reasoning critic[J]. arXiv preprint arXiv:2406.17663, 2024.\\n\\n[3]. Chen, Y., Arkin, J., Zhang, Y., Roy, N., & Fan, C. (2024, May). Scalable multi-robot collaboration with large language models: Centralized or decentralized systems?. In 2024 IEEE International Conference on Robotics and Automation (ICRA) (pp. 4311-4317). IEEE.\\n\\n[4]. Wei J, Wang X, Schuurmans D, et al. Chain-of-thought prompting elicits reasoning in large language models[J]. Advances in neural information processing systems, 2022, 35: 24824-24837.\"}", "{\"title\": \"Further Response\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your response and your constructive suggestions.\\n\\nRegarding the inclusion of performance benchmarks for previous algorithms, we have added a sentence similar to the one we discussed with you in the discussion section of our paper. We invite you to review our updated paper, where the changes are highlighted in blue, and let us know if you agree with the chosen location for presenting these results.\\n\\n\\nRegarding the underlying reasons for these failures, we have already included the breakdown: \\u201c77% of the failures occurred because the LLM agents began to oscillate in a specific area of the map, while the remaining failures were due to excessively long detours,\\u201d prior to further analyzing the failures from the LLM perspective in our paper. Compared to the additional metrics you suggested, we believe that our current two classes of failures are more intuitive and directly linked to the reasons analyzed later in the paper. 
Furthermore, these reasons are well-recognized issues with LLMs, making them valuable research topics for future exploration. More specifically, metrics like the number of collisions and path overlap do not occur in our current workflow, as the LLM regenerates solutions whenever our solution checker detects conflicts in the actions generated for a given step. Regarding the proportion of agents reaching their goals, we found this metric to be highly scenario-dependent and subject to significant randomness. For example, in some scenarios, 6 out of 8 agents may successfully reach their goals, while in some others, only 2 succeed. Due to this variability, we are uncertain how this metric could contribute to further analysis of the reasons for failure and, therefore, did not include it in our paper. We would greatly appreciate it if you could provide additional suggestions on how this metric could be leveraged effectively. Thank you!\"}", "{\"title\": \"Further Response\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your further response. \\n\\n\\nTo address your concerns outlined in Weaknesses 4 and 5, we have revised our paper to ensure greater accuracy in our statements. We invite you to review the updated version, and we are happy to address any remaining concerns you may have. Specifically for the points you raised in the introduction, we revised the wording in the relevant paragraph to maintain the narrative without diminishing the contributions of existing works. Since the primary focus of (Chen et al., 2023b) and (Agashe et al., 2023b) is on multi-agent task planning rather than actual path planning, we believe that referencing these works is not necessary in the updated text, as we discuss our differences with them later in the related work section.\\n\\nRegarding Weakness 3, we believe this is closely related to your comment that \\\"giving step-wise information to the LLM is similar to Chain of Thought (CoT) prompting.\\\" First, we would like to clarify the two pairs of ablation studies presented in our paper, which may have caused some confusion. The first pair compares global observations (GO) versus single-step observations (SSO). This refers to whether local obstacle information\\u2014such as whether agent 1 can move left at a given moment\\u2014is provided at each step. The second pair compares one-shot (OS) generation versus step-by-step (SBS) generation. This refers to whether the LLM provides the entire path in a single response or generates actions for each agent iteratively, step by step. We believe that your original Weakness 3 and Question 5 primarily relate to the second pair of comparisons (OS vs. SBS). However, in your most recent response, it appears that you are discussing the connection between single-step observations (SSO) from the first pair of comparisons and CoT prompting. We agree that this connection is limited. To clarify again, the step-by-step generation approach allows LLMs to produce additional intermediate outputs, enabling them to reason effectively about action choices during the process. This aligns closely with the purpose of CoT prompting and its theoretical advantages, as outlined in [1].\\n\\nThank you again for your constructive suggestions on improving the clarity of our writing. We hope our response has addressed your concerns. Please let us know if you have any further questions or feedback.\\n\\n\\n[1]. Li Z, Liu H, Zhou D, et al. 
Chain of Thought Empowers Transformers to Solve Inherently Serial Problems[C]//The Twelfth International Conference on Learning Representations.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This is a paper that attempts to show that LLMs are not yet a viable option for solving multi-agent path finding (MAPF). The authors describe a method for prompting an LLM for a solution, checking the output for collisions, and iteratively re-prompting the LLM until a solution is found or until a max number of iterations is reached. They show that LLMs can solve MAPF problems when the planning problem is simple (such as single agent problems in an empty room with no obstacles) and that the capabilities of LLMs as MAPF solvers break down as the problem becomes more complex (multiple agents and more complex obstacle scenarios). They then discuss three possible causes of the poor performance (LLM\\u2019s reasoning capabilities, context length limits, and map \\u201cunderstanding\\u201d).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"LLMs are indeed experiencing a surge of popularity in a huge spectrum of applications. Any work that contributes to understanding their limitations is important.\", \"weaknesses\": \"1.\\tWhile it\\u2019s true that their method didn\\u2019t work well, the authors do not present enough evidence to justify saying \\u201cLLMs do not yet work for MAPF\\u201d (which is the main claim of the paper).\\n2.\\tExperimental results are given in Tables 1-4 without sufficient explanation of the methods. Would recommend a more concise problem statement in the paper (not appendix) that gives specifics of the scenarios, prompts, LLM outputs, and the collision checker.\\n3.\\tThe justification for why the SBS method the authors describe in the paragraph starting on line 207 is not clear. Other than providing a method to keep the context length shorter, it\\u2019s not apparent why this is a good approach.\\n4.\\tMany claims are made with little to substantiate them:\\na.\\tLine 041: previous work \\u201cbarely covers multi-agent planning\\u201d (Chen2023b and Agashe 2023)\\nb.\\tLine 047-049: Disagree on the list of previous methods. It should also include LLMs (Chen et al 2023b)\\nc.\\tBroad claims about LLMs are made in the section on Understanding Obstacle Locations that do not seem supported by the single example presented.\\nd.\\tThe authors claim (line 428) that the current workflow does not include any tool use, but they use an external collision checker (with little or no description of the checker).\\ne.\\tThe claim made on line 363 \\u201cpeople barely provide any such information online since people have common knowledge of what to do with a map\\u201d seems unfounded and there is nothing to support it.\\n5.\\tThere are significant grammar issues throughout. Some examples:\\na.\\t(Line 323) \\u201cHowever, recent studies have shown that long models like GPT4- turbo-128K are not a model whose capacity in 8K length also works when given a 128K-tokens input.\\u201d\\nb.\\tAnd (line 371) \\u201cwhich is killed by using much more total number of steps than it should\\u201d\\n6.\\tIn lines 102-103 the authors say \\u201cwe hope LLMs can be an alternative model to the current MAPF RL based models without additional training\\u201d. However, they have previously stated that in their formulation, this didn\\u2019t work. 
At this point, it\\u2019s better to say what you observed than what you had hoped for.\\n7.\\tThe authors state on line 201 \\u201cIt is unclear how well LLMs can solve MAPF problems,\\u201d but the main claim of the paper is that they aren\\u2019t good at it.\\n8.\\tIn Figure 4, the goal of Agent 1 is (3,1) and the goal of Agent 2 is (2,0), however, Agent 1 ends up at (0,1) and Agent 2 ends up at (0,3). These are not at the goals, and in fact, Agent 2 has moved further away from the goal, so it\\u2019s not clear why this is a \\u201cvalidated solution\\u201d, except for the fact that they did not collide. \\n9.\\tIt seems intuitive and not scientifically interesting that success rate drops as the size of the problem grows (Lines 424-426). \\n10.\\tThe formatting of references seems non-standard.\", \"questions\": \"1. Would comparing to a wider range of baseline methods (other than just 0S and SBS) help substantiate the claim that LLMs are not good for MAPF problems?\\n2. Under Methods: \\u201cFollowing common practices of LLMs\\u201d what exactly are these?\\n3. The authors mention giving the LLM \\u201cstepwise local information\\u201d what exactly does this mean? (I assume it is related to the context window size issue?)\\n4. The authors only give the LLM information to solve the \\u201cnext step\\u201d and keep re-prompting until the LLM provides a solution with no conflicts. How does this affect the global optimality of the solution?\\n5. Is there any additional reasoning/evidence for why the SBS method is advantageous (other than the context window length)? Are there any downsides to the SBS method? \\n6. What is meant by \\\"validated solution\\\" in Figure 4? The agents do not reach the goals specified in the figure.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper explores the use of large language models (LLMs) for multi-agent path-finding (MAPF) problems. The study investigates whether LLMs are able to generate valid paths for agents in different MAPF scenarios without heuristic guidance or additional training. Experiments show that while LLMs can effectively solve simple MAPF problems with a small number of obstacles, they face significant challenges in complex environments, frequently failing to produce collision-free solutions. The paper highlights three contributing factors: insufficient advanced reasoning capabilities, restrictions due to context length, and challenges in comprehending spatial information. Drawing from these insights, the authors propose future research directions to overcome these challenges and enhance LLMs\\u2019 performance in MAPF tasks.\\n\\nThe paper was reviewed by three referees who agree on the papers' key strengths and weaknesses. All three reviewers appreciate the identification of the limitations of using LLMs for multi-agent planning, which provides valuable insight for future work in LLM-based MAPF. However, the reviewers emphasize the need to compare to other MAPF algorithms including traditional methods as well as more advanced algorithms. The reviewers recognize that the paper's objective is not to outperform classic planners, but these comparisons would help to strengthen the paper's analysis by highlighting where LLMs fall short and would help to motivate the use of LLMs for MAPF. 
Related, the paper would benefit from experiments on more complex scenarios with further analysis of the success and failure of LLMs.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers appreciated the authors' responses, but all agreed that their primary concerns with the paper remained.\"}", "{\"summary\": \"This paper addresses the challenge of multi-agent pathfinding using large language models (LLMs). The authors demonstrate that while LLMs have proven effective for single-agent planning and, to a certain extent, for multi-agent pathfinding in relatively simple environments, current LLM capabilities are inadequate for planning in more complex settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper effectively highlights the current limitations of LLMs in multi-agent pathfinding (MAPF) domains through experiments conducted across environments of varying complexity.\", \"The authors provide an analysis of potential reasons behind LLMs\\u2019 challenges in effective planning.\", \"This work could serve as a valuable reference for future research directions in MAPF with LLMs.\"], \"weaknesses\": [\"The paper only addresses the limitations of LLMs in centralized planning paradigms. The authors claim to use the history of the agents as input in the prompts. Due to this, the context window limit is reached quite quickly when the number of agents is high. But since the environment is Markov, shouldn\\u2019t it be able to decide actions for the agents just based on their current states?\", \"The methods used to demonstrate the limitations of LLMs are relatively straightforward. Including comparisons to more advanced modular architectures, such as those incorporating memory modules and decentralized planners (e.g., [1, 2, 3]), would have strengthened the analysis\\u2014even though surpassing state-of-the-art classical planners is not the primary objective of the paper.\", \"Overall, the paper feels more akin to a research proposal than a definitive study; it identifies key challenges and proposes future research directions but lacks extensive experimental results demonstrating effective solutions.\", \"The authors suggest three possible reasons for LLMs' shortcomings in MAPF: reasoning limitations, context length constraints, and difficulty understanding obstacle locations. However, these challenges should also arise in single-agent scenarios, where the agent must similarly reason and interpret obstacles, yet LLMs perform well in those cases. Could the authors provide further insights by comparing single-agent and multi-agent tasks?\"], \"references\": \"[1]: Building Cooperative Embodied Agents Modularly with Large Language Models. Hongxin Zhang and Weihua Du and Jiaming Shan and Qinhong Zhou and Yilun Du and Joshua B. Tenenbaum and Tianmin Shu and Chuang Gan. https://arxiv.org/abs/2307.02485\\n\\n[2]: Nayak, S., Orozco, A. M., Have, M. T., Thirumalai, V., Zhang, J., Chen, D., ... & Balakrishnan, H. (2024). Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments.\\u00a0*arXiv preprint arXiv:2407.10031*.\\n\\n[3]: Chen, Y., Arkin, J., Zhang, Y., Roy, N., & Fan, C. (2024, May). Scalable multi-robot collaboration with large language models: Centralized or decentralized systems?. In\\u00a0*2024 IEEE International Conference on Robotics and Automation (ICRA)*\\u00a0(pp. 4311-4317). 
IEEE.\", \"questions\": \"There are some serious concerns that I have pointed out in the previous section\\n\\n## Limitations:\\n\\nThe methods used to obtain plans with LLMs are quite simple. It would have been better to have some more sophisticated methods (used in prior literature) to compare and show their failure modes. Just using LLMs for planning might not be a reasonable approach and probably that\\u2019s why many of the recent papers come up with more sophisticated methods (different roles with LLMs, decentralization, etc.)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your review. Here we address your concerns point by point:\\n> since the environment is Markov, shouldn\\u2019t it be able to decide actions for the agents just based on their current states?\\n\\n\\nIndeed, the MAPF problem itself is Markov, however, because LLM is not perfect, the LLM can leverage the past history to change its preferred action in the same state, and potentially escape from a local stuck. We invite you to look into the case study in our appendix where such a behavior, instead of improving the performance, decreases the performance of o1-series, but perfectly demonstrates how LLM is leveraging the past history.\\n\\n\\n> Including comparisons to more advanced modular architectures, such as those incorporating memory modules and decentralized planners (e.g., [1, 2, 3])\\n\\n\\nThank you for pointing them out. We want to clarify that we have cited the third paper you have cited, and their conclusion that decentralized planners are no better than a centralized planner has guided us to focus on the centralized design given that we are currently focusing on the success rate itself.\\n\\n\\n> the paper feels more akin to a research proposal\\n\\n\\nIndeed, our paper is, in fact, a position paper on what the future research on LLM for MAPF should be. We sincerely hope you could potentially reevaluate the significance in this case.\\n\\n\\n> Could the authors provide further insights by comparing single-agent and multi-agent tasks\\nWhile the three factors analyzed in this paper also apply to single-agent planning, the multi-agent setting introduces additional challenges due to the output length increasing at least linearly with the number of agents. Longer outputs expand the total context length, necessitating a restart mechanism in our algorithm. This mechanism reinitializes the entire LLM system with a new problem, using the current locations of all agents as their starting points when context length limits are reached. While this approach addresses the immediate problem, it negatively impacts final solutions by causing the algorithm to lose earlier information, such as each agent's preferred direction and potential map locations that the LLM initially struggled to encode. These losses further exacerbate the other two challenges discussed.\\n\\nIn single-agent settings, the LLM's extended input context can help avoid repeated paths, even when the model does not perfectly understand obstacle locations. Similarly, the solution checker can guide the LLM in producing viable paths even if the generated path is suboptimal. However, in multi-agent settings, the limited capabilities of LLMs lead to more frequent mistakes, resulting in additional restarts. 
Each restart compounds the loss of historical information, such as agent preferences and map details, ultimately increasing failure rates.\\n\\nOur experiments demonstrate that on an empty map, where collision avoidance is the only constraint, LLM solvers can effectively scale to handle up to 16 agents (one agent for every four cells on the map). However, on larger maps with fixed obstacles, LLM solvers struggle even with only eight agents. Comparing these results suggests that agent collisions are not the primary factor driving the performance gap. Instead, the increased output length appears to be the main reason for the performance differences between single-agent pathfinding and multi-agent pathfinding (MAPF).\\n\\nBesides, it is also possible that the reasoning capabilities of current LLMs are also contributing to the failure. As our case study on o1-models showed, LLMs can sometimes fail to use past history to guide future actions correctly, which is an observation that has not happened in the single-agent scenario. However, we believe that this is a smaller issue as even if we have manually regenerated incorrect steps to resolve this problem, the performance of MAPF problems is still worse than single-agent task, so we believe that the main issue is still the complexity caused by multi-agent settings.\"}", "{\"summary\": \"The paper explores the feasibility of using large language models (LLMs) for solving the Multi-Agent Path Finding (MAPF) problem. While LLMs have demonstrated success in various fields, this study examines their limitations in handling MAPF due to issues with reasoning capabilities, context length limits, and obstacle comprehension. Experiments on standard MAPF benchmarks show that LLMs perform well on simple scenarios but struggle as problem complexity increases. The authors conclude that current LLMs are insufficient for MAPF without additional improvements or hybrid systems integrating traditional path-planning methods.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This paper addresses a unique application of LLMs to multi-agent coordination, specifically MAPF.\", \"The paper identifies general limitations of LLMs in multi-agent coordination tasks, with some illustrative failure cases.\", \"The discussion outlines the general challenges of using LLMs in MAPF and suggests broad areas for improvement.\"], \"weaknesses\": [\"Given that these problems are well-addressed by analytical methods, could the authors elaborate on the concrete advantages of using LLMs for MAPF compared to existing analytical methods?\", \"As the findings primarily reiterate known LLM challenges (e.g., context limitations and reasoning issues) without introducing MAPF-specific insights or innovations. The relevance to MAPF needs to be clarified. The authors are suggested to highlight any MAPF-specific challenges or insights they discovered.\", \"The experiments are restricted to simple cases that may not generalize to real-world MAPF tasks, which undermines the strength of the study\\u2019s conclusions.\", \"The chosen prompt design lacks justification as the best approach for MAPF. Without alternative prompting methods or tuning strategies, it is unclear if the observed limitations are universal or specific to this setup.\"], \"questions\": \"1. How do you justify that the proposed prompt method is the best approach and that its failure indicates no other prompting method can address the MAPF problem? 
How do you ensure the conclusions are generalizable beyond this specific scenario and prompt?\\n2. Given that the insights are common LLM limitations, not specific to MAPF, what is the unique benefit of this research? Are there distinct challenges in MAPF that differ from general LLM challenges, making any of the observations particularly relevant?\\n3. Why use LLMs for MAPF or even SAPF? The necessity is unclear, as these problems can be well-addressed using traditional analytical methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the use of LLMs for solving multi-agent path-finding problems(MAPF), focusing on moving multiple agents from start to goal locations without collisions. The study explores whether LLMs, without additional training or heuristic guidance, can effectively generate valid paths for agents in different MAPF scenarios, including simple and complex environments.\\nExperiments reveal that while LLMs can solve straightforward MAPF cases with limited obstacles, they struggle with more challenging environments, often failing to generate collision-free solutions. The paper identifies three primary reasons for LLMs\\u2019 limitations in MAPF: lack of advanced reasoning, context length limitations, and difficulty understanding spatial map information. Based on these findings, the authors suggest directions for future work to address these limitations and improve LLMs' MAPF performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"Firstly, the paper is well-structured, and it clearly explains why LLMs may be suitable candidates for MAPF due to their reasoning capabilities and large contextual understanding. It conducts experiments on various MAPF benchmark maps, evaluating LLMs\\u2019 performance in different scenarios and identifying specific failure points. The paper\\u2019s breakdown of LLM limitations, such as reasoning and spatial understanding, provides useful insights for future improvements in LLM-based MAPF solutions. Different prompt styles and input representations (e.g., text-only, multimodal) are compared, contributing valuable insights into how prompt structure affects LLM performance.\", \"weaknesses\": \"The paper does not compare the LLM-based approach with traditional MAPF algorithms (e.g., heuristic search, SAT, or reinforcement learning). Including baseline comparisons would provide a better understanding of how LLMs perform relative to established methods.\\nIt also lacks visualizations of agent paths and collision instances, which would improve clarity and provide a more intuitive understanding of LLM performance. Success metrics focus on whether a solution is collision-free, with limited emphasis on solution quality (e.g., path optimality or efficiency). Detailed metrics would offer a clearer picture of LLMs\\u2019 efficacy in generating high-quality paths.\", \"questions\": \"The authors can consider including baseline methods like heuristic search or SAT-based MAPF algorithms for comparison. Such comparisons would clarify whether LLMs bring any unique advantage to MAPF. Besides, they can evaluate the generated paths for metrics like makespan, and path length. 
Including these metrics could highlight the quality of LLM-generated solutions relative to optimal or near-optimal paths.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your review. Here, we answer your concerns about the weaknesses (as labeled W1-W4 sequentially) and questions (as labeled Q1-Q3).\\n\\n\\nQ1. W4. These questions are related to whether the proposed prompt method is the best and whether no other prompting method can address the MAPF problem. Our study has broken down the failing reasons into three aspects, i.e., context length, understanding obstacles, and general reasoning capability related to pathfinding. All three aspects are known LLM challenges, as you already acknowledged, and all these challenges are also known to be not prompt-sensitive and cannot be solved with prompt engineering. Thus, we believe that whether our prompt is the best one is less important, and even if there is a better prompt, our conclusion will still remain the same. \\n\\n\\nQ2. W1. W2. These are questions about the contribution of our work in comparison to both current work on MAPF and current work on LLMs. \\n\\n\\nFrom the perspective of MAPF researchers, while the current LLMs show no advantage compared to existing search-based methods or RL-based methods for MAPF, we hope our paper can enable research on LLM-based MAPF solvers that can take advantage of the rapid advancements in LLM technology. More importantly, from the perspective of LLM researchers, the problem of MAPF remains one of the challenging tasks that LLMs still perform very poorly on. When the Blocksworld benchmark (which is related to MAPF) was first introduced to the LLM community, it was hard for LLMs at the time, and yet it can now be solved much better by the latest o1 model published by OpenAI.
We hope that by introducing MAPF as an LLM planning benchmark in our paper, we can provide a useful next frontier to challenge the abilities of LLMs in the domain of long-context, symbolic understanding, and planning/reasoning capabilities. \\n\\n\\nQ3. We have included the results of SAPF in section 2.2 of our paper, where the success rate of LLMs as the solver for SAPF is much better than for MAPF with the same workflow that involves a rule-based checker. That is why we moved beyond SAPF to MAPF where the challenges of context length compound with the other challenges to significantly reduce the success rate.\\n\\n\\nW3. We are currently using the scenarios from the MAPF benchmark [R1]. The benchmark is the most commonly used benchmark that captures the key challenges in real-world MAPF problems, and in general, the MAPF research community believes a good performance on the MAPF benchmark can be generalized to real-world MAPF tasks with relatively easy adaptations.\\n\\n\\n[R1]. Stern R, Sturtevant N, Felner A, et al. Multi-agent pathfinding: Definitions, variants, and benchmarks[C]//Proceedings of the International Symposium on Combinatorial Search. 2019, 10(1): 151-158.\"}", "{\"comment\": \"Thank you for the detailed explanation. After reconsidering the paper including your responses, my overall assessment remains unchanged.\\nFor the first question, while I understand that the primary aim of the paper is exploratory rather than competitive, a comparison with traditional MAPF algorithms would still provide essential context. Even if LLMs cannot yet match traditional methods, benchmarking against these standards could highlight where LLMs fall short and provide a stronger foundation for motivating future work. \\nFor the second question, I agree that focusing on obtaining feasible solutions in harder scenarios is indeed more critical at this stage. However, the current evaluation primarily highlights the failures without clearly analyzing the underlying reasons for these failures. Including additional metrics, such as the number of collisions, path overlap, or the proportion of agents reaching their goals, could provide more insight into why the LLMs fail and help identify specific bottlenecks in their performance.\"}" ] }
BVsFp5rQxd
VoiceNoNG: High-Quality Speech Editing Model without Hallucinations
[ "Sung-Feng Huang", "Heng-Cheng Kuo", "Zhehuai Chen", "Xuesong Yang", "Pin-Jui Ku", "Ante Jukić", "Chao-Han Huck Yang", "Yu Tsao", "Yu-Chiang Frank Wang", "Hung-yi Lee", "Szu-Wei Fu" ]
Currently, most advanced speech editing models are based on either neural codec language models (NCLM) (e.g., VoiceCraft) or diffusion models (e.g., Voicebox). Although NCLM can generate higher quality speech compared to diffusion models, it suffers from a higher word error rate (WER) (Peng et al., 2024), calculated by comparing the transcribed text to the input text. We identify that this higher WER is due to attention errors (hallucinations), which make it difficult for NCLM to accurately follow the target transcription. To maintain speech quality and address the hallucination issue, we introduce VoiceNoNG, which combines the strengths of both model frameworks. VoiceNoNG utilizes a latent flow-matching framework to model the pre-quantization features of a neural codec. The vector quantizer in the neural codec implicitly converts the regression problem into a token classification task similar to NCLM. We empirically verified that this transformation is crucial for enhancing the performance and robustness of the speech generative model. This simple modification enables VoiceNoNG to achieve state-of-the-art performance in both objective and subjective evaluations. Lastly, to mitigate the potential risks posed by the speech editing model, we examine the performance of the Deepfake detector in a new and challenging practical scenario. Audio examples can be found on the demo page: https://anonymous.4open.science/w/NoNG-8004/
[ "speech generative model", "speech editing", "neural codec", "vector quantizer", "deepfake detection" ]
Reject
https://openreview.net/pdf?id=BVsFp5rQxd
https://openreview.net/forum?id=BVsFp5rQxd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tjT28IkjLS", "skbzLigrBo", "phnTQogY1E", "pVeof3gM5F", "i7L4tcJU8t", "V0mzj2DZpF", "TcawyihFWd", "SKrorD34AV", "Lq2jlVPXLK", "H4dR8CtFDf", "CUMEHezBNe", "7qcMbEIvMA", "79TkDcLzwh", "006BHV1EVU" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734750410838, 1732003678487, 1732002279130, 1732004193145, 1730445882155, 1730703890982, 1730181059101, 1737524074859, 1730795784190, 1732002621619, 1732005055432, 1732003300934, 1732004940212, 1732732701357 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10751/Area_Chair_aFdF" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Reviewer_xqLA" ], [ "ICLR.cc/2025/Conference/Submission10751/Reviewer_vP9D" ], [ "ICLR.cc/2025/Conference/Submission10751/Reviewer_P8Yw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10751/Reviewer_5Wua" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ], [ "ICLR.cc/2025/Conference/Submission10751/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"**Paper Summary:**\\n\\nThis paper proposes an adaptation of the Voicebox architecture for speech editing, replacing the intermediate Mel-spectrogram representation with continuous pre-quantized neural audio codec features. Experiments show that this modification outperforms the baseline Voicebox (diffusion) as well as the VoiceCraft (autoregressive model over quantized neural audio codec tokens) editing models on the RealEdit dataset.\\n\\n**Strengths:**\\n\\nReviewers agree that the change of feature representation (Mel-spectrogram -> neural audio codec) is interesting and well-motivated. \\n\\n**Weaknesses:**\\n\\nAll reviewers raised significant issues with the exposition of this paper. \\\"The paper's structure is uneven\\\" (5Wua). \\\"The article is in an unpolished or even unfinished state\\\" (vP9D). \\\"The paper presents a rather simplistic introduction to the proposed method\\\" (xqLA). \\\"The statements in the paper are sometimes very vague\\\" (P8Yw). Reviewers also raise significant concerns about the experimental claims with respect to training datasets (vP9D, xqLA) and evaluation metrics (5Wua, P8Yw).\", \"additional_comments_on_reviewer_discussion\": \"The authors have responded extensively to the reviewers' comments. On the question of exposition in particular: the authors have largely disputed reviewer criticism, rather than consider how best to improve the organization and claims of the paper. I broadly agree with the reviewer assessments about both structural issues with the paper, and specific concerns about unscientific claims. I urge the authors to take the reviewer feedback constructively.\"}", "{\"title\": \"Resonse to Reviewer xqLA (part2)\", \"comment\": \"***Q1: The paper states that the poor performance of the NCLM-based model is due to attention errors (hallucination phenomena). 
However, the poor performance of LM-based models could also be influenced by factors such as sampling methods and codebook size. How can it be proven that the issues are specifically caused by hallucination?***\\n\\n\\nThank you for highlighting this! The NCLM-based model performs well in terms of speech quality, indicating that the codebook size is sufficiently large. However, its limitation lies in the inability to **generate speech that accurately matches the target transcription.** The phenomenon of hallucinations in LLM-based TTS is a recently identified issue (please refer to [1, 2] for further details; we also cited [1] in our paper). [1] mentioned that \\u201cLLM-based TTS models are not robust as the generated output can contain repeating words, missing words and mis-aligned speech (referred to as hallucinations or attention errors), especially when the text contains multiple occurrences of the same token.\\u201d\", \"although_the_authors_of_the_voicecraft_paper_also_partially_observed_this_issue_and_discussed_it_in_section_7\": \"Limitations: \\u201cFirst and foremost is the long silence and scratching sound that occasionally occur during generation. Although in this work, we overcome it with sampling multiple utterances and selecting the shorter ones, more elegant and efficient methods are needed.\\u201d, they seem unaware that this issue may be related to the higher WER (Table 4 in Voicecraft paper).\\n\\nWe demonstrated two examples of such hallucinations in Tables 4 and 5 and provided the corresponding audio examples on our demo page (the link is provided in the paper abstract). You can easily access these audio examples under the section \\\"2. Examples of attention errors (hallucinations) of VoiceCraft\\\" on the demo page.\\n\\n\\n[1] Neekhara, P., Hussain, S., Ghosh, S., Li, J., Valle, R., Badlani, R., & Ginsburg, B. (2024). Improving robustness of llm-based speech synthesis by learning monotonic alignment. arXiv preprint arXiv:2406.17957.\\n\\n[2] Battenberg, E., Skerry-Ryan, R. J., Stanton, D., Mariooryad, S., Shannon, M., Salazar, J., & Kao, D. (2024). Very Attentive Tacotron: Robust and Unbounded Length Generalization in Autoregressive Transformer-Based Text-to-Speech. arXiv preprint arXiv:2410.22179.\\n\\n***Q2: In line 199, the statement \\\"Since no code and model checkpoints are available for Voicebox, we reproduced the results\\\" raises a question. Does this mean that you retrained Voicebox based on open-source code, or did you replicate the experimental results from Voicebox? If it is the latter, please specify which tables the data comes from, as I could not find the same data in the Voicebox paper.***\\n\\nWe retrained Voicebox by ourselves with a training pipeline similar to the proposed VoiceNoNG.\\n\\n***Q3: The \\\"Spotify\\\" column in Table 1 should indicate that VoiceCraft (330M) performs the best.***\\n\\nThank you for pointing this out! We have revised this issue.\\n\\n***Q4: Is VoiceBox and HiFi-GAN trained on GigaSpeech, and then compared with it?***\\n\\nAlthough we don't have the results for HiFi-GAN trained on GigaSpeech, we believe the performance may not be as robust as our VoiceNoNG, as HiFi-GAN lacks a VQ module. 
The VQ module helps implicitly transform the **regression** problem into a token **classification** task, which enhances robustness against **prediction errors** introduced by the editing model, as demonstrated in Figure 3.\\n\\n\\nWe believe the results of 'Post-quantization' can provide some information for HiFi-GAN trained on GigaSpeech. Directly replacing the HiFi-GAN vocoder with the DAC decoder is not an optimal solution. Although Tables 1 and 2 show that using 'Post-quantization' with DAC results in better WER (4.73 vs. 4.97) and speech quality (18.93 dB vs. 16.90 dB) compared to Voicebox(Giga,Mel), applying **Pre-quantization with the VQ module and CE loss** achieves the best performance (WER: 4.54, speech quality: 20.44 dB).\"}", "{\"title\": \"Resonse to Reviewer P8Yw\", \"comment\": \"***1.& 2. The overall contribution is minimal with no original idea presented in the paper. It appears that the authors have merely replaced the feature representation in VoiceBox model with DAC features.***\\n\\n\\nWe would like to first thank the reviewer for recognizing that our paper is well-written with a clear description and strong motivation. The goal of this paper is to address the problems present in current state-of-the-art speech editing models: VoiceCraft (struggles to generate speech **accurately following** the target transcription) and Voicebox (suffers from reduced speech quality when **background audio** is present). **Identifying these issues is also one of the contributions of this paper.**\\n\\nAlthough we considered a more sophisticated framework to tackle these issues (e.g., applying a speech enhancement model to disentangle background audio from speech and performing the infilling separately), we found that our current simple framework can **already effectively** address these challenges (please listen to the demos in our demo page).\\n\\nBecause our proposed solution is relatively elegant, we decided to focus more on the **motivation** and **experimental** parts to provide insights to the community.\\n\\nAlthough the model framework is simple, we believe this paper makes significant contributions to the related research community. \\n\\n- 1, We **identify the robustness problem of neural codec language models** (e.g., VoiceCraft) in speech editing. \\n\\n- 2, We conduct comprehensive experiments to highlight the **pros and cons** of VoiceCraft and Voicebox. \\n\\n- 3, Our ablation study and Figure 3 demonstrate the importance of flow-matching for **modeling pre-quantization features with the VQ module**, which can improve the robustness of models against small prediction error by implicitly converting the regression problem into a token classification task similar to NCLM. \\n\\n- 4, The proposed VoiceNoNG achieves **state-of-the-art performance** in both objective and subjective evaluations. \\n\\n- 5, Considering the potential for neural codecs to become a new audio format standard (such as mp3 format), the assumption that all codec-generated speech is fake may soon be unrealistic. Therefore, we propose a new and challenging **practical scenario for deepfake detection**, contributing to the relevant community.\\n\\nWe kindly ask the reviewer to **listen to the demos of our proposed method and compare them with other SOTAs** (https://anonymous.4open.science/w/NoNG-8004/), and please don\\u2019t overlook the contribution of this paper due to its simple framework.\\n\\n\\n***3. 
The statements in the paper are sometimes very vague such as \\\"This diversity makes RealEdit more challenging compared to ...\\\" on line 222. Given that this is a scientific paper, there needs to be an explicit description of what other datasets lack which RealEdit provides.***\\n\\nIn our original paper, the full context is: \\u201cThe edits span various types\\u2014insertions, deletions, and substitutions\\u2014across different lengths, from short (1-2 words) to long (7-12 words), with single or multiple spans. This diversity makes RealEdit more challenging compared to other speech synthesis evaluation datasets.\\u201d\\nAs stated in the context, the diverse edit types (insertions, deletions, and substitutions) and different edit lengths (short, long) are what other datasets do not contain. We will revise the paper to make this more clear.\\n\\n***4. Evaluation of edited speech by WER is not helpful in determining the quality/intelligibility of generation because ASR models have an implicit language model that corrects mispronunciations and even words based on context.***\\n\\nWe partially **disagree** with this statement. While it is true that ASR systems may correct mispronunciations based on context, they still provide **valuable** information about the intelligibility of generated speech. Additionally, previous speech editing works, such as VoiceBox and VoiceCraft, **DO** report WER results of ASR as a **metric of intelligibility**. For instance, section 4 of VoiceBox states: \\u201cCorrectness and intelligibility: This can be measured by the word error rate (WER) of the synthesized speech\\u2019s transcription with respect to the input text, which has been adopted in prior work [Wang et al., 2018]. Public automatic speech recognition (ASR) models are used for comparability.\\u201d Similarly, VoiceCraft reports WER results in their Tables 3 and 4.\"}", "{\"title\": \"Resonse to Reviewer vP9D\", \"comment\": \"***The pre-quantization feature by predicting DAC is a variant of Latent Diffusion, which has been widely proven to be effective. So the solution is not novel....***\\n\\nThe main technical contribution we aim to highlight is that applying a VQ module provides **additional robustness**, as demonstrated in Section 3.1.6. We argue that VQ transforms the **regression** problem into a token **classification** task (similar to NCLM). As a result, small prediction errors\\u2014provided they do not exceed the token decision boundary\\u2014are still mapped to the correct token after VQ. **While simple, we believe this observation will be valuable to the broader research community.**\", \"the_goal_of_this_paper_is_to_address_the_problems_present_in_current_state_of_the_art_speech_editing_models\": \"VoiceCraft (struggles to generate speech **accurately following** the target transcription) and Voicebox (suffers from reduced speech quality when **background audio** is present). 
**Identifying these issues is also one of the contributions of this paper.**\\n\\nAlthough we considered a more sophisticated framework to tackle these issues (e.g., applying a speech enhancement model to disentangle background audio from speech and performing the infilling separately), we found that our current simple framework can **already effectively** address these challenges (please listen to the demos in our demo page).\\n\\nBecause our proposed solution is relatively elegant, we decided to focus more on the **motivation** and **experimental** parts to provide insights to the community.\\n\\nAlthough the model framework is simple, we believe this paper makes significant contributions to the related research community. \\n\\n- 1, We **identify the robustness problem of neural codec language models** (e.g., VoiceCraft) in speech editing. \\n\\n- 2, We conduct comprehensive experiments to highlight the **pros and cons** of VoiceCraft and Voicebox. \\n\\n- 3, Our ablation study and Figure 3 demonstrate the importance of flow-matching for **modeling pre-quantization features with the VQ module**, which can improve the robustness of models against small prediction error by implicitly converting the regression problem into a token classification task similar to NCLM. \\n\\n- 4, The proposed VoiceNoNG achieves **state-of-the-art performance** in both objective and subjective evaluations. \\n\\n- 5, Considering the potential for neural codecs to become a new audio format standard (such as mp3 format), the assumption that all codec-generated speech is fake may soon be unrealistic. Therefore, we propose a new and challenging **practical scenario for deepfake detection**, contributing to the relevant community.\\n\\nWe kindly ask the reviewer to **listen to the demos of our proposed method and compare them with other SOTAs** (https://anonymous.4open.science/w/NoNG-8004/), and please don\\u2019t overlook the contribution of this paper due to its simple framework.\\n\\n\\n\\n\\n\\n***The new method proposed in the article, in my opinion, should be experimented on both TTS and speech editing, and compared with the VoiceBox model.***\\n\\nThank you for your suggestion! We will consider exploring the TTS setting as part of our future work, as the proposed VoiceNoNG can also function as a TTS model. However, we believe speech editing presents a more challenging task in terms of content coherence. For the edited segment to sound natural, the speaker characteristics and background audio (e.g., noise, music, etc.) must remain consistent with the surrounding context.\\n\\n\\n***This article describes the VoiceCraft model extensively, just to show that using gigaspeech data sets can enhance the effect of speech editing with background sound. In my opinion....***\\n\\n\\nFor a fair comparison, we also report the Voicebox results trained on **GigaSpeech**, as shown in Tables 1 and 2. Despite being trained on GigaSpeech, Voicebox still exhibits worse WER and speech quality compared to the proposed VoiceNoNG. This highlights a key drawback of Voicebox: its model framework, which relies on **mel-spectrograms and HiFi-GAN**.\\n\\nDirectly replacing the HiFi-GAN vocoder with the DAC decoder is also **NOT** an optimal solution. Although Tables 1 and 2 show that using 'Post-quantization' with DAC results in better WER (4.73 vs. 4.97) and speech quality (18.93 dB vs. 
16.90 dB) compared to Voicebox*(Giga,Mel), applying **Pre-quantization with the VQ module and CE loss** achieves the best performance (WER: 4.54, speech quality: 20.44 dB).\\n\\n\\n***Rough writing, lack of model or flow chart. There is also a problem with the organization of the article...***\\n\\nWe apologize for giving you this impression. In fact, we carefully polished the paper before submission. Given that our proposed solution is relatively simple, we chose to emphasize the **motivation** and **experimental** sections to provide valuable insights to the community. Regarding the organization of the paper, we will follow your suggestion and make revisions.\"}", "{\"summary\": \"This paper first examines the limitations of current advanced speech editing models. Voicebox produces lower-quality speech when background audio (such as noise or music) is present, while VoiceCraft struggles to accurately follow text input, a common hallucination issue with autoregressive models. To address these challenges, the paper introduces VoiceNoNG, which leverages the advantages of both models. The authors also explore the impact of the vector quantization module in the neural codec on achieving a robust speech editing model. Finally, to mitigate potential risks posed by the speech editing model, the authors examine the performance of a deepfake detector in a new, challenging practical scenario.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper first discusses the advantages and limitations of two different architectures for speech editing models. It then introduces a novel speech editing model based on DAC and latent flow-matching, which can be seen as an improvement to the VoiceBox model. The improved model achieves a lower WER and enhanced speech quality. The authors also investigate the impact of the VQ module on speech editing, with findings that may extend to other application areas. Finally, the authors examine the performance of a deepfake detector in a new and challenging practical scenario, contributing to the deepfake detection community.\", \"weaknesses\": \"The paper presents a rather simplistic introduction to the proposed method, with much of the first two chapters focusing on popular science explanations. In the abstract, the authors state that the poor performance of the NCLM-based model is due to attention errors (hallucination phenomena), but they provide only a few examples to support this claim, lacking more extensive experiments on attention mechanisms. The paper mentions that the model combines the advantages of VoiceCraft and VoiceBox, but it seems to only merge the Codec with VoiceBox. Additionally, the authors point out that the poor performance of VoiceBox is due to the HiFi-GAN being trained on clean speech; however, the experimental section lacks a comparison with a HiFi-GAN trained on noisy speech for VoiceBox.\", \"questions\": \"Q1: The paper states that the poor performance of the NCLM-based model is due to attention errors (hallucination phenomena). However, the poor performance of LM-based models could also be influenced by factors such as sampling methods and codebook size. How can it be proven that the issues are specifically caused by hallucination?\", \"q2\": \"In line 199, the statement \\\"Since no code and model checkpoints are available for Voicebox, we reproduced the results\\\" raises a question. 
Does this mean that you retrained Voicebox based on open-source code, or did you replicate the experimental results from Voicebox? If it is the latter, please specify which tables the data comes from, as I could not find the same data in the Voicebox paper.\", \"q3\": \"The \\\"Spotify\\\" column in Table 1 should indicate that VoiceCraft (330M) performs the best.\", \"q4\": \"Is VoiceBox and HiFi-GAN trained on GigaSpeech, and then compared with it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new speech editing model called VoiceNoNG, which is based on VoiceBox and has been improved in two ways. On the one hand, the target is replaced with DAC features, and on the other hand, it is trained on the gigaspeech dataset to improve the effectiveness of speech editing, especially in scenarios with background noise. The paper also proposes a new deepfake speech detection method that considers reconstructed real speech.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The method proposed in the article has its rationality, enhancing the effect of speech editing by predicting high-dimensional features instead of Mel features. Since DAC features are better for information compression, replacing Mel is a good solution.\\n2. The article proposes more practical deepfake detection metrics and corresponding models, which have certain practical value.\", \"weaknesses\": \"1. The pre-quantization feature by predicting DAC is a variant of Latent Diffusion, which has been widely proven to be effective. So the solution is not novel. The novelty comes from the additional CE loss.\\n2. The new method proposed in the article, in my opinion, should be experimented on both TTS and speech editing, and compared with the VoiceBox model. \\n3. This article describes the VoiceCraft model extensively, just to show that using gigaspeech data sets can enhance the effect of speech editing with background sound. In my opinion, data selection is an engineering problem and should not be presented as a major contribution. The claim (These two factors result in Voicebox not being good at generating speech with background audio.) in Line 154 is not scientific, This drawback is due to training data, not Voicebox's drawback.\\n4. Rough writing, lack of model or flow chart. There is also a problem with the organization of the article. For example, the drawback of VoiceBox and VoiceCraft (Line 152 and Line 156) should not be placed in the proposed VoiceNoNG section. The ablation description (Line 248) should not be included in the WER metric. The article is in an unpolished or even unfinished state.\", \"questions\": \"See above. In my opinion, the contribution of the article is limited or not fully explored. And more importantly, the rough writing makes it within an unpolished or even unfinished state.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces VoiceNoNG, a new speech editing model that combines the strengths of two existing approaches in the field: Voicebox (based on flow-matching model) and VoiceCraft (based on neural codec language models). 
While current speech editing models like VoiceCraft can generate high-quality speech, they suffer from hallucination issues that lead to higher word error rates, including problems like unintended silences, slow speaking pace, or missing/repeated words. VoiceNoNG addresses these issues by utilizing a latent flow-matching framework and incorporating the Descript Audio Codec (DAC) instead of traditional Mel-spectrograms as its input feature representation.\\nThe key innovation of VoiceNoNG is its use of pre-quantization features and a vector quantizer (VQ) module, which provides additional robustness against minor prediction errors similar to quantization noise. The model also employs a cross-entropy loss to enhance codec prediction accuracy. In experimental evaluations using the RealEdit dataset, VoiceNoNG outperformed both VoiceCraft and Voicebox variants in terms of word error rate (WER) and speech quality metrics. The authors demonstrate that modeling pre-quantization features and including the VQ module are crucial for developing a robust speech editing model that can maintain high quality while avoiding hallucination issues.\\nFinally, the authors develop a deepfake detector by classifying edited, real and synthesized speech portion in an utterance. The proposed method is an extension of prior model by adding a new class (synthesized) into the target.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Well written paper with clear and concise description of the proposed method.\\n2. The problem statement is extremely interesting with good motivation\\n3. Shows that feature representation from neural codecs can be helpful for tasks like editing\\n4. Development of deepfake detector which is crucial given how these models can be abused by bad actors\", \"weaknesses\": \"1. The overall contribution is minimal with no original idea presented in the paper.\\n2. It appears that the authors have merely replaced the feature representation in VoiceBox model with DAC features.\\n3. The statements in the paper are sometimes very vague such as \\\"This diversity makes RealEdit more challenging compared to ...\\\" on line 222. Given that this is a scientific paper, there needs to be an explicit description of what other datasets lack which RealEdit provides. \\n4. Evaluation of edited speech by WER is not helpful in determining the quality/intelligibility of generation because ASR models have an implicit language model that corrects mispronunciations and even words based on context.\\n5. The hallucination claim is completely based on WER which is not astounding to begin with (see point 4). Further, the WER difference between Voicecraft and proposed technique is minimal except for YouTube dataset.\\n6. The SI-SDR difference are again very small to be meaningful across various models. This is probably not perceivable by human listeners. \\n7. MOS evaluation is unrealiable given there is only 3 rating per sample for a total of 10 samples per model (see ITU-T P.800 recommendations).\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a speech generation model for speech editing that incorporates future context to ensure smooth and seamless transitions. 
The model is built on the Voicebox architecture, but with a modification that replaces the intermediate acoustic feature mel-spectrogram with the continuous hidden features of the neural codec model DAC. The author demonstrates that this modification leads to some performance improvements, particularly in terms of Word Error Rate (WER).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors compare the proposed method against two baselines, VoiceCraft and VoiceBox, representing autoregressive (AR) and non-autoregressive (NAR) models, respectively. The results demonstrate a performance improvement. Additionally, the authors conduct an ablation study that shows the effectiveness of the proposed approach.\", \"weaknesses\": [\"There are several fundamental issues with this work:\", \"**Unbalanced Structure**: The paper's structure is uneven, with the first three pages primarily dedicated to background and related work, while the proposed method is introduced briefly in a single paragraph at the end of Section 2. This creates an imbalance that detracts from the focus on the contributions of the work.\", \"**Questionable Claim About Voicebox Performance**: In Section 2, the authors suggest that the poor performance of Voicebox in generating speech with background audio is due to its use of the HiFi-GAN vocoder and mel-spectrogram. However, this assertion could be problematic. HiFi-GAN with mel-spectrograms has been demonstrated to effectively generate a wide variety of sounds, including music and singing voices. This raises doubts about the validity of the paper\\u2019s motivation, as it seems based on an incorrect premise.\", \"**Prior Work Overlooked**: The introduction of cross-entropy loss as an auxiliary loss function for diffusion models has already been proposed in NaturalSpeech 2. The authors should clearly acknowledge this prior work in Section 2, as this is not a novel contribution and may lead to some confusion regarding the originality of the approach.\", \"**Previous Work on Codec Embeddings**: The use of diffusion models to generate codec embeddings was already explored in NaturalSpeech 2. Although the authors claim that performance differences arise from using embeddings from different layers (before versus after quantization), the novelty of this contribution appears limited and unlikely to significantly expand the current body of knowledge in this area.\", \"**Concerns About WER Evaluation**: Although it is not explicitly stated, it seems that the entire utterance is passed through the vocoder or codec decoder to generate the waveform. If this is the case, the claim in Section 3.1.3 that \\\"the unmasked regions are expected to exhibit the same WER, and thus the WER differences among various editing models should be considerably more pronounced in the edited regions\\\" seems problematic. This is because the WER would be heavily influenced by the choice of vocoder or codec decoder. In Table 1, since different models use different vocoders and codec decoders, the observed WER gap may primarily reflect differences in the vocoders rather than the models themselves. 
As such, the current experiments do not adequately support this argument.\", \"**Reference missing**:\", \"Seed-TTS: A Family of High-Quality Versatile Speech Generation Models\", \"UniCATS: A Unified Context-Aware Text-to-Speech Framework with Contextual VQ-Diffusion and Vocoding\", \"E1 TTS: Simple and Fast Non-Autoregressive TTS\"], \"questions\": \"The questions are listed above in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Resonse to Reviewer P8Yw (part2)\", \"comment\": \"***5. The hallucination claim is completely based on WER which is not astounding to begin with (see point 4). Further, the WER difference between Voicecraft and proposed technique is minimal except for YouTube dataset.***\\n\\nWe indeed identified the hallucination problem in VoiceCraft through its higher WER (despite its better quality compared to Voicebox). After further investigation, we found that VoiceCraft **cannot generate speech accurately following the target transcription.** We demonstrated two examples of such hallucinations in Tables 4 and 5 and provided the corresponding audio examples on our demo page (the link is provided in the paper abstract). You can easily access these audio examples under the section \\\"2. Examples of attention errors (hallucinations) of VoiceCraft\\\" on the demo page.\\n\\nSome recent papers also found this issue in the LLM-based TTS models [1][2].\\n\\nCompare the WER of VoiceCraft(830M) with the proposed VoiceNoNG: (LibriTTS) 3.77 vs 2.82, (YouTube) 7.36 vs 5.84, and (Spotify) 5.43 vs 4.92. We believe the WER difference is obvious.\\n\\n[1] Neekhara, P., Hussain, S., Ghosh, S., Li, J., Valle, R., Badlani, R., & Ginsburg, B. (2024). \\u201cImproving robustness of llm-based speech synthesis by learning monotonic alignment.\\u201d arXiv preprint arXiv:2406.17957.\\n\\n[2] Battenberg, E., Skerry-Ryan, R. J., Stanton, D., Mariooryad, S., Shannon, M., Salazar, J., & Kao, D. (2024). \\u201cVery Attentive Tacotron: Robust and Unbounded Length Generalization in Autoregressive Transformer-Based Text-to-Speech.\\u201d arXiv preprint arXiv:2410.22179.\\n\\n***6. The SI-SDR difference are again very small to be meaningful across various models. This is probably not perceivable by human listeners.***\\n\\nAs stated in the paper, we emphasize the **LibriTTS** case because the SI-SDR scores for YouTube and Spotify, which contain background audio, provide less clear meaning. For LibriTTS, the SI-SDR of Voicebox is 19.49 dB, whereas our proposed VoiceNoNG achieves 23.15 dB. This **3.66 dB** improvement is easily perceivable by human listeners. We kindly ask the reviewer to **visit our demo page**, where the quality difference between Voicebox and VoiceNoNG is evident. This difference is also reflected in our subjective listening test results shown in Figure 2.\\n\\n\\n***7. MOS evaluation is unrealiable given there is only 3 rating per sample for a total of 10 samples per model (see ITU-T P.800 recommendations).***\\n\\nWe believe there is a **misunderstanding** regarding our experiment. We have 8x3 = 24 samples per model, and each audio sample is rated by 15 listeners. 
For a detailed description, please refer to section 3.1.5.\"}", "{\"title\": \"Resonse to Reviewer 5Wua (part2)\", \"comment\": \"***4.Previous Work on Codec Embeddings:***\\n\\n\\nYes, NaturalSpeech 2 has explored generating codec embeddings using diffusion models.\\n\\nHowever, as described in our section 3.1.3: \\u201cAs noted in Section 2, the VQ module offers **additional robustness against prediction errors made by our model**. In contrast, if the output is the post-quantization features (similar to NaturalSpeech 2), only the DAC decoder is required for waveform reconstruction.\\u201d\\n\\nWe aim to highlight to the community that **applying a VQ module can provide extra robustness benefits**, as verified in our section 3.1.6. Unlike NaturalSpeech 2, which directly models quantized vectors without applying a VQ during inference (see Figure 2 in their paper), we argue that VQ can convert the **regression** problem into a token **classification** (similar to NCLM). Hence, a small prediction error, as long as it is not larger than the token decision boundary, will still be mapped to the correct token after VQ.\\nWe believe these insights can help the community build more robust diffusion models.\\n\\n\\n***5.Concerns About WER Evaluation:***\\n\\nNo, we followed the Voicebox approach, where unmasked regions are directly copied from the original waveform. As mentioned in Section 3.2: 'Additionally, for the audio condition, besides the original VoiceNoNG setting where non-edited segments come from the original audio, we consider a more challenging setting where non-edited segments are also resynthesized from the codec. We refer to this condition as VoiceNoNG (resyn).' The scenario where the **entire utterance is passed through the vocoder or codec decoder to generate the waveform** is only considered in the study of detecting edited speech.\\n\\n\\n***6.Reference missing:***\\n\\n\\nThanks for sharing these papers. We will cite and discuss these papers in section 1 (Introduction).\"}", "{\"title\": \"Resonse to Reviewer xqLA\", \"comment\": \"***The paper presents a rather simplistic introduction to the proposed method, with much of the first two chapters focusing on popular science explanations.***\", \"the_goal_of_this_paper_is_to_address_the_problems_present_in_current_state_of_the_art_speech_editing_models\": \"VoiceCraft (struggles to generate speech **accurately following** the target transcription) and Voicebox (suffers from reduced speech quality when **background audio** is present). **Identifying these issues is also one of the contributions of this paper.**\\n\\nAlthough we considered a more sophisticated framework to tackle these issues (e.g., applying a speech enhancement model to disentangle background audio from speech and performing the infilling separately), we found that our current simple framework can **already effectively** address these challenges (please listen to the demos in our demo page).\\n\\nBecause our proposed solution is relatively elegant, we decided to focus more on the **motivation** and **experimental** parts to provide insights to the community.\\n\\nAlthough the model framework is simple, we believe this paper makes significant contributions to the related research community. \\n\\n- 1, We **identify the robustness problem of neural codec language models** (e.g., VoiceCraft) in speech editing. \\n\\n- 2, We conduct comprehensive experiments to highlight the **pros and cons** of VoiceCraft and Voicebox. 
\\n\\n- 3, Our ablation study and Figure 3 demonstrate the importance of flow-matching for **modeling pre-quantization features with the VQ module**, which can improve the robustness of models against small prediction error by implicitly converting the regression problem into a token classification task similar to NCLM. \\n\\n- 4, The proposed VoiceNoNG achieves **state-of-the-art performance** in both objective and subjective evaluations. \\n\\n- 5, Considering the potential for neural codecs to become a new audio format standard (such as mp3 format), the assumption that all codec-generated speech is fake may soon be unrealistic. Therefore, we propose a new and challenging **practical scenario for deepfake detection**, contributing to the relevant community.\\n\\nWe kindly ask the reviewer to **listen to the demos of our proposed method and compare them with other SOTAs** (https://anonymous.4open.science/w/NoNG-8004/), and please don\\u2019t overlook the contribution of this paper due to its simple framework.\\n\\n***In the abstract, the authors state that the poor performance of the NCLM-based model is due to attention errors (hallucination phenomena), but they provide only a few examples to support this claim, lacking more extensive experiments on attention mechanisms.*** \\n\\nPlease see our reply to Q1.\\n\\n***The paper mentions that the model combines the advantages of VoiceCraft and VoiceBox, but it seems to only merge the Codec with VoiceBox.*** \\n\\nWe believe the **discrete Codec token** is crucial not only for NCLM but also for the diffusion model, particularly the VQ module in the Codec. The VQ module can implicitly convert the **regression** problem into a token **classification** task, enhancing **robustness against prediction errors** made by the editing model, as shown in Figure 3.\\n\\n\\n***Additionally, the authors point out that the poor performance of VoiceBox is due to the HiFi-GAN being trained on clean speech; however, the experimental section lacks a comparison with a HiFi-GAN trained on noisy speech for VoiceBox.***\\n\\nPlease see our reply to Q4.\"}", "{\"title\": \"Resonse to Reviewer 5Wua\", \"comment\": \"***1. Unbalanced Structure:***\\n\\n\\nThank you for pointing this out! The goal of this paper is to address the problems present in current state-of-the-art speech editing models: VoiceCraft (struggles to generate speech **accurately following** the target transcription) and Voicebox (suffers from reduced speech quality when **background audio** is present). **Identifying these issues is also one of the contributions of this paper.**\\n\\nAlthough we considered a more sophisticated framework to tackle these issues (e.g., applying a speech enhancement model to disentangle background audio from speech and performing the infilling separately), we found that our current simple framework can **already effectively** address these challenges (please listen to the demos in our demo page).\\n\\nBecause our proposed solution is relatively elegant, we decided to focus more on the **motivation** and **experimental** parts to provide insights to the community.\\n\\nAlthough the model framework is simple, we believe this paper makes significant contributions to the related research community. \\n\\n- 1, We **identify the robustness problem of neural codec language models** (e.g., VoiceCraft) in speech editing. \\n\\n- 2, We conduct comprehensive experiments to highlight the **pros and cons** of VoiceCraft and Voicebox. 
\\n\\n- 3, Our ablation study and Figure 3 demonstrate the importance of flow-matching for **modeling pre-quantization features with the VQ module**, which can improve the robustness of models against small prediction error by implicitly converting the regression problem into a token classification task similar to NCLM. \\n\\n- 4, The proposed VoiceNoNG achieves **state-of-the-art performance** in both objective and subjective evaluations. \\n\\n- 5, Considering the potential for neural codecs to become a new audio format standard (such as mp3 format), the assumption that all codec-generated speech is fake may soon be unrealistic. Therefore, we propose a new and challenging **practical scenario for deepfake detection**, contributing to the relevant community.\\n\\nWe kindly ask the reviewer to **listen to the demos of our proposed method and compare them with other SOTAs** (https://anonymous.4open.science/w/NoNG-8004/), and please don\\u2019t overlook the contribution of this paper due to its simple framework.\\n\\n\\n\\n***2. Questionable Claim About Voicebox Performance:***\\n\\nSeveral research papers [1-2] have indicated that **\\u201cHiFi-GAN does not generalize well to non-speech audio such as sound or music\\u201d** [1]. Reviewers can listen to some distorted examples of HiFi-GAN generated music at this link: https://bigvgan-demo.github.io/. In fact, our subjective listening test (Figure 6) shows that Voicebox has comparable quality to VoiceCraft and the proposed VoiceNoNG under **clean** conditions (LibriTTS). However, as shown in Figures 7 and 8, when there is **background audio** (YouTube and Spotify), the speech quality from Voicebox is worse than that of VoiceCraft and the proposed VoiceNoNG .\\n\\n[1] Vyas, A., Shi, B., Le, M., Tjandra, A., Wu, Y. C., Guo, B., ... & Hsu, W. N. (2023). \\u201cAudiobox: Unified audio generation with natural language prompts.\\u201d arXiv preprint arXiv:2312.15821.\\n\\n[2] S.-g. Lee, W. Ping, B. Ginsburg, B. Catanzaro, and S. Yoon. \\u201cBigvgan: A universal neural vocoder with large-scale training.\\u201d arXiv preprint arXiv:2206.04658, 2022.\\n\\n\\n\\n***3. Prior Work Overlooked:***\\n\\nThank you for pointing this out, we will cite and discuss NaturalSpeech 2\\u2019s method when introducing equation (1).\"}", "{\"title\": \"Waiting for the reply\", \"comment\": \"Thank you for taking the time to review our paper. We have addressed your concerns in our submitted response. As the rebuttal period is nearing its conclusion, we kindly request you to review our rebuttal and share any additional comments or concerns you may have. Thank you once again for your valuable feedback, and we are happy to answer any further questions!\"}" ] }
BVCGTsgpOS
FactTest: Factuality Testing in Large Language Models with Statistical Guarantees
[ "Fan Nie", "Xiaotian Hou", "Shuhang Lin", "James Zou", "Huaxiu Yao", "Linjun Zhang" ]
The propensity of Large Language Models (LLMs) to generate hallucinations and non-factual content undermines their reliability in high-stakes domains, where rigorous control over Type I errors (the conditional probability of incorrectly classifying hallucinations as truthful content) is essential. Despite its importance, formal verification of LLM factuality with such guarantees remains largely unexplored. In this paper, we introduce FactTest, a novel framework that statistically assesses whether an LLM can confidently provide correct answers to given questions with high-probability correctness guarantees. We formulate factuality testing as a hypothesis testing problem to enforce an upper bound on Type I errors at user-specified significance levels. Notably, we prove that our framework also ensures strong Type II error control under mild conditions and can be extended to maintain its effectiveness when covariate shifts exist. Our approach is distribution-free and works for any number of human-annotated samples. It is model-agnostic and applies to any black-box or white-box LM. Extensive experiments on question-answering (QA) and multiple-choice benchmarks demonstrate that FactTest effectively detects hallucinations and improves the model's ability to abstain from answering unknown questions, leading to an over 40% accuracy improvement.
[ "Large Language Models", "Factuality", "Uncertainty Quantification", "Hallucination Detection" ]
Reject
https://openreview.net/pdf?id=BVCGTsgpOS
https://openreview.net/forum?id=BVCGTsgpOS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxKUtgbL8t", "vCGzVJS8ky", "sTAAwKCpsj", "qsWozW16RH", "pzdgx1x6bK", "nxQJLIaQUG", "mlTJB6scVD", "mdVYHqaGUJ", "mDAdE1MTAY", "jSzSXpM1yS", "ggiWZlHJYj", "gZwvrboSKu", "fDj2BEl9oh", "ehfquQq4vE", "d6txExlOMY", "YtbVSunfkR", "YR0TnUxLUt", "XbgLsuR55h", "WAo3qaBMgp", "SuHDGnpodq", "OSD6b7KUjO", "MwJuTCgew8", "LOiLK49B20", "GZMTqKoMxU", "AhqDL2KCub", "9QOSsSF0sL", "9JEa4UUAma", "8cfuZas52O", "7SzFeF40Su", "5d9483oAUh", "5cxjoYuF5k", "5UpUMyVWxU", "5S2dRr3aEt", "5OEPGEFy6X", "3NKFNZzme9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment" ], "note_created": [ 1732684942856, 1732186395903, 1732186995801, 1732561226648, 1732186220300, 1732512792960, 1733199850009, 1732185089771, 1732186239892, 1732187160671, 1732415313531, 1732187290219, 1732187043844, 1737524116835, 1732186699801, 1732186659922, 1730604975473, 1732684851428, 1733199689780, 1732184897324, 1732187239776, 1732186936637, 1732628458954, 1732684894997, 1732186866255, 1732415474011, 1733177599081, 1730675976196, 1732187131818, 1732186623864, 1732184129841, 1733025916576, 1734649800755, 1729353127559, 1732186538654 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Reviewer_Fj97" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Reviewer_roCe" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Reviewer_roCe" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Reviewer_sEhD" ], [ "ICLR.cc/2025/Conference/Submission11315/Reviewer_Fj97" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11315/Reviewer_Fj97" ], [ "ICLR.cc/2025/Conference/Submission11315/Area_Chair_RAU6" ], [ "ICLR.cc/2025/Conference/Submission11315/Reviewer_sEhD" ], [ "ICLR.cc/2025/Conference/Submission11315/Authors" ] ], "structured_content_str": [ "{\"title\": \"We would like to hear back from reviewer Fj97\", \"comment\": \"Dear reviewer Fj97,\\n\\nGiven the approaching revision deadline, we would welcome your assessment of whether our response adequately addresses your concerns. Please let us know if you need any clarification or have additional questions. Thank you again!\"}", "{\"title\": \"Response to Reviewer Fj97 (4)\", \"comment\": \"> ***W6: Definition of M(q)***\\n\\n**R6:** Thank you for your question. Given any question $q$, the answer $M(q)$ generated by language model $M$ is a random answer following distribution $P_{M(q)|q}$. The distribution $P_{M(q)|q}$ is fully determined by $q$ and $M$, but the random draw $M(q)|q$ involves the sampling randomness independent of $q$ and $M$. In the dataset $\\\\mathcal{D}=\\\\lbrace (q_i,M(q_i),a_i):i\\\\in[n]\\\\rbrace$, the observed $M(q_i)$ is a realization from $P_{M(q)|q=q_i}$. Then the logic behind our correctness predictor $\\\\hat f_\\\\alpha(q,M(q))=\\\\mathbb{I}(\\\\hat\\\\eta(q,M(q))>\\\\hat\\\\tau_\\\\alpha)$ is as follows. \\n\\nFor any new question $\\\\tilde q$, we ask $M$ to generate an output $M(\\\\tilde q)$, then we aim to judge whether the current realization $M(\\\\tilde q)$ is correct or not, based on the question $\\\\tilde q$, the distribution $P_{M(q)|q=\\\\tilde q}$, and the realization $M(\\\\tilde q)$. If we think $M(\\\\tilde q)$ is incorrect, we refuse question $\\\\tilde q$.\\n\\nNote a special case of $\\\\hat f_\\\\alpha$ is that it ignores the realization $M(\\\\tilde q)$ and make decision based on solely $\\\\tilde q$ and $P_{M(q)|q=\\\\tilde q}$. In other words, we judge whether the question $\\\\tilde q_i$ is difficult for $M$. If we think $\\\\tilde q$ is hard and $M$ is likely to generate incorrect answers, we refuse question $\\\\tilde q$ regardless of the realization of the produced answer $M(q)$, although $M$ may still have some probability to produce correct answer.\\n\\nIn Section 2 and 3, we use the more general form of $\\\\hat f_\\\\alpha$ to include $M(\\\\tilde q)$ as an argument. But in the experiment, we consider the special case where $\\\\hat f_\\\\alpha$ make decision based on $\\\\tilde q,P_{M(q)|q=\\\\tilde q}$ and doesn't rely on the specific realization $M(\\\\tilde q)$. The way we utilize $P_{M(q)|q=\\\\tilde q}$ is through Monte-Carlo approximation by generating $k$ answers $\\\\lbrace M(\\\\tilde q)_j:j\\\\in k\\\\rbrace$ from \\n\\n$P_{M(q)|q=\\\\tilde q}$.\\n\\n> ***W7: typos:***\\n\\n**R7:** Thank you for pointing out the potential typos:\\n1) The statement you modified is correct, but our statement is also correct. For generic random element $(q,M(q),a,y)\\\\sim P_{q,M(q),a,y}$, we denote the $\\\\eta(q,M(q))=\\u2119(y=1|q,M(q))$, then for any fixed pair $(q',M(q'))$, $\\\\eta(q',M(q'))=\\u2119(y=1|q=q',M(q)=M(q'))$ can be interpreted as the conditional probability for $M(q)$ to align with $a$ given $(q,M(q))=(q',M(q'))$, which can of course be equivalently stated as the conditional probability for $M(q')$ to align with $a$ given $(q,M(q))=(q',M(q'))$. 
To avoid confusion, we change the statement in our paper and simply use $\\\\eta(q,M(q))=\\u2119_{y\\\\sim P_{y|q,M(q)}}(y=1|q,M(q))$ instead of $\\\\eta(q',M(q'))=\\u2119_{(q,M(q),y)\\\\sim P_{q,M(q),y}}(y=1|q=q',M(q)=M(q'))$.\\n2) Yes, thank you for pointing out. We have corrected it.\\n3) Yes, thank you for pointing out. We have corrected it.\"}", "{\"title\": \"Response to Reviewer Fj97 (11)\", \"comment\": \"> ***Q7: Line 256: Why is the expected value of $\\\\tilde{v}$ taken?***\\n\\n**R7:** Recall that $\\\\tilde v(k)=\\\\sum_{j=k}^{\\\\tilde n_0}{\\\\tilde n_0\\\\choose j}(1-\\\\alpha)^j\\\\alpha^{\\\\tilde n_0-j}$, where $\\\\tilde n_0$ is the size of incorrect samples selected by rejection sampling. As we have explained in the response to Question 5, the theoretical guarantee under covariate shift consists of two steps. Firstly, we apply rejection sampling to transform the calibration data into samples $\\\\tilde{\\\\mathcal{D}}\\\\_0$ from the target distribution. Then, we reuse the previous theory to $\\\\tilde{\\\\mathcal{D}}\\\\_0$ to conclude the result. Following this line, the equation you referred to can be interpreted as follows. Recall $\\\\mathcal{I}$ is the index set selected using rejection sampling, as we have explained in the response to Weakness 8, $\\\\tilde{\\\\mathcal{D}}\\\\_0|\\\\mathcal{I}\\\\overset{i.i.d.}{\\\\sim}P_0$, then\\n\\\\begin{align}\\n&\\\\mathbb{P}\\\\_{\\\\mathcal{D}}\\\\big(\\\\mathbb{P}\\\\_{(q,M(q))\\\\sim P_0}(\\\\hat\\\\eta(q,M(q))>\\\\tilde T_{(\\\\hat k)})>\\\\alpha\\\\big)\\\\\\\\\\\\\\\\\\n=&\\\\mathbb{E}\\\\_{\\\\mathcal{I}}\\\\mathbb{P}\\\\_{\\\\tilde{\\\\mathcal{D}}\\\\_0}\\\\big(\\\\mathbb{P}\\\\_{(q,M(q))\\\\sim P_0}(\\\\hat\\\\eta(q,M(q))>\\\\tilde T\\\\_{(\\\\hat k)})>\\\\alpha|\\\\mathcal{I}\\\\big)\\\\\\\\\\\\\\\\\\n\\\\le&\\\\mathbb{E}\\\\_{\\\\mathcal{I}}\\\\tilde v(\\\\hat k)\\\\\\\\\\\\\\\\\\n\\\\le&\\\\delta,\\n\\\\end{align}\\nwhere on the right-hand side of the first equation, the inner probability $\\\\mathbb{P}\\\\_{\\\\tilde{\\\\mathcal{D}}_0}(\\\\cdot|\\\\mathcal{I})$ treats the samples $\\\\tilde{\\\\mathcal{D}}_0|\\\\mathcal{I}$ selected by rejection sampling as samples from $P_0$, and reuse the previous type I error result in following inequalities, then, the outer expectation $\\\\mathbb{E}\\\\_{\\\\mathcal{I}}$ counts the randomness due to rejection sampling.\\n\\n> ***Q8: How is the frequency/probability term in Equation 6 calculated/estimated?***\\n\\n**R8:** The frequency of a predicted answer $M(q)_j$ in Equation 6 is calculated by $\\\\frac{m}{k}$, where $m$ is the number of times $M(q)_j$ exists in $k$ generations.\\n\\nWe have added details about the calculation of frequency in Section B.2 updated PDF.\\n\\n> ***Q9: Lines 289-290: how will the distribution-free setting be violated for models requiring fine-tuning to answer factual question? I think most LLMs can do factual QA (perhaps not optimally) without finetuning. So what is the point of mentioning this?***\\n\\n**R9:** 'Distribution-free' refers to models or methods that do not make specific assumptions about the underlying probability distribution. In our main experiments, we utilize calibration dataset to provide distribution-free guarantees. Models trained on this dataset will be adjusted based on the data, and thus is not distribution-free. 
Therefore, it would be unfair in the main experiments to compare our method with finetuning-based methods.\\n\\n> ***Q10: Why does KLE have only a 15 generation variant in Table 1?***\\n\\n**R10:** Due to time and space limits, we only include a 15-generation variant in our main table. Theoretically, it should perform better than the 10-generation and 5-generation variants because the uncertainty estimation should be more accurate with more generations. We provide experimental results for the 5-generation and 10-generation variants as follows, which have been added to Sec D.5 in the updated PDF:\", \"table\": \"The Type I Error of FactTest with a significance level $\\\\alpha=0.1$\\n| Dataset | Model | FactTest-kle5 | FactTest-kle10 |\\n| ------- | ------------ | -------- | -------- |\\n| ParaRel | OpenLLaMA-3B | 0.0783 | 0.0778 |\\n| | OpenLLaMA-7B | 0.0880 | 0.0787 |\\n| Hotpot | OpenLLaMA-3B | 0.0656 | 0.0643 |\\n| | OpenLLaMA-7B | 0.0643 | 0.0654 |\\n\\n> ***Q11: the experiment corresponding to Figure 3***\\n\\n**R11:** Sorry for the confusion. This experiment shows how the accuracy varies with the threshold. With a user-specified $\\\\alpha$, our framework can control the Type I error, which changes monotonically. However, the accuracy does not follow this monotonic trend, and the maximum accuracy depends on the model and score functions. The results provide insights for users to choose the significance level $\\\\alpha$ as well as the performance of different score functions.\"}", "{\"title\": \"Response to Reviewer Fj97\", \"comment\": \"Thank you for your follow-up comment. We appreciate the opportunity to clarify our methodology and address your concerns in greater detail:\\n\\n> The entire paper relies on this premise and makes this assumption, including in the theory and experiments. \\n\\nWe would like to clarify that we said \\\"our work does not rely on the premise that if the model is certain then it is going to be correct\\\" in order to highlight that our theoretical framework ensures Type I error control regardless of the choice of score function. The score function could represent uncertainty, correctness, or even remain constant across all inputs. For instance, in the extreme case where the score function is a constant, Type I error can still be controlled by rejecting all questions, thereby maintaining valid statistical control of Type I error.\\n\\nHowever, controlling the Type II error does depend on the score function's ability to effectively quantify correctness. In our experiments, we primarily employed uncertainty-based measures as score functions because directly assessing correctness is inherently challenging. Nevertheless, our framework is not limited to uncertainty-based approaches. To illustrate this flexibility, we trained a random forest classifier to predict the correctness of question-answer pairs, using the predicted probability of the \\\"correct\\\" class as the score function. We refer to this approach as FactTest-cls and have included it in our updated manuscript. 
We compare FactTest-cls with two uncertainty-based variants, FactTest-ve15 and FactTest-se15, which employ entropy across generated answers and entropy incorporating linguistic invariances, respectively, to quantify uncertainty.\\n\\nThe results are shown as follows (also see Table 13 in Sec D.5 for further details), which indicate that FactTest-cls achieves competitive accuracy and maintains Type I error below the specified threshold, while also demonstrating improved Type II error rates compared to uncertainty-based score functions.\", \"table\": \"The Accuracy, Type I error and Type II error performance of FactTest-cls compared with uncertainty-based score functions on ParaRel with $\\\\alpha=0.05$.\\n\\n| Base Model | Metric | FactTest-ve15 | FactTest-se15 | FactTest-cls |\\n| ------------ | ------------- | ------------- | --- | ------------- |\\n| OpenLlama-3B | Accuracy(\\\\%) | 67.28 | 67.26 | **85.13** |\\n| | Type I error | 0.05 | 0.05 | 0.04 |\\n| | Type II error | 0.86 | 0.85 | **0.35** |\\n| OpenLlama-7B | Accuracy(\\\\%) | 80.29 | 65.23 | **89.50** |\\n| | Type I error | 0.01 | 0.04 | 0.03 |\\n| | Type II error | 0.92 | 0.87 | **0.44** |\\n| OpenLlama-13B | Accuracy(\\\\%) | 79.41 | 73.09 | **88.37** |\\n| | Type I error | 0.03 | 0.03 | 0.04 |\\n| | Type II error | 0.91 | 0.87 | **0.42** |\\n\\n\\n> i.i.d. assumptions need to properly substantiated.\\n\\nThank you for your comment. i.i.d. assumptions are commonly assumed in machine learning theory literature, ranging from generalization error bounds to conformal prediction (where they assume a related but slightly weaker assumption, exchangeablity) [2,3,4]. \\n\\nIn our experiments, we adhere to this assumption as follows:\\n\\n- 1). For ParaRel, we follow the setup in [1], dividing the dataset into two subsets: an in-domain subset, consisting of samples from the first 15 domains, and an out-of-domain (OOD) subset, comprising samples from the remaining 16 domains. The in-domain subset is then randomly split into training and testing sets, which ensures the data are i.i.d. The OOD subset is referred to as ParaRel-OOD and is utilized for evaluation with covariate shifts, which do not need to be i.i.d.\\n\\n- 2). For other datasets, we utilize standard training and testing splits, which are explicitly designed to follow the same underlying distribution. This setup adheres to the i.i.d. assumption required for our theoretical guarantees.\\n\\nWe acknowledge that the i.i.d. assumption may not hold in certain cases. To address this, we have included a section in the paper extending our theoretical framework to the covariate shift setting in Sec.3, where the assumption is relaxed. We also plan to explore extensions to other types of distribution shifts in future work.\", \"references\": \"[1] R-Tuning: Instructing Large Language Models to Say \\u2018I Don\\u2019t Know\\u2019, NAACL 2024.\\n\\n[2] Foundations of Machine Learning, The MIT Press 2018.\\n\\n[3] Conformal Prediction: A Gentle Introduction, Foundations and Trends in Machine Learning 2023.\\n\\n[4] Conformal Language Modeling, ICLR 2024.\"}", "{\"title\": \"Response to Reviewer Fj97 (3)\", \"comment\": \"> ***W4: a proof sketch for Theorem 1***\\n\\n**R4:** Thank you for the question. **The proof sketch is already provided in Appendix** due to the space limit. Here we provide more clarification to address your concern:\\n\\nThe type II error control in Tong (2013)[3] is for a very different method. Their method takes the empirical type I error as a constraint. 
In order to control the population type I error at level $\\\\alpha$, they constrain the empirical type I error at level $\\\\alpha-c\\\\sqrt{\\\\frac{\\\\log 1/\\\\delta}{n_0}}$. This constaint also restricts the sample size $n_0$ to be large enough such that $\\\\alpha\\\\gtrsim\\\\sqrt{\\\\frac{\\\\log 1/\\\\delta}{n_0}}$, while **our type I error control works for any sample size $n$ and our type II error control only requires $\\\\alpha\\\\gtrsim\\\\frac{\\\\log (1/\\\\delta)}{n_0}$**.\\n\\nEven **for the proof of type II error control, our method is also different** from that in Tong (2013). The excess type II error can be decomposed into two terms: the first term quantifies how conservative $\\\\hat f_\\\\alpha$ is in the type I error control, i.e., $\\\\alpha-\\\\mathcal{R_0}(\\\\hat f_\\\\alpha)$, and the second term corresponds to the estimation error of $\\\\hat f_\\\\alpha$ for the Bayes optimal classifier $f^*_\\\\alpha$. \\n\\nFor the second term, our analysis is not restricted to Holder class and only requires $\\\\eta$ can be estimated up to increasing transformations, i.e., $\\\\Vert H\\\\circ\\\\hat\\\\eta-\\\\eta\\\\Vert_\\\\infty$ (we will explain this useful condition further in the response to Question 4), while Tong (2013) assume $\\\\eta$ is Holder smooth and can be estimated directly. \\n\\nOur analysis of the first term is also unique. The conservativeness in type I error control in Tong (2013) is due to the deviation between the empirical type I error and population type I error, which is straightforward to analysis. For our method, recall that in our construction of $\\\\hat f_\\\\alpha=\\\\mathbb{I}(\\\\hat\\\\eta>\\\\hat\\\\tau_\\\\alpha)$, the threshold $\\\\hat\\\\tau_\\\\alpha$ is chosen from $n_0$ certainty scores $T_i=\\\\hat\\\\eta(q_i^{(0)},M(q_i^{(0)}))$, then the conservativeness of our method is due to the finite choices of thresholds. Our analysis for the first term relies on a detailed understanding of the behaviour of these thresholds.\\n\\n> ***W5: using terms before definition***\\n\\n**R5:** Thank you for pointing our the problem. We will clarify them in the main text.\\n\\n1. Type I error, Type II error: The definition of Type I error **has already been defined in Line 013 and Line 045**. Since Type II error is opposite to Type I error, we omitted it in the original version, which has now been added to Sec 2.3. In our setting, Type I error is the probability of misclassifying incorrect $(q,M(q))$ from $P_0$ as correct. Type II error is the probability of misclassifying correct $(q,M(q))$ from $P_1$ as incorrect. \\n2. human-annotated samples: Human-annotated samples represent data samples with human-annotated labels, which in our setting is $\\\\{(q_i,a_i):i\\\\in[n]\\\\}$ containing the set of $n$ questions $q_i$'s and corresponding correct answers $a_i$'s typically provided by humans.\\n3. $\\\\epsilon_{\\\\eta}$: $\\\\epsilon_\\\\eta$ is also defined (in line 170 in the original version), where we assume there exists an increasing function $H$ and $\\\\epsilon_\\\\eta\\\\ge0$ such that $\\\\Vert H\\\\circ\\\\hat\\\\eta-\\\\eta\\\\Vert_\\\\infty\\\\le\\\\epsilon_\\\\eta$.\\n4. *mild conditions* under which the Type II error is controlled: We have modified the expression in line 55 in the revised PDF to make it clearer. 
Specifically, we make the following three assumptions for type II error control: 1) $\\\\alpha\\\\gtrsim\\\\frac{\\\\log 1/\\\\delta}{n_0}$ instead of $\\\\sqrt{\\\\frac{\\\\log 1/\\\\delta}{n_0}}$ required by Tong (2013)[3], 2) $\\\\hat\\\\eta(q,M(q))$ is a continuous random variable with $(q,M(q))\\\\sim P_0$, 3) $\\\\tau_\\\\alpha+\\\\epsilon_\\\\tau+\\\\epsilon_\\\\eta<1$, where $\\\\tau_\\\\alpha$ is the threshold of the Bayes optimal classifer with type I error constrained below $\\\\alpha$, and $\\\\epsilon_\\\\tau$ defined in line 184 is expected to be of small order.\\n5. aligns with the correct answer: Thank you for pointing out the issue. We have added details in Sec 4.1. Specifically, evaluating whether $M(q)$ aligns with the answer $a$ depends on the datasets. For question-answering datasets, we verify whether the first few output tokens contain $a$. For multiple-choice datasets, we check whether $M(q)$ exactly matches $a$.\\n7. $\\\\mathcal{Q}$ and $\\\\mathcal{A}$: $\\\\mathcal{Q}$ is the set of all possible questions and $\\\\mathcal{A}$ is the set of all possible answers. We have added it to the updated PDF.\"}", "{\"comment\": \"Thanks to the authors for their rebuttal and additional experiments which clarify several of my previously mentioned concerns. I have, however, some concerns which are not convincingly addressed, mentioned below.\\n1. I am confused by this statement \\\"our work does not rely on the premise that if the model is certain then it is going to be correct\\\". The entire paper relies on this premise and makes this assumption, including in the theory and experiments. I would recommend the authors scope their work to be about better uncertainty quantification than relate their uncertainty measure further to factuality. \\n2. While the method assumes iid samples from some input distributions, experiments assume prior datasets to be iid and directly use them for analysis. It would have been ok for another paper that wouldn't claim guarantees. But for a work providing guarantees, such assumptions need to properly substantiated.\"}", "{\"comment\": \"Thank you for your response and your suggestions to make our work better.\"}", "{\"title\": \"Response to Reviewer Fj97 (2)\", \"comment\": \"> ***W3: theoretical generalizability of the uncertainty predictor: useful as a general uncertainty calibrator\\uff1f***\\n\\n**R3:** Thank you for the question. However, there seems to be a misunderstanding in this comment. To clarify, let us first explain the distribution we consider in this work. As we defined on Page 2, we assume there is a distribution $P_{q,a}$ over all the possible question-answer pairs $(q,a)$. The marginal distribution $P_q$ of $q$ is over all the possible questions, and the conditional distribution $P_{a|q}$ of $a$ given $q$ is supported on the set of (correct) answers to $q$. Therefore, under $P_{q,a}$, $a$ given $q$ can be viewed a random answer among all the correct answers of $q$. Recall that given any question $q$, the distribution $P_{M(q)|q}$ of the answer $M(q)$ generated by $q$ is fully determined by the language model $M$ and does not rely on the correct answer $a$ given $q$, which means $M(q)\\\\perp a|q$. Following this, we defined a distribution $P_{q,M(q),a}=P_{q}P_{a|q}P_{M(q)|q}$ over all the possible combinations $(q,M(q),a)$. Then we introduce another binary random variable $y=\\\\mathbb{I}(M(q)\\\\text{ aligns with }a)$ indicating whether the generated answer $M(q)$ aligns with the correct answer $a$. 
Therefore, $y$ is deterministic given $q,M(q),a$. This construction results in a well defined distribution $P_{q,M(q),a,y}$. Finally as we defined on Page 3, we abbreviate the distribution $P_{q,M(q)|y=0}$ as $P_0$ and $P_{q,M(q)|y=1}$ as $P_1$.\\n\\nEquipped with the definition of $P_{q,M(q),a}$, we can view the dataset $\\\\mathcal{D}=\\\\lbrace(q_i,M(q_i),a_i):i\\\\in[n]\\\\rbrace$ as $n$ i.i.d. samples from $P_{q,M(q),a}$. After defining $y_i=\\\\mathbb{I}(M(q_i)\\\\text{ aligns with }a_i)$, the set $\\\\mathcal{D_0}=\\\\lbrace(q_i,M(q_i)):y_i=0,i\\\\in[n]\\\\rbrace$ given $\\\\lbrace y_i:i\\\\in[n]\\\\rbrace$ can be viewed as $n_0=\\\\sum_{i\\\\in[n]}\\\\mathbb{I}(y_i=0)$ i.i.d. samples from $P_0$, and similar for $\\\\mathcal{D_1}$.\\n\\nUsing the datasets introduced above, the type I and II error controls can be summarized as follows: \\n1) With probability at least $1-\\\\delta$ over the randomness of the dataset $\\\\mathcal{D}$, the probability that $\\\\hat f_\\\\alpha$ misclassifies any independent incorrect test sample $(q,M(q))$ from $P_0$ as correct is below $\\\\alpha$.\\n2) With probability at least $1-2\\\\delta$ over the randomness of the dataset $\\\\mathcal{D}$, the probability that $\\\\hat f_\\\\alpha$ misclassifies any independent correct test sample $(q,M(q))$ from $P_1$ as incorrect is not too large, compared to that of the optimal classifier with controlled type I error.\\n\\nSince our goal is to detect incorrect answers for any future question-answering scenarios, not restricted to questions in the calibration data, the distribution we consider covers all possible question-answer pairs, including elements outside $\\\\mathcal{D}$. Moreover, our method has error control over any independent question-answer pairs, therefore it can be used as a general calibrator for detecting incorrectness.\"}", "{\"title\": \"Response to Reviewer Fj97 (3)\", \"comment\": \"7. probability distribution over distinct meanings: This is proposed by Semantic Entropy, and the details and equations can be seen in appendix. Specifically, $$\\n SE(q,M(q)) = - \\\\sum_{c} p(c|q) \\\\log p(c|q) \\n = -\\\\sum_c \\\\bigg(\\\\Big(\\\\sum_{\\\\mathbf{a} \\\\in c} p(\\\\mathbf{a} \\\\mid q)\\\\Big) \\\\log \\\\Big[ \\\\sum_{\\\\mathbf{a} \\\\in c} p(\\\\mathbf{a} \\\\mid q) \\\\Big]\\\\bigg)\\n$$\\n where $c$ represents possible meaning-class and $p(\\\\mathbf{a}|q)$ is the probability of the entire answer sequence, that is, the product of the conditional probabilities of new tokens given past tokens.\\n8. FactTest-t: The definition of FactTest-t **is already defined in Section 4.1**. To facilitate comparison with training-based methods, we randomly split our training dataset, allocating half for instruction-tuning and the remaining half to construct the calibration dataset. We use 15-generation SE as the score function, referring to this variant as FactTest-t\"}", "{\"title\": \"Response to Reviewer roCe (2)\", \"comment\": \"> ***Q3: (4) is barely an application of a binomial tail bound under the iid assumption, but as written in the paper it is not iid, i.e., the outer probability is taken over D but the inner probability is taken over a filtered distribution P_0. Can you justify the correctness of (4) if I miss something?***\\n\\n**R3:** Thank you for your question. Equation (4) in our paper is correct. 
Since our algorithm only utilize the incorrect samples, although we count the randomness of all samples in the outer probability, the event in the probability only depends on incorrect samples, therefore the i.i.d. condition is still valid.\\n\\n> ***Q4: If this paper\\u2019s method is equivalent to PAC conformal prediction, extension to covariate shift via rejection sampling is not novel.***\\n\\n**R4:** As stated in Questions 1 and 2, both our target and method are not equivalent to PAC conformal prediction. Indeed, rejection sampling is a standard method for distribution shift. We are actually adopting this standard technique to the new problem of hallucination detection and adapt our framework to out-of-distribution settings.\\n\\n> ***Q5: Can you compare your method and PAC conformal prediction in experiments for both no shift and covariate shift cases?***\\n\\n**R5:** Thank you for your suggestion. However, we have to stated that the goal of PAC conformal prediction is different from ours. Besides, traditional conformal prediction, including PAC-style ones, is not suitable for the generation of LLMs since the output space is infinite and it's infeasible to explore all possible predictions. \\n\\nAs you mentioned before, Conformal Language Modeling (CLM) extends traditional conformal prediction to language generation by calibrating a stopping rule and employing a rejection rule, which is also PAC-style. \\n\\nThough we have stated above that checking whether an output is from CLM prediction set isn't a suitable method for hallucination detection, we still compare FactTest with Conformal Language Modeling (CLM) on ParaRel and HotpotQA to address your concern.\", \"table\": \"The accuracy of FactTest-kle15 and CLM.\\n\\n| Dataset | Model | Pretrained | CLM | FactTest-kle15 |\\n| -------- | ------------ | ---------- | ----- | -------------- |\\n| ParaRel | Openllama-3B | 36.66 | 39.86 | 78.45 |\\n| | Openllama-7B | 40.38 | 42.58 | 76.83 |\\n| HotpotQA | Openllama-3B | 25.72 | 26.40 | 55.35 |\\n| | Openllama-7B | 28.63 | 30.82 | 60.66 |\\n\\n\\n\\n**References**\\n\\n[1] Conditional validity of inductive conformal predictors, PMLR 2012.\\n\\n[2] Conformal Language Modeling, ICLR 2024.\\n\\n[3] R-Tuning: Instructing Large Language Models to Say 'I Don't Know', NAACL 2024.\"}", "{\"title\": \"We would like to hear back from reviewer Fj97\", \"comment\": \"Dear reviewer Fj97,\\n\\nWe would like to follow up to see if the response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!\"}", "{\"title\": \"Response to Reviewer sEhD (2)\", \"comment\": \"> ***Q3: The threshold selection approach used in the paper essentially corresponds to a conformalized quantile regression. This helps achieving a marginal (on average over all the data) guarantee on the Type I error. However, as the error may not be homogeneous over different questions, one may wonder if ensuring a marginal guarantee is, in fact, sufficient. I wonder if the authors have experienced different error magnitudes across different segments of the datasets.***\\n\\n**R3:** Thank you for your insightful question. \\n\\nAlthough the error may not be homogeneous across different answers and domains, ensuring a marginal guarantee is suficient for Type I error control over the entire test distribution. 
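For readers who want to reproduce the per-domain breakdown reported below, a small sketch of the tally is given here; the record fields (`domain`, `y`, `answered`) are hypothetical names for the quantities discussed in this thread, not the paper's data format.

```python
from collections import defaultdict

def per_domain_type1(records):
    """records: dicts with 'domain', 'y' (1 = correct answer), 'answered' (kept by the test).

    Per-domain type I error = fraction of incorrect pairs (y = 0) that were not rejected.
    """
    kept, total = defaultdict(int), defaultdict(int)
    for r in records:
        if r["y"] == 0:
            total[r["domain"]] += 1
            kept[r["domain"]] += int(r["answered"])
    return {d: kept[d] / total[d] for d in total if total[d] > 0}

example = [
    {"domain": "capital of", "y": 0, "answered": False},
    {"domain": "capital of", "y": 0, "answered": True},
    {"domain": "occupation", "y": 0, "answered": False},
]
print(per_domain_type1(example))  # {'capital of': 0.5, 'occupation': 0.0}
```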
However, we acknowledge that this marginal guarantee on the Type I error may not fully capture variations in error rates across different segments of the data, especially if the distributions over different domains are quite different. We will leave this conditional Type I error control as an important future work, by leveraging the tools developed in recent literature on the conditional coverage of conformal prediction.\\n\\nIn the following, we provide experiment results of FactTest-kle15 on ParaRel using Llama 13B across different segments of the dataset. Since the testing dataset contains 15 different subsets according to the domains of the questions, we report the corresponding accuracy and Type I error of each domain. The significance level is set to 0.05. 'All' includes all the subsets.\\n\\n| metric\\\\subset | All | field of work | occupation | employer | genre | native language | capital of | named after | religion | headquarters location | manufacturer | developer | place of birth | twinned administrative body | place of death | record label |\\n| ------------- | --- | ------------- | ---------- | -------- | ----- | --------------- | ---------- | ----------- | -------- | --------------------- | ------------ | --------- | -------------- | --------------------------- | -------------- | ------------ |\\n| Accuracy | 78.45 | 5.88 | 44.44 | 86.11 | 15.62 | 97.81 | 59.25 | 80.00 | 54.54 | 94.18 | 98.99 | 95.83 | 85.71 | 0 | 0 | 0 |\\n| Type I error | 0.03 | 0.05 | 0.02 | 0.02 | 0.06 | 0.10 | 0.04 | 0.02 | 0.03 | 0.03 | 0.02 | 0.01 | 0.01 | 0.01 | 0.00 | 0.03 |\\n\\nAs shown in the table, while the overall Type I error across the entire dataset is controlled at the specified significance level (0.05), there is variability in the Type I error across different domains.\"}", "{\"title\": \"Response to Reviewer Fj97 (12)\", \"comment\": \"> ***Q12: Lines 459-460: How do you train a classifier to approximate density ratios? Is it unsupervised training?***\\n\\n**R12:** We randomly split 1000 samples from ParaRel-OOD as validation samples and the remaining 12k samples as testing samples. We then utilize the supervised identification strategy to divide the validation samples into $D_0^{'}$ and $D_1^{'}$, and the training dataset into $D_0$ and $D_1$.\\n\\nWe extract the features from the questions in $D_0^{'}$, $D_0$, and label them as 1(target data) and 0(source data). We then train a binary classifer and utilize the predicted probability to approximate density ratios.\\n\\nWe have added the details in Sec C.3 in the updated PDF.\\n\\n> ***Q13: In the black-box APIs setting, is the open-source model used to get the uncertainty score even during testing?***\\n\\n**R13:** Yes. In Table 3, we utilize open-source model to provide certainty scores both in calibrating and testing. However, one could employ black-box uncertainty quantification methods to serve as score functions, which will not necessitate open-source models.\\n\\nWe here provide another experiment using only black-box APIs to calculate scores. Specifically, we utilize the SelfCheckGPT with NLI score to serve as the score function, with significance level $\\\\alpha$ = 0.1:\\n\\n| | Base (Acc%) | FactTest-scgpt (Acc%) |\\n| ------ | ----------- | --------------------- |\\n| Claude | 58.25 | 75.54 |\\n| GPT-4o | 66.39 | 73.65 |\\n\\nHowever, black-box uncertainty quantification methods usually prompt the APIs multiple times to compute uncertainty, which leads to much higher cost. 
Therefore, utilizing open-source models to calculate scores is a feasible and cost-friendly option.\\n\\n**Reference**\\n[1] Survey of Hallucination in Natural Language Generation, ACM Computing Surveys 2023.\\n\\n[2] A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions, 2023.\\n\\n[3] A plug-in approach to Neyman-Pearson classification, JMLR 2013.\\n\\n[4] SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models, EMNLP 2023.\\n\\n[5] R-Tuning: Instructing Large Language Models to Say \\u2018I Don\\u2019t Know\\u2019, NAACL 2024.\\n\\n[6] C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models, ICML 2024.\\n\\n[7] Language Models with Conformal Factuality Guarantees, ICML 2024.\\n\\n[8] Decoding Intelligence: A Framework for Certifying Knowledge Comprehension in LLMs, ArXiv 2024.\\n\\n[9] Quantitative Certification of Knowledge Comprehension in LLMs, SeT LLM @ ICLR 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer Fj97 (8)\", \"comment\": \"> ***W10: mention the relevant prior works on providing guarantees on the generations of LLMs.***\\n\\n**R10:** Thank you for your suggestions. We have included prior works on providing guarantees on the generations of LLMs as well as the differences between FactTest and these works in our updated PDF(See Sec.5 and Sec.B). Here we provide a brief summary about the related works you mentioned:\\n\\nC-RAG [6] provides conformal risk analysis for RAG models and certifies an upper confidence bound. Conformal Factuality [7] enables the application of conformal prediction in improving model performance while FactTest evaluates the correctness and abstain from answering unknown questions. QuaCer-C [8,9] certifies knowledge comprehension in LLMs with formal probabilistic guarantees, whose goal is similar to ours but it only focuses on knowledge comprehension task.\"}", "{\"title\": \"Response to Reviewer Fj97 (7)\", \"comment\": \"> ***W9-5: (Experiments) no prior uncertainty quantification baselines or other hallucination mitigation methods.***\\n\\n**R9-5:** Thank you for your suggestion. In fact, all uncertainty quantification methods can serve as the score functions and be integrated into our framework. Besides, hallucination mitigation tries to improve the model outputs while our goal is to detect hallucination. We now include SelfCheckGPT-NLI[4] as our baseline, which is a zero-resource hallucination detection method. It will output a contradiction probability between 0 and 1, and then we evaluate the answers to questions with a score less than 0.5. We list some results as follows, and for the complete table please refer to Table 1 in the updated PDF.\", \"table\": \"The accuracy performance and Type 1 error of FactTest using instruction-tuned models as base models. 
The significance level is set to 0.1.\\n\\n| Dataset | Model | Base | FactTest-se15 | FactTest-kle15 |\\n| -------- | --------------------- | ----- | ------------- | -------------- |\\n| ParaRel | Llama-3.2-3B-Instruct | 39.34 | 72.79 (0.08) | 80.01 (0.08) |\\n| ParaRel | Tulu-2-7B | 43.89 | 75.47 (0.06) | 78.49 (0.07) |\\n| HotpotQA | Llama-3.2-3B-Instruct | 33.40 | 57.75 (0.06) | 60.38 (0.07) |\\n| HotpotQA | Tulu-2-7B | 32.91 | 53.54(0.05) | 45.89(0.10) |\\n| WiCE | Llama-3.2-3B-Instruct | 55.11 | 75.16 (0.09) | - |\\n| WiCE | Tulu-2-7B | 57.20 | 63.22 (0.08) | - |\\n| FEVER | Llama-3.2-3B-Instruct | 33.33 | 68.48 (0.10) | - |\\n| FEVER | Tulu-2-7B | 47.87 | 69.40 (0.09) | - |\"}", "{\"summary\": \"This paper proposes a novel statistical method that detects factuality of large language models (LLMs) with statistical guarantees, i.e. hallucination detection with guarantees. The main method assumes the iid assumption, and controls the type 1 error by thresholding uncertainty of LLM\\u2019s answers. This is further extended to hold in covariate shift by using rejection sampling. The efficacy of methods is validated over question-answering and multiple-choice tasks with white-box and black-box models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"I like this paper as mainly this tackles an important and timely problem.\", \"This paper attacks an important problem, combatting hallucination in LLMs with guarantees.\", \"The paper is well-written and easy to follow.\", \"This paper has Fairly extensive experiments.\"], \"weaknesses\": [\"This paper is quite interesting, but I am mainly leaning to rejection due to the novelty of this paper \\u2013 this paper completely ignores existing papers in conformal prediction and selective prediction, which are popular methods for building trustworthy AI models in general.\", \"(1) and (3) are equivalent to PAC-style conformal prediction (See proposition 2b in https://arxiv.org/abs/1209.2673 or other related papers). What is the novelty of the proposed method with respect to the PAC-style conformal prediction?\", \"In language tasks, obtaining the correctness of answers is the most important issue. Otherwise, we can apply traditional techniques for LLMs (e.g., conformal prediction). Conformal language modeling (https://arxiv.org/abs/2306.10193) proposes to extend conformal prediction for LLMs and this method can be used as a detector by checking whether a generated answer is included in a conformal set. What\\u2019s the novelty of the proposed method with respect to this conformal language modeling? How can you obtain the indicator variable y_i in Section 2.2?\", \"To my understanding, (4) is incorrect. This is barely an application of a binomial tail bound under the iid assumption, but as written in the paper it is not iid, i.e., the outer probability is taken over D but the inner probability is taken over a filtered distribution P_0. Can you justify the correctness of (4) if I miss something?\", \"If this paper\\u2019s method is equivalent to PAC conformal prediction, extension to covariate shift via rejection sampling is not novel. In particular, https://arxiv.org/abs/2106.09848 extends the PAC conformal prediction under covariate shift using the same rejection sampling techniques. What\\u2019s the novel part of the proposed method compared to this existing work?\", \"In experiments, an important baseline is missing. 
Can you compare your method and PAC conformal prediction in experiments for both no shift and covariate shift cases?\"], \"questions\": \"Previously mentioned Weaknesses end with questions. Please refer to those questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer roCe\", \"comment\": \"> Achieving (3) is exactly the goal of conformal prediction except that this paper only considers incorrect samples\\n\\nThank you for your follow-up comment. We would like to clarify how our work distinguishes itself from existing conformal prediction (CP) methods in LLMs and to highlight our unique contributions to the field of hallucination detection.\\n\\n**Our work fundamentally builds upon the Neyman-Pearson (NP) classification framework**[1] to establish a threshold selection mechanism that ensures control over Type I error in the context of hallucination detection. **Although NP classification does not explicitly reference conformal prediction**, seeing the inherent connection, **we have demonstrated that the relationship between conformal prediction and NP classification is akin to the well-known duality between confidence interval and hypothesis testing**, and **provided a formal proposition** to show that **the technique employed in the NP umbrella algorithm is equivalent to that used in PAC-style conformal prediction** [2] for determining membership based on p-values (**See Sec.C Discussion in the update PDF**). By defining a classifier and calibrating solely on incorrect samples, we can effectively reformulate PAC conformal prediction to suit our specific problem of hallucination detection. **This novel approach of defining a plug-in classifier and focusing calibration exclusively on incorrect samples** represents a simple yet significant advancement over traditional methods and offers a new perspective on applying CP to LLMs. Moreover, we identify the optimal score function for constructing the optimal classifier with minimum type II error. This aspect has not yet been explored in the conformal prediction literature.\\n\\nOverall, our main contributions apart from NP classification and PAC conformal prediction are as follows:\\n\\n- We **take the first step to formulate hallucination detection as a hypothesis testing problem**, explicitly aiming to control both Type I and Type II errors. Traditional CP methods in LLMs, such as Conformal Language Modeling (CLM), focus primarily on coverage guarantees without differentiating between correct and incorrect samples, thereby **lacking explicit error rate controls** essential for reliable hallucination detection (See our next response for more details).\\n- We define **a plug-in classifier to predict the correctness** of question-answer pairs and **utilizes only incorrect samples for calibration**. This targeted calibration allows for precise control over Type I errors by specifically modeling the distribution of incorrect answers, unlike CP\\u2019s uniform treatment of all samples.\\n- In addition to controlling Type I error, **our framework provides a novel Type II error control analysis, which is not addressed in the NP umbrella algorithm[1] or PAC conformal prediction frameworks**. 
This dual-error control ensures that incorrect answers are reliably rejected while correct answers are not unnecessarily excluded, thereby enhancing the overall reliability of our factuality testing in LLMs.\\n- We perform extensive experiments on question-answering (QA) and multiple-choice benchmarks. The empirical results demonstrate that FactTest is not only **simple to use** but also **highly effective** in detecting hallucinations, achieving 40\\\\% accuracy improvement on ParaRel, WiCE and FEVER, and 30\\\\% on HotpotQA.\\n\\n[1] Neyman-Pearson Classification Algorithms and NP Receiver Operating Characteristics, Science Advances 2018.\\n\\n[2] Conditional Validity of Inductive Conformal Predictors, PMLR 2012.\"}", "{\"comment\": \"Thank you for your response. We appreciate the opportunity to address your concerns and clarify aspects of our methods.\\n\\nWe want to clarify more about \\\"assessing correctness is inherently challenging\\\": Unlike training or validation phases where ground-truth labels are accessible, labels for newly generated answers given testing samples are not available in real-time. This absence necessitates reliance on indirect measures, such as external knowledge bases, uncertainty quantification and so on, to infer the correctness of generated responses. \\n\\nRegarding the i.i.d assumption, as we mentioned before, many foundational machine learning theories operates under the iid assumption to ensure the validity of their theoretical results. Generally speaking, while there're some cases that can go beyond i.i.d, these are often limited to specific scenarios or require additional assumptions or mechanisms. Nonetheless, recognizing the practical limitations of the iid assumption, we have extended our theoretical framework to accommodate distribution shifts, thereby enhancing the generalizability of our approach beyond strictly iid scenarios.\\n\\nBesides, we have revised our paper to include all the additional experiments you have raised and refined parts of the main text to eliminate potential confusion and address your concerns. As the discussion deadline approaches, we will appreciate it if you could give us examples that you think \\\"the claims need to be made more formal and informative\\\" and the specific \\\"positioning of the paper\\\" that needs revision. \\n\\nThank you again.\"}", "{\"title\": \"Response to Reviewer Fj97 (1)\", \"comment\": \"Thank for your constructive feedbacks. We are glad that you acknowledge the experiments of our work. Here we provide responses to your comments one by one. We also add more experimental results according to your suggestions. We hope these could address your concerns (also see the revised paper in the updated pdf):\\n\\n> ***W1: concerns about the equivalence of the notions of certainty of the model and its correctness.***\\n\\n**R1:** Thank you for your question. We want to clarify that our work does not rely on the premise that if the model is certain then it is going to be correct. FactTest can control the Type I error with any score function, while Type II error can be controlled if the score function indeed quantifies the model correctness. Given the difficulty of directly measuring correctness without ground truth labels during testing, we follow prior works and use uncertainty as an indicator of potential hallucination. 
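As one example of such an uncertainty indicator, the sketch below computes a crude semantic-entropy-style score from repeated samples. It replaces the entailment-based clustering and sequence likelihoods of the real Semantic Entropy method with naive string grouping and empirical frequencies, so it should be read only as an illustration of the idea, not as the score function used in the paper.

```python
import math
from collections import Counter

def naive_semantic_entropy(sampled_answers):
    """Crude stand-in for a semantic-entropy-style uncertainty score.

    Groups sampled answers by a naive meaning key (lowercased, trailing period
    stripped) and returns the entropy of the empirical distribution over groups.
    """
    keys = [a.lower().strip().rstrip(".") for a in sampled_answers]
    counts = Counter(keys)
    n = len(keys)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def certainty_score(sampled_answers):
    # Lower entropy over distinct meanings -> higher certainty in the answer.
    return -naive_semantic_entropy(sampled_answers)

samples = ["Paris", "paris", "Paris.", "Lyon", "Paris"]
print(certainty_score(samples))
```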
In our implementation, we utilize the certainty scores to serve as score functions, which do not contradict previous works.\\n\\nIn our previous PDF, we term 'the model answers q correctly' as 'model being certain of q', which may cause some confusion. Therefore, we have revised the expressions in Sec.1 and Sec.2 and differentiate these two terms. Some modifications are: (1) If null hypothesis is rejected, i.e., $M(q)$ aligns with $a$, the question-generated answer pair will be deemed correct; otherwise, it's incorrect. (2) $y_i$ indicates the correctness of $M(q)$, based on which the samples will be divided into incorrect subset $\\\\mathcal{D}_0$ and correct subset $\\\\mathcal{D}_1$.\\n\\nAs for your concern about hallucination in Line 034, we want to rephrase the definition of it, which is *the generated content that nonsensical or unfaithful to the provided source content but **appears to be fluent and grounded in the real context.*** [1,2] The \\\"seemingly high confidence\\\" is different from the inherent model uncertainty to be estimated. We have modified the expression to *'generate nonfactual and incorrect information with seemingly high fluency and natural grounding'* in Line 034 to make it clearer.\\n\\n> ***W2: a mismatch in the definition of hallucination: the models generating incorrect responses with high confidence\\uff1bhallucination occurs when model is uncertain.***\\n\\n**R2:** Thank you for your question. As stated in **R1**, the \\\"seemingly high confidence\\\" in our original version means the LLMs output hallucinations in a way that seems natural and fluent, which is hard to tell apart from other \\u201creal\\u201d perceptions [1]. This is different from the inherent model uncertainty. We follow the prior works and assume that when the model is uncertain, there's a high probability that the generated output is a hallucination. We have modify the statement regarding hallucination in Line 34 to make it more clearer.\"}", "{\"title\": \"Response to Reviewer sEhD (1)\", \"comment\": \"Thank you for the positive feedback and constructive suggestions that can help us to improve the paper. We are happy that you acknowledge our work\\u2019s originality, technical quality and presentation. For your questions, we provide additional experimental results and explanations as follows (also see the revised paper in the updated pdf):\\n\\n> ***W1: Robustness with respect to density ratio estimates***\\n\\n**R1:** Thank you for your comment. Indeed, density ratio estimation can be unreliable in some cases, such as in high dimensional spaces. In theory, our method will be affected by the density ratio estimation error.\\n\\nIn the covariate shift setting, the source conditional distribution $\\\\tilde P_{y|q,M(q)}$ equals the target conditional distribution $P_{y|q,M(q)}$. If we assume the oracle score function $\\\\eta(q,M(q))=\\u2119_{y\\\\sim P_{y|q,M(q)}}(y=1|q,M(q))$ can be estimated well and we have access to unlabeled samples from the target distribution $P_{q,M(q)}$, one way for decreasing the impact of density ratio estimation is using semi-parametric theory to construct a debiased classifier, which is doubly robust with respect to the errors of density ratio estimator and oracle score estimator. Then the error of density ratio estimator will appear through a multiplication with the error of oracle score estimator. 
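For completeness, here is a minimal sketch of the classifier-based density-ratio estimate discussed above (and in the response to Q12): a probabilistic classifier separating source from target questions, whose rescaled odds approximate $p_{\text{target}}(q)/p_{\text{source}}(q)$. The TF-IDF features and logistic regression are illustrative choices, not the implementation used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def estimate_density_ratio(source_questions, target_questions):
    """Return a function mapping questions to estimated density ratios w(q).

    Train a classifier to distinguish target (label 1) from source (label 0)
    questions; by Bayes' rule, the odds p/(1-p) rescaled by the class sizes
    approximate p_target / p_source.
    """
    texts = list(source_questions) + list(target_questions)
    labels = np.r_[np.zeros(len(source_questions)), np.ones(len(target_questions))]
    vec = TfidfVectorizer().fit(texts)
    clf = LogisticRegression(max_iter=1000).fit(vec.transform(texts), labels)

    def ratio(questions):
        p = clf.predict_proba(vec.transform(questions))[:, 1]
        prior = len(source_questions) / len(target_questions)
        return np.clip(p / (1.0 - p) * prior, 1e-3, 1e3)  # clip for numerical stability

    return ratio

w = estimate_density_ratio(
    ["Who wrote Hamlet?", "Capital of France?"],
    ["What is the capital city of Peru?", "Which city is the capital of Chile?"],
)
print(w(["Capital of Spain?"]))
```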
\\n\\nHowever, in our cases, the oracle score function corresponds to the oracle rule of judge whether an answer is correct for a question, which is itself a challenging and unsolved problem. Therefore, it is not clear how to construct a good estimator for the oracle score function. Consequently, for hallucination detection of LLMs, it is hard to bypass the dependence on the first order density ratio estimation error.\\n\\n> ***Q1: Is there a typo in the legend of Figure 1? The ve10 and ve15 results are repeated twice.***\\n\\n**R1:** Thank you for pointing out the problem and sorry for the confusion caused by our typos. The legends should be ve10, ve15, se10 and se15. We have corrected this in the revised pdf. \\n\\n> ***Q2: I feel the authors could bring a stronger connection with the calibration and conformal prediction literature.***\\n\\n**R2:** Thank you for your advice. We have added the literature review of calibration an conformal prediction in the revised paper (See Sec 5 and B). Here we briefly summarize these works:\\n\\n**More related works about Confidence Calibration:** Recent research on confidence calibration for LLMs has explored several innovative approaches. For example, Tian et al. (2023) elicits verbalized confidences to improve calibration. Huang et al. (2024) proposes confidence elicitation methods for long-form generations. Multicalibration (Detommaso et al., 2024) aims to ensure LLM confidence scores accurately reflect the true likelihood of predictions being correct.\\n\\n**More related works about Conformal Prediction:** Conformal prediction is a statistical framework that provides finite-sample, distribution-free guarantees on the uncertainty of predictions. It enables the construction of confidence sets that contain the true outcome with a specified probability (Shafer & Vovk, 2007; Angelopoulos & Bates, 2022; Barber et al., 2023). Specifically, Kumar et al. (2023) provides conformal guarantees on multiple-choice datasets. C-RAG (Kang et al., 2024) provides conformal risk analysis for RAG models and certifies an upper confidence bound. CLM (Quach et al., 2024) extends conformal prediction for open-form generations and provide coverage guarantees. Conformal Factuality (Mohri & Hashimoto, 2024) enables the application of conformal prediction in improving model performance. FactTest differs from those works in that it aims to evaluate the model's ability to answer correctly and abstain from answering unknown questions.\"}", "{\"title\": \"Response to Reviewer Fj97 (10)\", \"comment\": \"> ***Q4: How does assuming H to be an identity function ensure the condition $\\\\Vert H\\\\circ\\\\hat{\\\\eta}-\\\\eta\\\\Vert_\\\\infty\\\\leq\\\\epsilon_\\\\eta$? What is the point of having H in the first place, then?***\\n\\n**R4:** \\nIn our paper, we used the sentence \\\"WLOG, we assume $H$ is the identity function\\\" to simplify the notations and derivations in the type II error analysis. This simplification mainly works for the proof of Theorem 2 and does not affect the statement of this theorem. In the revision, we move this statement into the appendix to avoid confusion.\\n\\nIn the following, we will explain the role of the increasing transformation $H$ and the validity of this \\\"WLOG\\\" simplification. 
Specifically, we will demonstrate in the following paragraphs that 1) our construction of $\\\\hat f_\\\\alpha$ is invariant under increasing transformations of $\\\\hat\\\\eta$, then we can pretend that the score function is $H\\\\circ\\\\hat\\\\eta$ and thus, the transformation becomes identity for the new score, 2) the introduction of $H$ allows a more flexible metric to quantify the difference between $\\\\hat\\\\eta$ and $\\\\eta$, allowing the usage of many modern classification algorithms for training a score function from data.\\n\\nSuppose there exists some increasing function $H$ such that $\\\\Vert H\\\\circ\\\\hat\\\\eta-\\\\eta\\\\Vert_\\\\infty\\\\le\\\\epsilon_\\\\eta$. Recall that the classifier we consider has the form $\\\\hat f_\\\\alpha=\\\\mathbb{I}(\\\\hat\\\\eta>T_{(\\\\hat k)})$, where the index $\\\\hat k$ satisfies Equation (5). If we replace $\\\\hat\\\\eta$ by $H\\\\circ\\\\hat\\\\eta$ and rerun the algorithm using the new score function $H\\\\circ\\\\hat\\\\eta$, we have the following observations:\\n1) $\\\\hat k$ defined in Equation (5) remains the same.\\n2) The new classifier is $\\\\mathbb{I}(H\\\\circ\\\\hat\\\\eta>(H(T))\\\\_{(\\\\hat k)})$, where $H(T_i)=H\\\\circ\\\\hat\\\\eta(q_i\\\\^{(0)},M(q_i\\\\^{(0)}))$ are the new scores of the samples and $(H(T))\\\\_{(\\\\hat k)}$ is the $\\\\hat k$-th order statistic of $\\\\lbrace H(T_i):i\\\\in[n_0]\\\\rbrace$ with $(H(T))\\\\_{(1)}\\\\le\\\\ldots\\\\le(H(T))\\\\_{(n_0)}$.\\n3) Since $H$ is an increasing function, we have $(H(T))\\\\_{(\\\\hat k)}=H(T\\\\_{(\\\\hat k)})$.\\n4) Then the new classifier $\\\\mathbb{I}(H\\\\circ\\\\hat\\\\eta>(H(T))\\\\_{(\\\\hat k)})$ equals $\\\\mathbb{I}(H\\\\circ\\\\hat\\\\eta>H(T\\\\_{(\\\\hat k)}))$ which further reduces to the original classifier $\\\\hat f_\\\\alpha=\\\\mathbb{I}(\\\\hat\\\\eta>T\\\\_{(\\\\hat k)})$.\\n\\nThese observations tell us that our algorithm is invariant under increasing transformation of the score function $\\\\hat\\\\eta$ and using $\\\\hat\\\\eta$ and using $H\\\\circ\\\\hat\\\\eta$ lead to the same decision. Therefore, without the loss of generality, we pretend that we are using the score function $H\\\\circ\\\\hat\\\\eta$ and increasing transformations are no longer required for this score function. This is the reason we assume $H$ is identity function.\\n\\nHowever, introducing the increasing transformation $H$ is extremely useful and is also a unique contribution of our work. Because if we train the score function using data, it corresponds to the probability of predicting $y=1$ based on $(q,M(q))$. It is well known that modern classification algorithms like deep neural networks are bad in calibration, which means they can not estimate the underline conditional probability $\\\\eta(q,M(q))$ well. But our algorithm is still guaranteed to have small type II error if the deep neural networks are order consistent, in the sense that the two events $\\\\lbrace\\\\hat\\\\eta(q_1,M(q_1))>\\\\hat\\\\eta(q_2,M(q_2))\\\\rbrace$ and $\\\\lbrace\\\\eta(q_1,M(q_1))>\\\\eta(q_2,M(q_2))\\\\rbrace$ are close to each other. This flexibility allows the use of many modern classifiers for learning $\\\\eta$.\\n\\n\\n> ***Q5: Line 201: can the target distribution not be redefined and hence expanded to account for the covariate shift? Hence the previous theory can be reused.***\\n\\n**R5:** Yes, the previous theory can be reused after one additional step. \\n\\nThe previous theory requires the data comes from the target distribution. 
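The "one additional step" mentioned above is a rejection-sampling pass driven by the estimated density ratios; a minimal sketch follows, assuming the ratio estimates $\hat w(q)$ are already available (the values below are synthetic placeholders).

```python
import numpy as np

def rejection_resample(samples, weights, rng=None):
    """Keep sample i with probability weights[i] / max(weights).

    If weights[i] is (proportional to) the density ratio p_target / p_source at
    sample i, the retained subsample behaves approximately like draws from the
    target distribution, so the original threshold-selection step can be reused.
    """
    rng = rng or np.random.default_rng(0)
    w = np.asarray(weights, dtype=float)
    keep = rng.uniform(size=len(w)) < w / w.max()
    return [s for s, k in zip(samples, keep) if k]

rng = np.random.default_rng(1)
calib = [f"q_{i}" for i in range(1000)]                 # incorrect source-domain pairs
w_hat = rng.lognormal(0.0, 0.5, size=1000)              # hypothetical density-ratio estimates
resampled = rejection_resample(calib, w_hat, rng)
print(len(resampled), "of", len(calib), "calibration points kept")
```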
In order to deal with covariate shift, we adopt rejection sampling, which aims to transform the data we have into samples from the target distribution. Once we get the samples from the target distribution, the previous theory can be applied to guarantee the performance of our algorithm.\n\n> ***Q6: Lines 213-215: Do the source and target distributions have the same sample space?***\n\n**R6:** Yes, we do assume the support of the target distribution is contained in that of the source distribution.\"}", "{\"title\": \"Response to Reviewer roCe\", \"comment\": \"> The CLM is the main competitor of this paper\n\nWhile both approaches aim to ensure correctness, there are critical distinctions that render Conformal Language Modeling (CLM) unsuitable for effective factuality testing.\n\nAt first glance, the goals of our paper and CLM may appear similar, as both seek to provide correct answers. Furthermore, the admissible function $A$ in CLM could be chosen as the correctness indicator $y$ in our framework. However, a closer examination reveals that CLM is fundamentally inadequate for factuality testing. Specifically, CLM fails to provide guarantees for controlling either type I or type II errors.\n\nUsing our notation, CLM constructs a set $\\\\mathcal{C}(q)$ of answers $\\\\tilde a$ for a given question $q$ with the property $$\\\\mathbb{P}(\\\\mathbb{P}(\\\\exists \\\\tilde a\\\\in\\\\mathcal{C}(q):A(\\\\tilde a)=1|\\\\mathcal{D})\\\\ge 1-\\\\alpha)\\\\ge 1-\\\\delta.$$ \nRoughly speaking, the conformal set $\\\\mathcal{C}(q)$ is guaranteed to contain at least one correct answer for $q$ with high probability. However, $\\\\mathcal{C}(q)$ has the following problems in factuality testing:\n\n1) The set $\\\\mathcal{C}(q)$ is not guaranteed to contain only correct answers. On the contrary, to ensure the $1-\\\\alpha$ coverage of $\\\\mathcal{C}(q)$ for a correct answer, it is likely to contain incorrect answers. If we only reject answers outside $\\\\mathcal{C}(q)$, all the incorrect answers in $\\\\mathcal{C}(q)$ will be misclassified as correct. 
In extreme cases, to ensure a 100\\\\% coverage of $\\\\mathcal{C}(q)$, $\\\\mathcal{C}(q)$ should contain all possible answers, then no answer will be rejected and the type I error becomes 100\\\\%. Therefore, CLM does not control type I error.\\n\\n2) CLM guarantees that $C(q)$ contains at least one correct answer but does not account for cases where $q$ has multiple correct answers. Any correct answer not included in $\\\\mathcal{C}(q)$ will be misclassified as incorrect. As a result, CLM provides no control over type II error.\\n\\nIn summary, the approach of detecting hallucinations by verifying whether a generated answer belongs to CLM conformal set is inherently infeasible for effective factuality testing.\\n\\n> The details on the comparison with the CLM are missing.\\n\\nThank you for your suggestion. We provide more details as follows to address your concerns.\\n\\nTo ensure fair comparison, we utilize the indicator $y_i$ from our paper as the admission function in CLM and just as you said, employ the indicator loss. We utilize the Algorithm 1 in CLM to construct conformal set since we do not need to select individual components in our datasets. Specifically, we use the likelihood function of the base LM with length-normalization to serve as $\\\\mathcal{Q}(x,y)$, and MAX as $\\\\mathcal{F}$, consistent with CLM's original setup. We utilize the code provided by CLM to implement these functions. For $k_{max}$, we set it to 20, adhering to CLM's configuration. For $\\\\epsilon$, as mentioned in the paper, not all values of $\\\\epsilon$ are valid. In our experiments, for example, there doesn't exist a valid configuration when $\\\\epsilon<0.4$ on ParaRel, we reported the result with $\\\\epsilon=0.8$. However, the results for other $\\\\epsilon$ can be seen as follows.\", \"table\": \"The accuracy of FactTest-kle15 and CLM.\\n| | Pretrained | CLM,$\\\\epsilon=0.5$ | CLM,$\\\\epsilon=0.6$ | CLM,$\\\\epsilon=0.7$ | CLM,$\\\\epsilon=0.8$ | CLM,$\\\\epsilon=0.9$ | FactTest-kle15 |\\n| ------------ | ---------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | -------------- |\\n| OpenLlama-3B | 36.66 | 38.08 | 37.45 | 38.02 | 39.86 | 38.67 | 78.45 |\\n| OpenLlama-7B | 40.38 | 43.11 | 42.72 | 42.31 | 42.58 | 55.37 | 76.83 |\"}", "{\"comment\": \"> ***Q1: What does the parameterization wrt $\\\\mathcal{D}$ mean in the outermost probability term in Equation 1? What distribution is this probability defined over?***\\n\\n**R1:** As we have explained in the response to Weakness 3 (W3), samples in $\\\\mathcal{D}$ follow the distribution $P_{q,M(q),a}$, then the two probabilities in Equation (1) can be understood in the following way.\\n1) The inner probability $\\\\mathbb{P}\\\\_{(q,M(q))\\\\sim P_0}$ counts the randomness of the independent incorrect test sample $(q,M(q))\\\\sim P_0$ and the classifier $\\\\hat f_\\\\alpha$ is fixed here.\\n2) The outer probability $\\\\mathbb{P}\\\\_{\\\\mathcal{D}}$ is taken with respect to the randomness of the dataset $\\\\mathcal{D}\\\\overset{i.i.d.}{\\\\sim}P_{q,M(q),a}$, or equivalently, $\\\\mathbb{P}\\\\_{\\\\mathcal{D}}$ counts all the randomness in the classifier $\\\\hat f_\\\\alpha$.\\n\\n> ***Q2: Do $\\\\mathcal{D_0}$ and $\\\\mathcal{D_1}$ contain of multiple answers $M(q)$ for same $q$?***\\n\\n\\n**R2:** \\nNo. 
Recall $\\\\mathcal{D_0}=\\\\lbrace (q_i,M(q_i)):y_i=0,i\\\\in[n]\\\\rbrace$ and $\\\\mathcal{D_1}=\\\\lbrace (q_i,M(q_i)):y_i=1,i\\\\in[n]\\\\rbrace$ contain all the incorrect samples and correct samples, respectively. In $\\\\mathcal{D}\\\\_0$ and $\\\\mathcal{D}\\\\_1$, $M(q_i)$ is the currect realization of the answer for $q_i$ produced by $M$.\\n\\nIf we ask a language model $M$ a question $q$ once, it only output one answer $M(q)$, and our goal is to judge whether the currect output $M(q)$ is correct or not. To this end, we collect $n$ realizations of this question-answering procedure and aim to learn some common rules from the data. \\n\\n> ***Q3: Is $(q',M(q'))$ in line 167 from the datasets, or any possible pair?***\\n\\n**R3:** $(q',M(q'))$ can be any possible question-generated answer pair, not restricted to the observed samples.\", \"title\": \"Response to Reviewer Fj97 (9)\"}", "{\"title\": \"We would like to hear back from reviewer roCe\", \"comment\": \"Dear reviewer roCe,\\n\\nWe would like to follow up to see if the responses address your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!\"}", "{\"comment\": \"Thank you for your thorough answers, appreciated. I will keep my score.\"}", "{\"summary\": \"The paper proposes a framework, FactTest to reduce hallucinations in LLM responses. FactTest uses Neyman-Pearson methods to make an uncertainty classifier and prevents LLMs from answering responses for which they are uncertain to reduce hallucination. The empirical analysis shows the method's constrained Type-1 errors and accuracy.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The paper applies the Neyman-Pearson method to construct an uncertainty classifier with guarantees on constrained Type-1 error.\\n2. The work suggests a method to remove the typical iid assumption when designing Neyman-Pearson classifiers to cater to practical scenarios. \\n3. The experiments show applicability of the method to black-box API models as well, thus enhancing its practical value.\", \"weaknesses\": \"I have the following major criticisms for the paper. With these, I think that the paper is not ready for acceptance. However, if the authors can address my concerns appropriately, I can consider increasing my score.\\n1. My main concern is the assumption of the equivalence of the notions of certainty of the model and its correctness. The method builds entirely on the premise that if the model is certain then it is going to be correct. The wording of the paper conveys this too, where the words \\\"certain\\\" and \\\"correct\\\" are often used interchangeably. Lines 97-104 mix up the notions of correctness and certainty. As the paper itself mentions in line 34 that models can generate incorrect responses with high confidence, such an equivalence of the notions of correctness and certainty is incorrect till certainty is formally defined differently from prior works, which is not done in the paper. I am especially confused by line 102, which says that when the null hypothesis is rejected, i.e., the model can answer q certainly, then M(q) aligns with a. There is no justification given for the same till that point of the paper. Same confusion is created in lines 124-125 where first $y_i$ indicates uncertainty of responses and then correctness in the following equation.\\n2. 
There is a mismatch in the definition of hallucination in the Introduction. Line 34 mentions hallucination as the models generating incorrect responses with high confidence and line 42 says that hallucination occurs when model is uncertain. The authors should consistently define the property.\\n3. I am doubtful about the theoretical generalizability of the uncertainty predictor $\\\\hat{f}_\\\\alpha$, which is constructed on the samples from $\\\\mathcal{D}$ to samples outside of $\\\\mathcal{D}$, to be useful as a general uncertainty calibrator. I believe that the authors should thoroughly study this aspect. Does the sample space of $P$ also contain elements outside of $\\\\mathcal{D}$?\\n4. \\\"We prove that our statistical framework achieves strong power control under mild conditions, ensuring that the predictor can also maintain a low Type II error.\\\" I don't see how this is a contribution from the main paper. The result of using a thresholding based classifier that has controlled Type 2 error appears to follow from Tong (2013). As the authors claim this contribution, they should provide at least a proof sketch for Theorem 1.\\n5. There are several instances of using terms before definition, some of which I enumerate below. \\n 1. Lines 50-54 mention terms like Type-1 error, Type-2 error, and false positive rate, before clearly state the null hypothesis.\\n 2. [Line 62] The term \\\"human-annotated samples\\\" is used before definition/context. \\n 3. $\\\\epsilon_{\\\\eta}$ is used in line 170 before definition.\\n 4. The paper does not clearly state the *mild conditions* (phrase used several times in the paper) under the Type-2 error is controlled.\\n 5. What is meant by the phrase \\\"aligns with the correct answer\\\"?\\n 6. What are $\\\\mathcal{Q}$ and $\\\\mathcal{A}$? They are used before definition in line 105.\\n 7. Line 269: How is the \\\"probability distribution over distinct meanings\\\" defined?\\n 8. What is FactTest-t that comes up Section 4.3, without any prior definition?\\n6. *Definition of M(q)*\\n 1. I sense ambiguity in the statement in line 116, where the authors mention that they consider the effects of the distribution of $M(q)$ as well in the probability term in equation 1. $M(q)$ is a certain realization from the distribution of responses, which is not explicitly captured in the expression. I would encourage the authors to explain this point more explicitly. \\n 2. All through sections 2 and 3, the framework used M(q) as a single generation for a given question q. However, in the experiments, in equation 6, M(q) appears to be a list of responses. I would encourage the authors to be consistent in their notations.\\n8. Major typos:\\n 1. I think that in line 167, it should be \\\"ability that \\ud835\\udc40 answers the question *\\ud835\\udc5e* (not q') certainly given any question \\ud835\\udc5e\\u2032 and the generated answer \\ud835\\udc40(\\ud835\\udc5e\\u2032)\\\".\\n 2. It looks like the legends of Figure 1 have typos in them, as the plot names are repeated.\\n 3. Line 465: Shouldn't it be FactTestO-SE instead of FactTest-SE?\\n11. Line 223: The authors should provide the proof for $\\\\tilde{\\\\mathcal{D}}_0\\\\mid\\\\mathcal{I}\\\\sim P_0$ and for iid. Moreover, what is meant by the notation: $\\\\tilde{\\\\mathcal{D}}_0\\\\mid\\\\mathcal{I}$, specifically, the conditioning? \\n14. Experiments:\\n 1. The evaluation is just on Llama models and GPT-4o-mini. 
There are several other small open-source and closed-source models that must be evaluated to fully understand the efficacy of the method. Examples are Mistral, Gemini, Claude, GPT-4o, Phi, etc. \\n 2. I think the evaluations should also report the % of willingly answered questions. Without that, it is hard to judge whether the method makes the models too conservative about QA. \\n 3. Table 1 must also report the accuracy of the pretrained model on the subset of questions answered willingly in the FactTest experiments. \\n 4. For practical significance level $\\\\alpha=0.05$, the Type-2 error shown in Figure 2 does not appear to be controlled. It appears to invalidate the claim of the paper about controlling Type-2 error too. \\n 5. The experiments consider only finetuning-based baselines and has no prior uncertainty quantification baselines or other hallucination mitigation methods to compare their uncertainty results against.\\n 6. It is not made clear what parts of the datasets used were for training and testing of the methods.\\n 7. The main text should mention how the ParaRel-OOD dataset differs from ParaRel.\\n 8. I don't understand how FactTest can work on just the pretrained model, without instruction tuning. The former models are known to not output the answer properly in most settings, which is the main motivation of instruction tuning. Why can't FactTest be applied to instruction tuned models?\\n25. The paper does not mention the relevant prior works on providing guarantees on the generations of LLMs. A non-exhaustive list is following:\\n 1. C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models by Kang et al.\\n 2. Decoding Intelligence: A Framework for Certifying Knowledge Comprehension in LLMs by Chaudhary et al. \\n 3. Language Models with Conformal Factuality Guarantees by Mohri et al.\", \"questions\": \"1. What does the parameterization wrt $\\\\mathcal{D}$ mean in the outermost probability term in Equation 1? The paper mentions $\\\\mathcal{D}$ as a dataset, which could be the sample space of the distribution. So what distribution is this probability defined over?\\n4. Do $\\\\mathcal{D}_0$ and $\\\\mathcal{D}_1$ contain of multiple answers $M(q)$ for same $q$?\\n5. Is $(q',M(q'))$ in line 167 from the datasets, or *any* possible pair?\\n6. Lines 172-173: How does assuming H to be an identity function ensure the condition $\\\\|H\\\\circ\\\\hat{\\\\eta}-\\\\eta\\\\|_\\\\infty\\\\leq\\\\epsilon_\\\\eta$? What is the point of having H in the first place, then?\\n7. Line 201: can the target distribution not be redefined and hence expanded to account for the covariate shift? Hence the previous theory can be reused. \\n8. Lines 213-215: Do the source and target distributions have the same sample space?\\n9. Line 256: Why is the expected value of $\\\\tilde{v}$ taken?\\n10. How is the frequency/probability term in Equation 6 calculated/estimated?\\n12. Lines 289-290: how will the distribution-free setting be violated for models requiring fine-tuning to answer factual question? I think most LLMs can do factual QA (perhaps not optimally) without finetuning. So what is the point of mentioning this?\\n13. Why does KLE have only a 15 generation variant in Table 1?\\n14. Does the experiment corresponding to Figure 3 suggest that after the construction of the certainty classifier (basically identification of its threshold), one needs to do another search for the accuracy maximizing threshold? 
I don't get the point of this experiment, if the search for the accuracy maximizing threshold is not a part of the method. \\n15. Lines 459-460: How do you train a classifier to approximate density ratios? Is it unsupervised training? \\n17. In the black-box APIs setting, is the open-source model used to get the uncertainty score even during testing?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer roCe (1)\", \"comment\": \"Thank you for the questions you raise to help improve our paper. We are happy that you acknowledge our work\\u2019s motivation, presentation and experiments. For your questions, we provide more explanations and additional experimental results to address your concerns (also see the revised paper in the updated pdf):\\n\\n> ***Q1: (1) and (3) are equivalent to PAC-style conformal prediction. What is the novelty of the proposed method with respect to the PAC-style conformal prediction?***\\n\\n**R1:** Thank you for pointing out the reference Vovk (2012)[1] of PAC-style conformal prediction. We would like to clarify the difference between our method and the PAC-style conformal prediction:\\n1) Conformal prediction aims to produce a prediction set for the correct output, but we aim to test whether the output is incorrect or not.\\n2) Conformal prediction typically treats all samples equally, however, we treat the correct and incorrect samples differently.\\n3) The power analysis for conformal prediction mainly focuses on the sizes of prediction sets, but we study the type II error (misclassifying correct answers as incorrect) of our methods.\", \"our_novelty_can_be_summarized_as_follows\": \"1) We are the first to formulate hallucination detection as a hypothesis tesing problem.\\n2) Motivated by Neyman-Pearson classification, instead of constructing prediction sets for the correct answers using conformal prediction, we propose an one-sided hypothesis testing for the incorrectness of answers. Unlike conformal prediction where all samples are used equality, we prioritize detecting incorrect answers and utilize only incorrect samples in the calibration data.\\n3) We study the type II error of our method, while the power analysis for conformal prediction are mainly about the sizes of prediction sets.\\n\\n\\n> ***Q2: What\\u2019s the novelty of the proposed method with respect to conformal language modeling? How can you obtain the indicator variable y_i in Section 2.2?***\\n \\n**R2:** Thank you for your question. Our work relates to conformal language modeling, and we have included it in Sec. 5 in updated paper. However, our work is different from it particularly in the following aspects:\\n\\n1) The goals of our method and conformal language modeling[2] is different. Our goal is to detect incorrect answers, while conformal language modeling aims to generate correct answers.\\n2) Since the goals are different, the outputs of these two frameworks are also different. Suppose we ask the language model $M$ a hard question $q$, such that $M$ is likely to generate incorrect answers. In this case, if our algorithm thinks the answer is indeed incorrect, we replace its answer by \\\"I don't know\\\" and terminate. 
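A minimal sketch of the abstention behaviour described in the previous sentence — answer only when the certainty score clears the calibrated threshold, otherwise return "I don't know" — is given below. `generate` and `certainty` are hypothetical stand-ins for the base model and the score function, not the paper's API.

```python
IDK = "I don't know."

def answer_or_abstain(question, generate, certainty, threshold):
    """Generate once, keep the answer only if the certainty score clears the
    calibrated threshold, otherwise abstain."""
    draft = generate(question)
    if certainty(question, draft) > threshold:
        return draft
    return IDK

# Toy stand-ins for the model and the score function.
fake_generate = lambda q: "Canberra" if "Australia" in q else "Paris"
fake_certainty = lambda q, a: 0.9 if a == "Paris" else 0.2

print(answer_or_abstain("What is the capital of Australia?", fake_generate, fake_certainty, 0.5))
print(answer_or_abstain("What is the capital of France?", fake_generate, fake_certainty, 0.5))
```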
However, what conformal language modeling does is to ask $M$ to keep generating (likely incorrect) answers until it thinks there exists a correct answer, and then output a prediction set containing multiple answers.\n3) CLM is proposed to provide coverage guarantees, while our framework provides guarantees on the Type I error.\n4) In addition to type I error control, our work is also guaranteed to have small type II error. To the best of the authors' knowledge, conformal language modeling doesn't have such a power analysis.\n\nBesides, we have to state that \\\"using CLM as a detector by checking whether a generated answer is included in a conformal set\\\" is not feasible. The prediction set only guarantees that there exists a correct answer in the conformal set with high probability, with no guarantee of including all possible correct answers or of excluding incorrect answers. It is highly likely that a correct answer is not included in the prediction set, or that an incorrect answer is included in it. If we only reject answers not in the conformal set, all the incorrect answers inside the conformal set will be accepted, leading to a large type I error. Meanwhile, all the correct answers not in the conformal set will be rejected, resulting in a large type II error. Moreover, if one increases the coverage of the conformal set, this more reliable conformal set will lead to a larger type I error, since more incorrect answers will be included in the conformal set.\n\nAs for the indicator variable $y_i$, we use greedy decoding to get the realization of $M(q)$, and the check then depends on the task. Following R-Tuning[3], for multiple-choice datasets, $y_i = \\\\mathbf{1}[M(q_i)=a_i]$. For short-form question-answering datasets, where the ground truths are typically numbers, words or phrases, we set $y_i = \\\\mathbf{1}[a_i \\\\subseteq M(q_i)]$, which means a generated answer is considered correct only if it contains the provided answer. We have added these details in Sec 4.1 in the updated PDF.\"}", "{\"title\": \"Response to Reviewer Fj97 (6)\", \"comment\": \"> ***W9-2: (Experiments) The evaluations should also report the % of willingly answered questions.***\n\n**R9-2:** Thank you for your constructive advice. The percentage of willingly answered questions will vary with the significance level $\\\\alpha$, the allowable probability $\\\\delta$, the score function, the base model, and the dataset. One could adjust the significance level to balance between being conservative and aggressive in answering questions.\n\nHere we provide an answer rate analysis of our method at different significance levels, compared with baselines, which has been added to our modified paper in Sec E.3.\", \"table\": \"The Answer Rate and Accuracy Performance (%) of FactTest-t. 
The number in parenthese is the percentage of willingly answered questions.\\n\\n\\n| Dataset | Model | Finetuned | R-Tuning |FactTest-t ($\\\\alpha$=0.15) | FactTest-t ($\\\\alpha$=0.1) | FactTest-t ($\\\\alpha$=0.05) |\\n| -------- | -------- | --- | --- | --- | --- | -------- |\\n| ParaRel | OpenLLaMA-3B |61.73 ( 100% ) | 87.42 ( 37% ) | 89.91 ( 46% ) |92.73 ( 31% )| 94.26 ( 17% ) |\\n| | LLaMA-7B | 67.73( 100% ) | 89.65 ( 42% ) | 92.76 ( 47% ) | 95.04 ( 31% ) | 96.01 ( 18% ) |\\n| FEVER | OpenLLaMA-3B | 65.56 ( 100% ) | 67.19 ( 11% ) | 92.58 ( 38% ) | 94.88 ( 36% ) | 97.82 ( 33% ) |\\n| | LLaMA-7B | 66.24 ( 100% ) | 66.19 ( 49% ) | 95.41 ( 28% ) | 95.83 ( 24% ) | 96.79 ( 16% ) |\\n\\nThe findings demonstrate that **FactTest consistently achieves higher accuracy while effectively managing the answer rate through varying significance levels**. Specifically, FactTest-t with $\\\\alpha=0.15$ answers 47% questions on ParaRel and acheives 92.76% accuracy, outperforming R-Tuning, which answers 42\\\\% of the questions with an accuracy of 89.65\\\\%. Similarly, FactTest-t maintains superior accuracy performance on FEVER compared to baseline models while managing the answer rate through different significance levels.\\n\\n> ***W9-3: (Experiments) Table 1 must also report the accuracy of the pretrained model on the subset of questions answered willingly in the FactTest experiments.***\\n\\n**R9-3:** Thank you for your question, but there may be a misunderstanding. The accuracies of the pretrained models on the subset of willingly answered questions **are the results of FactTest**. It can be applied to all kinds of LMs including pretrained models and instruction-tuned models(e.g. Tulu, Llama-Instruct) and identifies the questions that the LM cannot provide correct answers. \\nHere we additionally present the accuracy of pretrained models on the subset of questions that the model is unwilling to answer on ParaRel using FactTest-kle15 to supplement the results in main text, which has been added to our updated PDF in Sec E.6. The $\\\\alpha$ is set to 0.1.\\n\\n| Model | Pretrained | Unwilling | Willing |\\n| ------------- | ---------- | --- | --------- |\\n| Openllama-3B | 36.66 | 27.90 | 75.51 |\\n| Openllama-7B | 40.38 | 32.93 | 75.36 |\\n| Openllama-13B | 42.21 | 32.81 | 79.55 |\\n\\n> ***W9-4: (Experiments) For $\\\\alpha$=0.05, the Type-2 error shown in Figure 2 does not appear to be controlled.***\\n\\n**R9-4:** Thank you for your question. We acknowledge that in some instances, the Type II error may not appear to be adequately controlled. However, this will not violate what we prove in our Type II error control analysis, which is based on the premise that the score function effectively measures the correctness of the generated answers. We have theoretically established that the optimal classifier for minimizing Type II error, given a constraint on Type I error, adopts a thresholding rule based on an oracle score. In practice, since the oracle score is inaccessible, we rely on a certainty function to approximate it. Our theoretical guarantees assert that if this score function approximates the oracle score well, up to an increasing transformation, then the Type II error will be effectively controlled. 
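One way to sanity-check this in practice is to tabulate both empirical error rates of the thresholded rule on a labelled held-out split; the sketch below does so on synthetic scores and is only a diagnostic illustration, not the paper's evaluation protocol.

```python
import numpy as np

def empirical_errors(scores, labels, threshold):
    """Empirical check of both error rates on a labelled held-out set.

    labels: 1 if the generated answer is correct, 0 otherwise.
    Type I error : fraction of incorrect answers the test fails to reject.
    Type II error: fraction of correct answers the test rejects.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    answered = scores > threshold
    type1 = answered[labels == 0].mean() if (labels == 0).any() else 0.0
    type2 = (~answered[labels == 1]).mean() if (labels == 1).any() else 0.0
    return type1, type2

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)
scores = np.where(labels == 1, rng.beta(5, 2, 2000), rng.beta(2, 5, 2000))
print(empirical_errors(scores, labels, threshold=0.6))
```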
More critically, if the score function fails to accurately assess the correctness of the generated answers, though our Type I error control holds for any score functions, the Type II error control may falter.\"}", "{\"title\": \"General Response\", \"comment\": [\"Dear Reviewers,\", \"We sincerely thank the reviewers for their time, insightful reviews and constructive suggestions. Overall, it is heartening to note that most of the reviewers found our work to be well motivated(roCe, sEhD), experimental solid (roCe, sEhD) and well-written(roCe, sEhD). To clarify some potential misunderstandings of our paper, we first address some shared concerns of reviewers:\", \"**FactTest's novelty:** We appreciate the recognition of the novelty in our approach. Unlike traditional conformal prediction (including PAC-style) and conformal language modeling, which focus on **generating prediction sets** that contain the true outcome and provide **coverage guarantees**, FactTest is specifically designed to **identify and filter out incorrect responses** from large language models (LLMs) while **providing Type I error guarantees**. Additionally, while power analysis for conformal prediction primarily concentrates on **the sizes of prediction sets**, our framework emphasizes **the study of Type II errors**. The novelty of FactTest can be summarized as follows:\", \"We are **the first** to formulate hallucination detection as a hypothesis testing framework to enforce an upper bound of Type I errors at user-specified significance levels in a finite-sample and distribution-free manner.\", \"Motivated by Neyman-Pearson classification, instead of constructing prediction sets for the correct answers using conformal prediction, we propose an one-sided hypothesis testing for the incorrectness of answers.\", \"Unlike conformal prediction where all samples are used equally, we prioritize detecting incorrect answers and **utilize only incorrect samples in the calibration data**.\", \"We also provide detailed analysis for Type II error control and **derive the optimal score function**.\", \"**Additional experiments:** According to reviewer Fj97, we have included additional experiments using **more base models** (e.g., Mistral, Llama3.1-Instruct, Tulu2), **more closed-source models** (e.g., Claude, Gemini, GPT-4o), **answer rate analysis**, **more score functions** (KLE of 5 and 10-generation variants), **more baselines** (e.g., SelfCheckGPT) and more Type II error analysis. Notably, any uncertainty quantification method for hallucination detection could be integrated in our framework to serve as the score function and provide correctness guarantees. Though Reviewer roCe raised that we should include PAC conformal prediction as our baseline, we should clarify that **the goal of PAC conformal prediction is different from our factuality testing**, which is not a feasible baseline to compare with.\", \"**Paper clarity and related works:** Thanks to the suggestions of reviewer Fj97 and sEhD, we have corrected the typos, and revised our writings regarding hallucination, correctness and certainty in our main text. Besides, we have added more prior works about calibration of confidence scores in LLMs and conformal prediction in our updated related works.\"]}", "{\"comment\": \"Thanks to the authors for their response. I believe that FactTest-cls is a good addition to the paper and helpful to mitigate some of my concerns. 
I still do not understand why the authors say that \\\"assessing correctness is inherently challenging\\\" when all QA benchmarks are about factuality and some of the related frameworks for statistical guarantees for LLMs (e.g., QuaCer-C from the paper) also evaluate response correctness, rather than uncertainty.\\n\\nAbout the iid assumption, the QuaCer-C paper from the related works section seems to be tackling a similar problem without any iid assumptions. Hence, I am not convinced about the utility of the guarantees of this work over those of QuaCer-C, which in my understanding can be extended to this particular setting of factuality. \\n\\nOverall, I think that this paper needs more work. The results shown in the rebuttal and further discussions are promising, but the claims need to be made more formal and informative, with proper specification of their scope. The original submission had numerous major statements and claims that I highlighted in my review that the authors have reverted now. Hence, I believe this paper needs another revision, consisting of proper positioning of the paper and its methods, before acceptance.\"}", "{\"metareview\": \"This paper proposes a novel strategy for providing statistical guarantees on LLMs for question answering that leverages statistical hypothesis testing techniques to provide guarantees of the form \\\"if the LLM answers the question, then it is correct with high probability\\\". One of the key issues with the paper is its novelty; it is very closely related to the PAC-style conformal prediction literature. While the authors have improved the discussion of the connection in their paper, significant concerns remain.\\n\\nOne specific point of contention is the distinction between constructing prediction sets (the goal in conformal prediction) vs. abstaining from answering the question (the authors' goal). While the authors argue that they are different, their difference is overstated. In particular, the latter problem can be cast in the conformal prediction framework by instead considering a prediction set around a binary classification model designed to predict whether the LLM's answer is correct; then, the LLM answer is only provided if this model outputs {1} (instead of either {0} or {0,1}). There are two important caveats. First, the authors' guarantees are conditioned on the class label (y=0 or y=1); however, class-conditional variants of conformal prediction already exist (and are straightforward modifications; just construct the prediction set for each class separately). Second, the conformal prediction guarantee is slightly stronger than necessary (since it also provides guarantees for when the LLM answer is definitely wrong, which is irrelevant for the authors' problem). Thus, I expect the authors' approach to somewhat outperform the naive application of conformal prediction that I outlined above.\\n\\nOverall, I agree with the reviewers that a more rigorous comparison to conformal prediction (both theoretically and empirically) is important. I believe re-positioning the paper to account for this connection would significantly strengthen the submission.\\n\\nFinally, the authors might also consider discussing the following paper leveraging conformal prediction for question answering:\\n\\nShuo Li, Sangdon Park, Insup Lee, Osbert Bastani. TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction. 
In NAACL, 2024.\\n\\nLike the other papers shared on conformal prediction, its focus is on prediction sets rather than abstention, but it is the closest related work that comes to mind.\", \"additional_comments_on_reviewer_discussion\": \"There was significant discussion\\u00a0during the rebuttal period, and while some of the reviewers' concerns were addressed, some broader concerns remain.\"}", "{\"summary\": \"This paper offers a hypothesis testing framework to control the Type I error, while also showing that the Type II remains sufficiently low. The results hold for general binary classification tasks, and they are related to standard conformal prediction and calibration results. The authors frame their work in the context of LLM hallucination detection, and employ relevant scoring functions from the literature for this task. While the theory holds in distribution, they also provide an extension to out-of-distribution via density ratio estimation.\\n\\nThe methodology is justified by theoretical results. In practice, the method seems relatively simple to use, and it appears to provide notable improvements over baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well written and theoretically justified. Experiments seem to support the theoretical claims.\", \"weaknesses\": \"In the conformal prediction literature, density ratio estimation is a fairly standard way to extend in-distribution guarantees to out-of-distribution. Yet, these estimates tend to be unreliable. Would be interested for the authors to comment on the robustness of their method with respect to the density ratio estimates.\", \"questions\": [\"Is there a typo in the legend of Figure 1? The ve10 and ve15 results are repeated twice.\", \"I feel the authors could bring a stronger connection with the calibration and conformal prediction literature. The UQ of LLMs paragraph in the work related section may mention previous work on calibration of confidence scores in LMMs.\", \"The threshold selection approach used in the paper essentially corresponds to a conformalized quantile regression. This helps achieving a marginal (on average over all the data) guarantee on the Type I error. However, as the error may not be homogeneous over different questions, one may wonder if ensuring a marginal guarantee is, in fact, sufficient. I wonder if the authors have experienced different error magnitudes across different segments of the datasets.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Fj97 (5)\", \"comment\": \"> ***W8: The proof for $\\\\tilde{\\\\mathcal{D}}_0 \\\\mid \\\\mathcal{I} \\\\sim P_0$. What is meant by the notation: $\\\\tilde{\\\\mathcal{D}}_0 \\\\mid \\\\mathcal{I}$, specifically, the conditioning?***\\n\\n**R8:** Thank you for your suggestion. We have added the proof in our revision.\\n\\nRecall that $\\\\mathcal{I}$ is an index set determined by the density ratios $w(q_i^{(0)},M(q_i^{(0)})$ and independent uniform random variables $U_i$. After determining the indices $\\\\mathcal{I}$, for each index $i$, we can show that given $i$ is in $\\\\mathcal{I}$, $(q_i^{(0)},M(q_i^{(0)}))$ follows $P_{q,M(q)|y=0}$. Consequently, given the index set $\\\\mathcal{I}$, the samples with indices in $\\\\mathcal{I}$ are i.i.d. 
samples from $P_{q,M(q)|y=0}$, i.e., $\\\\tilde{\\\\mathcal{D_0}}|\\\\mathcal{I}\\\\overset{i.i.d.}{\\\\sim}P_{q,M(q)|y=0}$.\\n\\nNow we provide a proof for $(q,M(q))|\\\\lbrace U\\\\le w(q,M(q))\\\\rbrace \\\\sim P_{q,M(q)|y=0}=P_0$ with $(q,M(q))\\\\sim\\\\tilde P_0$. For any measurable set $C\\\\subset\\\\mathcal{Q}\\\\times\\\\mathcal{A}$, the conditional distribution of $(q,M(q))|\\\\lbrace U\\\\le w(q,M(q))\\\\rbrace$ can be expressed as\\n\\\\begin{align}\\n& \\\\mathbb{P} ((q,M(q))\\\\in C|U\\\\le w(q,M(q))) \\\\\\\\\\\\\\\\\\n= & \\\\frac{\\\\mathbb{P}((q,M(q))\\\\in C, U\\\\le w(q,M(q)))}{\\\\mathbb{P}(U\\\\le w(q,M(q)))} \\\\\\\\\\\\\\\\\\n= & \\\\frac{\\\\mathbb{E}\\\\frac{w(q,M(q))}{B}\\\\mathbb{I}((q,M(q))\\\\in C)}{\\\\mathbb{E}\\\\frac{w(q,M(q))}{B}} \\\\\\\\\\\\\\\\\\n= & \\\\mathbb{P}\\\\_{(q,M(q))\\\\sim P_{q,M(q)|y=0}}((q,M(q))\\\\in C),\\n\\\\end{align}\\nwhere we have use the facts that $\\\\mathbb{P}(U\\\\le w(q,M(q))|q,M(q))=\\\\frac{w(q,M(q))}{B}$, $\\\\mathbb{E}\\\\_{(q,M(q))\\\\sim \\\\tilde P_0}w(q,M(q))=1$ and $\\\\mathbb{E}\\\\_{(q,M(q))\\\\sim\\\\tilde P_0}w(q,M(q))\\\\mathbb{I}((q,M(q))\\\\in C)=\\\\mathbb{P}_{(q,M(q))\\\\sim P_0}((q,M(q))\\\\in C)$.\\n\\n> ***W9-1: (Experiments)The evaluation is just on Llama models and GPT-4o-mini.***\\n\\n**R9-1:** Thank you for your advice. Due to the time and resource limits, we only included Llama models and GPT-4o-mini before. We have now added the experiments on Mistral and other closed-source models including GPT-4o, Gemini and Claude. The results are shown as follows and we have included these new results into our paper(See Sec.E.4 in our revised PDF). Hope that these new experiment results can help better understand the efficacy of the method.\", \"table_1\": \"The accuracy performance of FactTest on four question-answering datasets using Mistral-7B as the base model. The significance level for FactTest is set to 0.1. The percentages inside the parentheses are the Type I error.\\n\\n| Dataset | Pretrained | SelfCheckGPT-NLI | FactTest-ve15 | FactTest-se15 | FactTest-kle15 |\\n| -------- | ---------- | --- | ------------- | ------------- | -------------- |\\n| ParaRel | 39.79 | 57.01 (0.25) | 65.63 (0.07) | 70.20 (0.08) | 72.78 (0.08) |\\n| HotpotQA | 36.48 | 46.01 (0.46) | 61.81 (0.06) | 63.06 (0.05) | 65.59 (0.05) |\\n| FEVER | 35.47 | 41.76 (0.05) | 22.99 (0.08) | 51.05 (0.08) | - |\\n| WiCE | 55.85 | 56.24 (0.47) | 68.81 (0.08) | 68.64 (0.08) | - |\", \"table\": \"The accuracy performance of FactTest on ParaRel using llama 7B as open-source model. The significance level is set to 0.1. The percentages inside the parentheses are the Type I error.\\n| Model | Base | SelfCheckGPT-NLI | FactTest-se15 | FactTest-kle15 |\\n| ------ | ----- | ---------------- | ------------- | -------------- |\\n| Claude-3.5-Sonnet | 58.25 | 58.96 (0.92) | 73.29 (0.08) | 79.86 (0.08) |\\n| Gemini-1.5-Flash-8B | 64.23 | 65.92 (0.86) | 76.87 (0.07) | 80.01 (0.08) |\\n| GPT-4o | 66.39 | 69.71 (0.83) | 80.70 (0.07) | 82.76 (0.08) |\"}" ] }
BVACdtrPsh
MCTBench: Multimodal Cognition towards Text-Rich Visual Scenes Benchmark
[ "Bin Shan", "Xiang Fei", "Wei Shi", "An-Lan Wang", "Guozhi Tang", "Lei Liao", "Jingqun Tang", "Xiang Bai", "Can Huang" ]
The comprehension of text-rich visual scenes has become a focal point for evaluating Multi-modal Large Language Models (MLLMs) due to their widespread applications. Current benchmarks tailored to this scenario emphasize perceptual capabilities, while overlooking the assessment of cognitive abilities. To address this limitation, we introduce a $\textbf{M}$ultimodal benchmark towards $\textbf{T}$ext-rich visual scenes, to evaluate the $\textbf{C}$ognitive capabilities of MLLMs through visual reasoning and content-creation tasks ($\textbf{MCTBench}$). To mitigate potential evaluation bias from the varying distributions of datasets, MCTBench incorporates several perception tasks (e.g., scene text recognition) to ensure a consistent comparison of both the cognitive and perceptual capabilities of MLLMs. To improve the efficiency and fairness of content-creation evaluation, we construct an automatic evaluation pipeline. Evaluations of various MLLMs on MCTBench reveal that, despite their impressive perceptual capabilities, their cognitive abilities require enhancement. We hope MCTBench will offer the community an efficient resource to explore and enhance cognitive capabilities towards text-rich visual scenes.
[ "Multimodal Benchmark", "MLLM", "OCR", "Cognition", "perception" ]
Reject
https://openreview.net/pdf?id=BVACdtrPsh
https://openreview.net/forum?id=BVACdtrPsh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sA8XT0hc15", "r2EbMQftk8", "kAAnhx9DUS", "jMuEiUqEWt", "bq8XkeGl0R", "5yKljTlHBm" ], "note_type": [ "official_review", "official_review", "decision", "official_review", "meta_review", "official_review" ], "note_created": [ 1729574943109, 1730104794051, 1737524024663, 1730660303054, 1734698780990, 1730199818543 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10080/Reviewer_18nb" ], [ "ICLR.cc/2025/Conference/Submission10080/Reviewer_WzK4" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10080/Reviewer_9uor" ], [ "ICLR.cc/2025/Conference/Submission10080/Area_Chair_9ife" ], [ "ICLR.cc/2025/Conference/Submission10080/Reviewer_Lfto" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces MCTBench, a novel benchmark, with 8.5k QA pairs, aiming at evaluating the cognitive abilities of VLMs in text-rich visual scenes through visual reasoning and content creation tasks. Using GPT-4V to assist annotators in improving data quality and evaluate content creation efficiently. Several experiments on 18 VLMs are provided and pointing out that text-enhanced VLMs trained for different types of tasks may lose some creative capabilities of content creation.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. MCTBench fills an important gap by introducing content creation task to evaluate cognitive capabilities of VLMs in text-rich scenes.\\n2. Authors provide a large, diverse and high-quality human annotated dataset containing perception, reasoning, and content creation tasks.\\n3. The testing results on 18 VLMs inform readers that there is still room for improvement in reasoning tasks for current VLMs and in content creation tasks, text-enhanced VLMs trained for different types of tasks may lose some creative capabilities.\", \"weaknesses\": \"1. More newer VLMs, such as Gemini 1.5 Pro (Feb. 2024), InternVL1.5-Chat (Apr. 2024), GPT-4o (May 2024) and Claude 3.5 Sonnet (Jun. 2024) should be considered.\\n2. The reliability of automated evaluation using GPT-4V is questioned.\\n3. The paper lacks further insightful analyses, such as the impact of the resolution of source images on the results, the impact of different language decoders on the results of content creation task.\", \"questions\": \"1. In Table 2, the performance difference between strong models, like GPT-4V and weaker models, like LLaVA1.5-13B is minimal on reasoning tasks. What could be the cause of this result?\\n2. Is the 79.38 accuracy for GPT-4V to evaluate on content creation task higher enough to replace humans? Could you provide the accuracy of human evaluation?\\n3. Authors are encouraged to provide results on some of the latest models, such as the InternVL2 series (2024/07/04), and the Qwen2-VL series (2024/08/30). While the results of these models are not mandatory under the guidelines, considering the super-fast advancements in VLMs this year. Could you please include results from some of the aforementioned models to highlight the performance of the latest generation of VLMs?\\n4. Refer to Weaknesses 3.\\n5. There is no content in Section 3.3 Data Construction.\\n6. Figure 5 is in Reference. 
Authors are encouraged to reformat it.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper present MCTBench, a new multimodal benchmark designed to evaluate the cognitive abilities of MLLMs through visual reasoning and content creation tasks. MCTBench includes perception tasks and employs an automated evaluation process for content creation, revealing that while MLLMs exhibit strong perceptual skills, their cognitive abilities need improvement. This benchmark aims to provide a valuable tool for the community to advance cognitive capabilities in processing text-rich visual scenes.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. This paper broadens the scope of OCR ability of MLLMs, rather than conventional OCR tasks and current MLLM benchmarks.\\n2. The benchmark is large-scale and human-annotated, make the benchmark valid and reliable.\", \"weaknesses\": \"1. Paper is poorly formatted.\\n2. Paper lacks details.\\nSee questions below.\", \"questions\": \"1. The paper is poorly written, with many citation format errors. Section 3.3 is incomplete, and on page ten, there is a figure inserted in the middle of the references section. Additionally, there is no appendix provided.\\n2. Many details are not clearly explained. For example, the content-creation task lacks sufficient explanation, and the prompts used are not detailed.\\n3. While the paper claims to \\\"provide a broader evaluation of cognition in text-rich visual scenes,\\\" this is only reflected in the word cloud, lacking other relevant support. For instance, under the reasoning task, it is unclear how OCR capabilities are subdivided into fine-grained categories, nor is there a comparison with other benchmarks at a fine-grained level. It also remains unclear which aspects are covered by existing benchmarks and which are not, and if there are any additional examples for OCR abilities not covered by current benchmarks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces MCTBench, a benchmark for evaluating the cognitive abilities of multimodal large language models (MLLMs) in text-rich visual scenes. MCTBench includes two main task types: reasoning tasks for understanding scenes and open-ended content-creation tasks for generating responses. It also incorporates perception tasks to differentiate them from cognitive tasks, minimizing bias from dataset variations.\\nThe benchmark compiles about 5.2k images and 8.5k annotated question-answer pairs across three categories: perception, reasoning, and content creation. 
Perception and reasoning tasks use multiple-choice formats for easy assessment, while an automated evaluation system, using advanced MLLMs like GPT-4V, is set up for content creation due to the challenges of subjective human evaluation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1\\uff09MCTBench provides a thorough assessment of both reasoning and content-creation capabilities in MLLMs, offering a well-rounded evaluation framework.\\n2\\uff09By using advanced MLLMs for automated evaluation, the benchmark reduces the need for costly and subjective human assessments in content creation tasks.\\n3\\uff09By distinguishing between perception and cognitive tasks, MCTBench helps identify specific areas where MLLMs need improvement.The finding that larger models perform better in cognitive tasks provides valuable guidance for future model development and scaling strategies.\", \"weaknesses\": \"1\\uff09Incomplete paper with no content in section 3.3\\n2\\uff09Segmenting cognitive abilities into reasoning and content generation may not be enough, and a sufficiently fine-grained benchmark would require a more precise segmentation of the data\\n3\\uff09Automated evaluations have improved efficiency, but their accuracy and consistency with manual evaluations need further validation\", \"questions\": \"1) Complete the missing section 3.3 in the paper. What is the specific data construction process in Section 3.3? Can you provide more details about data stratification, annotation, and preprocessing so that other researchers can replicate the MCTBench data preparation process?\\n2) Can cognitive abilities be further subdivided beyond reasoning and content generation? For example, is it possible to incorporate more refined categories such as logical reasoning, contextual understanding, and cross-modal reasoning to more accurately assess the cognitive abilities of different models?\\n3) How consistent are automated rating systems with human ratings? Can comparative experiments be conducted to analyze the reliability of automatic scoring in different types of generation tasks and clarify its accuracy in different evaluation dimensions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents MCTBench, a new multimodal benchmark designed to evaluate the cognitive abilities of MLLM through visual reasoning and content generation tasks.\\n\\nThe strengths of this paper include the introduction of an automatic evaluation pipeline to improve the efficiency of the content generation task, and the distinction between perceptual and cognitive tasks to identify specific areas where MLLM needs improvement.\\n\\nHowever, the paper is not well formatted, with section 3.3 incomplete and missing text, and figures inserted in the middle of the references section. In addition, the concerns of all reviewers were not addressed due to the lack of a rebuttal by the authors.\\n\\nThus, all reviewers gave negative reviews. There is no reason to overturn the decisions of the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"There are no rebuttals and discussions.\"}", "{\"summary\": \"This paper introduced MCTBench, a comprehensive benchmark designed to evaluate the cognitive capabilities of MLLMs in text-rich visual scenes. 
The MCTBench comprises 5.2k images and 8.5k question-answer pairs, covering a range of tasks including reasoning, content creation for cognitive assessment, and conventional perception.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper collects a large-scale benchmark for evaluating the cognitive capabilities of MLLMs, where reasoning and content-creation abilities are highlighted.\\n2. For the content-creation task, an automated evaluation pipeline is introduced to enhance efficiency.\", \"weaknesses\": \"1. The paper introduces Content Creation as a new evaluation component, but it could benefit from a clearer explanation of the necessity and value of this addition for assessing cognitive abilities. Furthermore, the rationale behind dividing cognitive tasks into \\u201creasoning\\u201d and \\u201ccontent creation\\u201d would be strengthened with additional justification for this categorization.\\n2. The paper suggests that MLLMs require improvements in cognitive capabilities within text-rich visual scenes. However, the results presented do not entirely support this conclusion, as cognitive scores do not show a substantial decrease compared to perceptual scores. Since cognition often builds on perception, the separation of these tasks across different data samples may seem too rigid. Evaluating perception and cognition on the same images could better capture their relationship and provide clearer insights into how MLLMs leverage perceptual understanding for reasoning.\\n3. The automatic evaluation approach could be better supported by further improvements and a comparison to prior evaluation methods. Specifically, with a Pearson correlation of only 0.558 against human judgment, this score may be insufficient to fully validate the reliability of the automated approach. A higher correlation score would likely provide stronger validation.\", \"questions\": \"In Table 4, the \\u201cImage (text regions removed)\\u201d row shows a perception score of 62.22. Given that this benchmark is designed for text-rich scenes, one would expect perception tasks to be highly challenging, if not impossible, without text information. Could you clarify:\\n1. How was this score achieved despite the absence of text?\\n2. If accurate, does this suggest that certain benchmark questions may not fully align with the intended text-rich focus?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
BV84FICIAM
Energy-Based Conceptual Diffusion Model
[ "Yi Qin", "Xinyue Xu", "Hao Wang", "Xiaomeng Li" ]
Diffusion models have shown impressive sample generation capabilities across various domains. However, current methods are still lacking in human-understandable explanations and interpretable control: (1) they do not provide a probabilistic framework for systematic interpretation. For example, when tasked with generating an image of a "Nighthawk", they cannot quantify the probability of specific concepts (e.g., "black bill" and "brown crown" usually seen in Nighthawks) or verify whether the generated concepts align with the instruction. This limits explanations of the generative process; (2) they do not naturally support control mechanisms based on concept probabilities, such as correcting errors (e.g., correcting "black crown" to "brown crown" in a generated "Nighthawk" image) or performing imputations using these concepts, therefore falling short in interpretable editing capabilities. To address these limitations, we propose Energy-based Conceptual Diffusion Models (ECDMs). ECDMs integrate diffusion models and Concept Bottleneck Models (CBMs) within the framework of Energy-Based Models to provide unified interpretations. Unlike conventional CBMs, which are typically discriminative, our approach extends CBMs to the generative process. ECDMs use a set of energy networks and pretrained diffusion models to define the joint energy estimation of the input instructions, concept vectors, and generated images. This unified framework enables concept-based generation, interpretation, debugging, intervention, and imputation through conditional probabilities derived from energy estimates. Our experiments on various real-world datasets demonstrate that ECDMs offer both strong generative performance and rich concept-based interpretability.
[ "Interpretability", "Concepts", "Diffusion Model", "Energy-Based Model", "Generative Model" ]
Reject
https://openreview.net/pdf?id=BV84FICIAM
https://openreview.net/forum?id=BV84FICIAM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xPMxqGHYQG", "xIDt77pfEq", "tUlgy7so7s", "sVsQF8krsq", "gX8PCWHAlY", "gM0NLNVM3Z", "f35JMmhok0", "cbZciqq6yI", "cYZmRZg2yF", "Zi32JmYZFp", "ZWP5GXLWSU", "UmHrhvtuLy", "TXnU93iirN", "RqfaVDE2rX", "QMDM7XHVM6", "JWH4wjxGxH", "Gu1QrxUPnF", "GgzBXRaiHo", "Di1jhRHymc", "95cfRnRlZ8", "8jmWqfOES7", "8hAHjUNzHn", "8WrFVxUAUe", "7qpy5yJPxn", "6ndyzlgQbn", "3NlTmDCJRd", "2QPGAyUDmh", "2HPfuQwnmr", "0jsxuS77QH" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment" ], "note_created": [ 1731997152538, 1731997366110, 1731997334067, 1732728245889, 1731996930719, 1737523558839, 1731997478504, 1732246317485, 1731997409730, 1731997022006, 1731997567555, 1730758452060, 1730702269991, 1732246173955, 1732728134206, 1732246253973, 1731997524521, 1732666353752, 1732728046975, 1730470413593, 1733180206002, 1732547024248, 1732246211639, 1731997267493, 1732727681909, 1731997067750, 1734308600853, 1730705687200, 1732727951480 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_AYc9" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_grAS" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_AYc9" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_5CJZ" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_grAS" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_5CJZ" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ], [ "ICLR.cc/2025/Conference/Submission3150/Area_Chair_Ak3f" ], [ "ICLR.cc/2025/Conference/Submission3150/Reviewer_oJRi" ], [ "ICLR.cc/2025/Conference/Submission3150/Authors" ] ], "structured_content_str": [ "{\"title\": \"[1/2] Thank you for your encouraging and valuable comments.\", \"comment\": \"Thank you for your encouraging and valuable comments. We are glad that you found our method ```\\\"novel\\\"``` and has ```\\\"multiple practical applications\\\"```, our theoretical framework ```\\\"comprehensive\\\"``` ```\\\"with detailed proofs\\\"```, our empirical results ```\\\"strong\\\"```, and our improvement ```\\\"clear\\\"```. 
We will address each of your comments in turn below.\n\n**W1. Acknowledging pioneering work on energy-based diffusion models and works using EBM as compositions.**\n\nThank you for pointing us to these interesting papers. Following your suggestions, we have cited and discussed these pioneering papers in our revision (in the related works section). \n\nWe recognize the importance of the works [1, 2, 3, 4] you have mentioned. For example, in the pioneering works on energy-based diffusion models, i.e., Diffusion Recovery Likelihood and its extension [1, 2], they trained EBMs on diffusion recovery likelihood to facilitate training and sampling on high-dimensional datasets. \n\nWe also note that for these works, \n(1) The number of supported concepts is fixed and limited (e.g., only $6$ concepts, compared to $112$ concepts in our ECDM), and hence not sufficiently informative as interpretations. \n(2) More importantly, these works aim at compositional generation with deterministic concepts and, therefore, fail to provide a probabilistic interpretation, which is the focus of our ECDM. \nTherefore, these methods are *not applicable to our setting*. \n\nIn contrast, our ECDM explicitly considers human-understandable probabilistic concept explanations in its design by jointly modeling the input instruction $\\mathbf{y}$, associated concepts $\\mathbf{c}$, and the generated image $\\mathbf{x}$ during the generation process within a unified energy-based framework.\n\n\n## For Questions:\n\n**Q1. How does the method scale with increasing number of concepts?**\n\nThank you for mentioning this. Our model is efficient and scales linearly with the number of concepts in terms of computation and the number of model parameters. \n\nFor example, when the concept number is $K=6$, the parameter size is 27.57 M, excluding all frozen pretrained components, and when $K=112$, the parameter size is 110.99 M. Note that the computational cost and number of parameters for all frozen pretrained components are fixed (i.e., constant). \n\nWe have included a scaling analysis in **Figure 8 of the updated Appendix D**. We can see that the number of model parameters scales linearly with the number of concepts. \n\n\n**Q2. What is the computational overhead compared to standard diffusion models?**\n\nThank you for mentioning this. \n\n**Mapping Energy Network.** \nWe sample from the mapping energy network using the Gradient Inference technique, as outlined in ECBM [5]. This sampling procedure requires approximately 10 to 30 steps, taking around 10 seconds of wall-clock time. \n\n**Concept Energy Network.** \nFor the generative concept energy network, we model the diffusion model as an implicit representation of the energy function, making the diffusion model sampling algorithm applicable to our framework. We utilize the standard diffusion sampling algorithm (i.e., DDIM [6]) to generate an image from the concept energy network. This process involves approximately 50 steps and takes around 3 seconds of wall-clock time when using an NVIDIA RTX 3090. Therefore, the computational overhead remains comparable to that of standard diffusion models.\n\n\n**Q3. Could the framework be extended to handle continuous concept values rather than binary?**\n\nThis is an insightful question and points to an interesting extension of our ECDM. 
\n\n+ **Our framework naturally supports normalized continuous-valued concepts.**\nFor example, by normalizing the continuous concept value to the range of $[0,1]$, this value can directly substitute for the concept probability $c_k$ (already a real, continuous number in the range of $[0, 1]$) used for mixing the positive/negative concept embeddings, and can thus be integrated into our framework. \n+ **Our framework can be further extended to support unnormalized continuous-valued concepts.** \nFor example, we can learn a *unit* concept embedding $\\mathbf{e}_k \\in \\mathbb{R}^d$ that represents the unit value of a certain concept, and a continuous concept magnitude $c_k \\in \\mathbb{R}$ that represents the actual magnitude of the concept. With $\\mathbf{e}_k$ and $c_k$, we can then replace the final concept embedding (in Lines 232-238) $\\mathbf{v}_k = c_k \\cdot \\mathbf{v}_k^{(+)} + (1-c_k) \\cdot \\mathbf{v}_k^{(-)}$ with $\\mathbf{v}_k = c_k \\cdot \\mathbf{e}_k$. All other components of our ECDM can remain unchanged.\n\nWe agree that extending our method toward continuous concepts would be interesting future work, and we have included the discussion above in our revised paper (Appendix E.2).\"}", "{\"title\": \"[2/3] Thank you for your encouraging and valuable comments.\", \"comment\": \"**W3. Code Availability.**\\n\\nThank you for your interest in our code. We assure you that we will make our code open-source and available to the wider research community after acceptance, thereby facilitating the reproducibility of our results. So far, we have finished cleaning up the source code and will release it if the paper is accepted. \\n\\n**W4. The experiments rely on a fixed pretrained stable diffusion model, while other models are not explored.**\\n\\nThis is a good question. Our model does not assume any specific model architecture and is therefore compatible with any pretrained diffusion model. \\n\\nWe follow the convention in the field to choose Stable Diffusion as the base model because it is the most commonly used pretrained large text-to-image diffusion model. Thousands of works have used Stable Diffusion as their only base model [3, 4, 5]. Note that a lot of open-source pretrained diffusion models are actually finetuned from the Stable Diffusion model. Therefore, we believe the results are representative and generalize across different pretrained models. \\n\\nNonetheless, we are happy to further validate our framework if you have any specific open-source pretrained diffusion model in mind. We would be very happy to include additional results before the discussion period ends in late November. \\n\\n\\n**W5. Precise regional control and computational expense.**\\n\\n\\nThank you for mentioning this. As mentioned in our **Conclusion and Limitations** section, precise regional control and computational expense are two of our ECDM's limitations. \\n\\n**Precise Regional Control.** Enhancing precise regional control in concept-based editing is indeed an intriguing direction for future work. This challenge could potentially be addressed by incorporating attention-based regional editing techniques, such as prompt-to-prompt [6]. This is definitely interesting future work, but is out of the scope of our paper. \\n\\n**Computational Cost.** We would like to clarify that our ECDM introduces low computational overhead compared to existing methods. 
Specifically: \\n+ **Mapping Energy Network.** We sample from the mapping energy network using the Gradient Inference technique, as outlined in ECBM [7]. This sampling procedure requires approximately 10 to 30 steps, taking around 10 seconds of wall-clock time. \\n+ **Concept Energy Network.** For the generative concept energy network, we model the diffusion model as an implicit representation of the energy function, making the diffusion model sampling algorithm applicable to our framework. We utilize the standard diffusion sampling algorithm (i.e., DDIM [8]]) to generate an image from the concept energy network. This process involves approximately 50 steps and takes around 3 seconds of wall-clock time when using an NVIDIA RTX 3090. Therefore, the computational overhead remains comparable to that of standard diffusion models.\\n\\nWe agree that accelerating the sampling process of our framework is another promising area for future research and could potentially be addressed by adapting variational inference methods. However, we decided to leave this to future work, as it is beyond the scope of this paper.\"}", "{\"title\": \"[1/3] Thank you for your encouraging and valuable comments.\", \"comment\": \"Thank you for your valuable comments. We are glad that you found our model ```\\\"is a valuable tool\\\"```/```\\\"supports both generative and interpretive tasks\\\"``` and that our ECDM ```\\\"shows quantitative improvements in image quality and concept alignment\\\"```. Below we address your questions one by one.\\n\\n**W1. Comparisons with some related works (COMET or CBGM).**\\n\\nThank you for mentioning this. We would like to clarify that our setting focuses on concept-based generation and interpretation given a pretrained large diffusion model. Therefore CBGM [1] and COMET [2] (both cited in our paper) are *not applicable* in this setting. Specifically:\\n\\n+ CBGM [1] involves training a new diffusion model *from scratch* using a modified Diffusion UNet. In contrast, we focus on augmenting an *existing* pretrained large diffusion model (e.g., Stable Diffusion) to enable concept-based generation, intervention, and interpretation. Therefore CBGM is *not applicable* to our setting. \\n+ CBGM [1] is *not* an energy-based model, which is different from our ECDM.\\n+ In this paper, we focus on the text-to-image generation setting, where the input is free-form text, and the output is an image. CBGM [1] is a conditional diffusion model that takes class labels as input. Therefore CBGM is *not applicable* to our setting. \\n+ COMET [2] is an unsupervised, unconditional diffusion model that does not take any input (neither class labels nor text). Therefore COMET is *not applicable* to our setting either.\\n+ Since COMET is an unsupervised learning model, the visual concepts decomposed by COMET do not have ground truth. Therefore, it is *not possible* to evaluate COMET in our setting.\\n\\nWe have included the discussion above in the revision as suggested (Appendix E.1). \\n\\n\\n**W2. Incorporation of other metrics.**\\n\\nThank you for your suggestions. \\n\\n**Additional Experiments and Metrics.** Following your suggestion, we run additional experiments on another metric, Inception Score (IS). The results are presented in the tables below. \\n\\nTable A. 
Results on the CUB dataset.\\n\\n| Model | FID | IS | Class Accuracy | Concept Accuracy |\\n|:-----------------:|:---------:|:--------:|:----------------:|:------------------:|\\n| SD-2.1 | 29.55 | 5.40 | 0.5033 | 0.9222 |\\n| PixArt-\\u03b1 | 46.85 | 3.82 | 0.1208 | 0.8231 |\\n| TI | 23.36 | 5.41 | 0.6397 | 0.9496 |\\n| **ECDM (Ours)** | **22.94** | **5.63** | **0.6492** | **0.9561** |\\n\\nTable B. Results on the AWA2 dataset.\\n\\n| Model | FID | IS | Class Accuracy | Concept Accuracy |\\n|:-----------------:|:---------:|:--------:|:----------------:|:------------------:|\\n| SD-2.1 | 37.79 | 14.78 | 0.8935 | 0.9850 |\\n| PixArt-\\u03b1 | 59.71 | 13.47 | 0.9008 | 0.9764 |\\n| TI | 29.63 | 14.79 | 0.9142 | **0.98** |\\n| **ECDM (Ours)** | **28.91** | **14.93** | **0.9200** | 0.9801 |\\n\\nTable C. Results on the CelebA-HQ dataset.\\n\\n| Model | FID | IS | Class Accuracy | Concept Accuracy |\\n|:-----------------:|:---------:|:--------:|:----------------:|:------------------:|\\n| SD-2.1 | 53.47 | 3.36 | 0.4881 | 0.8079 |\\n| PixArt-\\u03b1 | - | - | - | - |\\n| TI | 53.47 | 3.36 | 0.4881 | 0.8079 |\\n| **ECDM (Ours)** | **52.89** | **3.51** | **0.5017** | **0.8182** |\\n\\nThese tables show that even in terms of IS, our method still outperforms all baseline methods, indicating consistently improved image quality. These results and the necessary discussions have been incorporated into the revised version of our paper in the Experiment section. \\n\\n**FID Already Evaluates Diveristy.** \\nWe would like to clarify that the metric FID already evaluates the diversity of our generated images. Specifically, FID is computed as the distance between the distribution of generated images and the distribution of real images, using feature vectors from a pre-trained Inception network. Therefore, a *small FID* means that our ECDM's generated images are *as diverse as the real images*. \\n\\nThank you again for your comments, and we are open to considering additional metrics that might enhance the evaluation process and provide further insights into our model's performance. If you have specific metrics in mind, we would be very happy to explore them further during the discussion period.\"}", "{\"title\": \"[3/3] Thank you for your continued engagement and comments with our work.\", \"comment\": \"**Our Proposed ECDM.**\\n\\nOur proposed Energy-based Conditional Diffusion Model (ECDM) addresses key research gaps, bridging the worlds of conceptual interpretability and generative modeling. \\nSpecifically, we:\\n\\n+ **Formulate Conceptual Interpretation under a Joint Energy-Based Framework.**\\nWe propose a novel joint energy-based framework for generative CBMs that models the interactions among instructions, concepts, and image generations in an integrated manner. This enables:\\n\\n1. **Faithful reflection of concept probabilities**: During generation and interpretation, concept probabilities are jointly influenced by both the input instructions and the generated images. This ensures the probabilities accurately reflect the underlying generative process.\\n\\n2. **Deep involvement of concept interpretations**: Concepts are actively involved in the generation process because our model minimizes the joint energy. 
This core objective inherently requires the concepts to play a significant role in both generation and interpretation.\\n\\n+ **Derive Conditional Probabilities by Composing Energy Functions.**\\nBy systematically deriving new conditional probabilities within the joint energy framework, our model extends beyond concept-based joint generation to support a wide range of tasks, including interpretation, debugging, corrective intervention, and interpretable imputation. This is made possible by leveraging faithful conceptual probabilities and embeddings to perform diverse tasks within a unified interpretable framework. \\nIf CBMs had been naively added to the framework, these capabilities\\u2014relying on non-binary concept-image interactions to perceive and derive concept probabilities\\u2014could not have been achieved.\\n\\n\\n+ **Seamlessly Incorporate Pretrained Diffusion Models.**\\nWe integrate large-scale pretrained diffusion models into our energy-based framework by **reformulating and unifying their training objectives** under the energy-based framework (see **Appendix A**). Rather than merely utilizing pretrained network features, we propose a novel formulation that enables:\\n\\n1. **Direct and deep involvement in interpretation**: The pretrained diffusion model becomes a critical part of the interpretability process by contributing to minimizing the joint energy through the concept energy network.\\n2. **Interpretation of pretrained model outputs**: Our framework can interpret images generated by pretrained diffusion models, not just those fine-tuned within our model.\\n3. **Efficient training and sampling**: By harmonizing the pretrained diffusion model into our framework, we only need to optimize a small set of embeddings to repurpose it as a strong energy estimator. This dramatically improves computational efficiency (see **Appendix D**: Computational Efficiency Analysis).\\n\\nTherefore, our ECDM has successfully addressed the drawbacks associated with integrating CBMs and generative models, culminating in the development of this **unified** and faithful framework. This highlights the **versatility** of our approach, as noted in your previous comment: ``\\\"versatile and seems applicable to any conditional data generation task.\\\"`` It also emphasizes our model\\u2019s **flexible interpretability**, as you observed: ``\\\"enhances the interpretability of the elements and features in the generated images.\\\"``\\n\\n\\nThank you again for providing feedback during the rebuttal period. We hope this additional explanation further clarifies the novelty and added value of our proposed ECDM. We believe our contributions significantly advance the state of the art by providing both theoretical insights and practical tools. Furthermore, we are eager to hear about your thoughts regarding \\\"previous results on diffusion models and concept bottleneck models,\\\" and we would be happy to provide more targeted, point-by-point response accordingly.\"}", "{\"title\": \"Response to all the reviewers and area chairs.\", \"comment\": \"We thank all reviewers for their valuable comments. 
We are glad that they found the problem we solve ```\\\"novel\\\"```/```\\\"has multiple practical applications\\\"```/```\\\"a practical tool\\\"```/```\\\"supports both generative and interpretive tasks\\\"``` (oJRi, grAS), our proposed method ```\\\"has probabilistic interpretation\\\"```/```\\\"versatile\\\"```/```\\\"enhances the interpretability\\\"``` (AYc9, 5CJZ), our theoretical analysis ```\\\"comprehensive\\\"```/```\\\"detailed\\\"``` (oJRi), our paper ```\\\"well-written\\\"```/```\\\"easy to read\\\"```/```\\\"well-organized\\\"``` (AYc9, 5CJZ), and our experiments ```\\\"thorough and clear\\\"``` (5CJZ), and agreed that our ECDM has ```\\\"strong empirical results\\\"```, ```\\\"clear improvement\\\"``` and ```\\\"improvements\\\"``` in both ```\\\"image quality\\\"``` and ```\\\"concept alignment\\\"``` (oJRi, grAS).\\n\\nBelow we address the reviewers\\u2019 questions. We have also updated the main paper and the Appendix (with the changed part marked in blue).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"[1/3] Thank you for your encouraging and valuable comments.\", \"comment\": \"Thank you for your insightful and constructive feedback. We are glad that you found our proposed framework ```\\\"versatile\\\"```/```\\\"applicable to any conditional data generation task\\\"```/```\\\"enhances the interpretability\\\"```, our paper ```\\\"clearly written\\\"```/```\\\"well-organized\\\"```, and our experiments ```\\\"thorough\\\"```. Below we address your questions one by one in detail.\\n\\n**W1. ... the significance of this work within the broader field of generative models ... a straightforward combination of concept bottleneck models and standard conditional diffusion models.**\\n\\nWe would like to clarify that our ECDM's primary contribution lies not only in combining existing models but in developing a **unified probabilistic framework** that unlocks **new capabilities**. We provide more details below. \\n\\n**Fundamental Difference between ECDM and Concept Bottleneck Models (CBMs) Combined with Conditional Diffusion Models.** Conventional CBMs typically predict a set of concepts from an image input and then predict the class label based on these predicted concepts, i.e., predicting concepts $\\\\mathbf{c}$ and labels $\\\\mathbf{y}$ given an image $\\\\mathbf{x}$. Similarly, conditional diffusion models typically generate an image $\\\\mathbf{x}$ given an input text or class label $\\\\mathbf{y}$. In contrast to such **\\\"sequential\\\"** modeling, our proposed ECDM **jointly** models the relationships among class-level instructions $\\\\mathbf{y}$, concept sets $\\\\mathbf{c}$, and the generated image $\\\\mathbf{x}$ within an energy-based framework. \\n\\nConsequently, our energy-based framework facilitates a more flexible and unified inference process. Given **any subset** of these three elements (the instruction $\\\\mathbf{y}$, the concepts $\\\\mathbf{c}$, and the generated image $\\\\mathbf{x}$), the joint framework can infer the remaining elements by composing energy functions and deriving conditional probabilities. This unique characteristic allows us to achieve **generation, interpretation, and intervention** within a **unified** framework. These capabilities extend beyond what either component could achieve independently. 
Unifying these tasks under a flexible framework provides the interpretability and generation modeling community with deeper insights into how diffusion models generate images, comprehend, and incorporate concepts in the generation process through a human-understandable probabilistic interpretation, thus holding significant value for the community. Our novelty and contributions include:\\n\\n+ **Unifying Five Tasks in a Single Framework.** Our ECDM framework unifies concept-based generation, conditional interpretation, concept debugging, intervention, and imputation under a joint energy-based formulation. These five tasks encompass a typical workflow of the text-to-image diffusion model, and unifying them enhances generation quality, boosts interpretability, and enables interpretable intervention. In our proposed energy-based modeling of the diffusion model, we incorporate large-scale pretrained diffusion models (e.g., Stable Diffusion) within our framework to achieve a unified interpretation of these diffusion models, an area that previous methods have less extensively explored.\\n\\n+ **Probabilistic Interpretations by Energy Functions.** With ECDM\\u2019s unified framework, we have developed a set of algorithms to compute various conditional probabilities by composing corresponding energy functions. These conditional probabilities provide theoretical support for concept-based interpretation of the generation process, as opposed to merely visualizations, and enable flexible inference of different elements (as mentioned in point 1) for diverse tasks.\\n\\n+ **Better Experimental Results.** Empirical results on real-world datasets demonstrate ECDM's state-of-the-art performance in terms of image generation, imputation, and their conceptual interpretations.\"}", "{\"title\": \"Thank you for your time and effort in reviewing our paper.\", \"comment\": \"Dear Reviewer 5CJZ,\\n\\nThank you for your time and effort in reviewing our paper.\\n\\nWe appreciate your valuable comments and suggestions, and we firmly believe that our response and revisions can fully address your concerns. We are open to discussion (before Nov 26 AOE, after which we will not be able to respond to your comments unfortunately) if you have any additional questions or concerns, and if not, we will be immensely grateful if you could reevaluate your score.\\n\\nThank you again for your reviews which helped to improve our paper!\\n\\nBest regards,\\n\\nECDM Authors\"}", "{\"title\": \"[3/3] Thank you for your encouraging and valuable comments.\", \"comment\": \"## For Questions:\\n\\n**Q1. Further clarification for ```\\\"Pivotal Inversion\\\"``` and ```\\\"Energy Matching Inference\\\"```.**\\n\\nWe apologize for any unclear parts in our explanation and are pleased to provide more detailed explanations below.\\n\\nThe intuition behind the ECDM's interpretation task is that the diffusion model's sampling trajectory conditioned on the instruction $\\\\mathbf{y}$ and the optimal concept set $\\\\tilde{\\\\mathbf{c}}$ should be alike in our framework, since they are all under our joint energy-based formulation. 
To substantiate this intuition, two steps are required: (1) **pivotal inversion**, which aims to simulate the sampling trajectory conditioned on the instruction $\\\\mathbf{y}$; (2) **energy matching inference**, which optimizes the concept probability $\\\\tilde{\\\\mathbf{c}}$ to find the most compatible concept set (i.e., the concept set that is the most compatible with the generated image) by minimizing the distance between the sampling trajectory conditioned on concepts and the one obtained from pivotal inversion.\\n\\nWe explain these two steps individually below.\\n\\n**Pivotal Inversion:** The goal of pivotal inversion is to simulate how the pretrained diffusion model samples an image directly conditioned on the instruction. For instance, consider an image of a \\\"black billed cuckoo\\\" bird generated from the instruction \\\"A photo of the bird black billed cuckoo\\\" using pretrained Stable Diffusion 2.1. Pivotal inversion utilizes reverse DDIM [8] to derive a set of latents that represent how the image is gradually denoised from Gaussian noise conditioned on this instruction. This derived set of latents serves as pivots that illustrate the model's original sampling trajectory and, within our formulation, simulates the energy landscape of the external energy model (pretrained diffusion model). This facilitates the matching process in the subsequent step.\\n\\n**Energy Matching Inference:** The goal of this energy-matching-inference step is to determine the most compatible concept set, i.e., the concept set that is the most compatible with the generated image; this is done using the simulated trajectory from pivotal inversion. Specifically, we freeze all learned embeddings as well as the concept energy network, and optimize the concept probability $\\\\tilde{\\\\mathbf{c}}$. The optimization target is to minimize the distance between the sample trajectory conditioned on the concept set and the fixed pivotal trajectory simulated in the previous step. By minimizing this distance, we align the energy landscape of the concept energy network with the external energy network to obtain the most compatible concept set, which is why this process is called energy matching inference. Once the most compatible concept set is found, these concepts can then be used to interpret the generated image. \\n\\nWe hope this further explanation of these two concepts clarifies the overall idea. If you need additional clarification or have further questions, we would be more than happy to provide any additional details required. These further clarifications have been incorporated in Appendix D.1.\\n\\n\\n\\n[1] Ismail, Aya Abdelsalam, et al. \\\"Concept Bottleneck Generative Models.\\\" The Twelfth International Conference on Learning Representations. 2023.\\n\\n[2] Du, Yilun, et al. \\\"Unsupervised Learning of Compositional Energy Concepts.\\\" Advances in Neural Information Processing Systems 34 (2021): 15608-15620.\\n\\n[3] Liu, Nan, et al. \\\"Unsupervised Compositional Concepts Discovery with Text-to-image Generative Models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[4] Hao, Shaozhe, et al. \\\"ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction.\\\" ECCV. 2024.\\n\\n[5] Su, Jocelin, et al. \\\"Compositional Image Decomposition with Diffusion Models.\\\" ICML. 2024.\\n\\n[6] Hertz, Amir, et al. 
\\\"Prompt-to-prompt Image Editing with Cross Attention Control.\\\" arXiv preprint arXiv:2208.01626 (2022).\\n\\n[7] Xu, Xinyue, et al. \\\"Energy-based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations.\\\" The Twelfth International Conference on Learning Representations. 2024.\\n\\n[8] Song, Jiaming, Chenlin Meng, and Stefano Ermon. \\\"Denoising Diffusion Implicit Models.\\\" arXiv preprint arXiv:2010.02502 (2020).\"}", "{\"title\": \"[1/2] Thank you for your encouraging and valuable comments.\", \"comment\": \"Thank you for your valuable reviews. We are glad that you found our model ```\\\"provides probabilistic interpretation\\\"``` and our paper ```\\\"well-written and easy to read\\\"```. Below, we address your questions one by one:\\n\\n**W1. \\\"... The point is that it is a deterministic mapping. ... I am expecting that the concept probability vector changes with generated image (not only the input text prompt).\\\"**\\n\\nThank you for this insightful comment. We would like to clarify that (1) our ECDM's concept probability does vary with both the text instructions and the generated image and that (2) it provides probabilistic interfaces rather than a deterministic mapping. \\n\\n**Why Our ECDM's Concept Probability Varies with the Generated Image Too.** This is actually a key property enabled by our ECDM's joint energy-based modeling. Specifically, we model the joint energy function as: \\n$E_{\\\\mathbf{\\\\psi}}^{joint}({\\\\mathbf{x}},{\\\\mathbf{c}},{\\\\mathbf{y}}) \\\\triangleq E_{\\\\mathbf{\\\\psi}}^{concept}({\\\\mathbf{x}},{\\\\mathbf{c}}) + \\\\lambda_m E_{\\\\mathbf{\\\\psi}}^{map}({\\\\mathbf{c}},{\\\\mathbf{y}})$, \\nwith the mapping energy function \\n$E_{\\\\mathbf{\\\\psi}}^{map}(\\\\mathbf{y},\\\\mathbf{c}) = D_{uw}(\\\\mathbf{u},\\\\mathbf{w})$, \\nand concept energy function \\n$E_{\\\\mathbf{\\\\psi}}^{concept}(\\\\mathbf{x},\\\\mathbf{c}) \\\\triangleq \\\\mathbb{E}_{\\\\mathbf{x}, \\\\epsilon \\\\sim \\\\mathcal{N}(\\\\boldsymbol{0}, \\\\boldsymbol{I}), t} [ \\\\left\\\\| \\\\epsilon - \\\\epsilon _\\\\theta(D_c(\\\\mathbf{c}),\\\\mathbf{x}_t, t) \\\\right\\\\|^2_2 ]$. \\n\\nThis joint formulation of energy functions, particularly the concept energy function, allows the concept probability ($\\\\tilde{\\\\mathbf{c}}$) in the inference process to change with the generated image in our model's interpretation task, which is the main focus of our paper. Specifically, we optimize the concept probability $\\\\tilde{\\\\mathbf{c}}$ using the concept energy network (Eqn. (17)) given a generated image for interpretation, with a probability range of $\\\\tilde{\\\\mathbf{c}} \\\\in [0,1]$. This non-binary probability indicates \\\"to what degree the diffusion model generates the image based on these specific concepts.\\\" For example, given a \\\"polar bear\\\" image generated by Stable Diffusion, our ECDM will infer a high probability for the concept \\\"arctic\\\" and a low probability for \\\"forest\\\" using Eqn. (17), suggesting that the Stable Diffusion model may generate this image based on \\\"arctic\\\" rather than \\\"forest\\\". \\n\\n**Additional Experiments: Concepts \\\"Water\\\" and \\\"Arctic\\\".** Inspired by your comments, we conducted additional experiments to observe the changes in this non-binary concept probability (**Figure 6 of Appendix B.1**). 
Given the same prompt, \\\"A photo of the animal Polar Bear\\\", the diffusion model generates two different \\\"Polar Bear\\\" images: the top image does not have a \\\"water\\\" and \\\"arctic\\\" background, while the bottom image has a \\\"water\\\" and \\\"arctic\\\" background. Our ECDM correctly infers that the probabilities of the concepts \\\"water\\\" and \\\"arctic\\\" in the top image are 0.1233 and 0.0363, respectively, much smaller than those in the bottom image (0.9543 and 0.8015, respectively). \\n\\n**Additional Experiments: Concept \\\"Big\\\".** For the concept \\\"big,\\\" we can also see meaningful variation in the inferred probabilities, 0.9067 (top image) versus 0.9922 (bottom image), meaning that our ECDM is more certain that the bottom image is a \\\"big\\\" polar bear, but is less certain about the top image since it only shows the head of the bear. \\n\\nTherefore, our ECDM's concept probability vector does adjust with the generated image in interpretation. We have incorporated your valuable insight and further discussions into Appendix B.1.\"}", "{\"title\": \"[3/3] Thank you for your encouraging and valuable comments.\", \"comment\": \"**Q4. What distinguishes the generation process described in Equations (12) and (13) from that of a standard conditional diffusion model? It seems the only change is replacing the conditioning input y with the processed conditioning input $\\\\mathbf{c}$ obtained from $\\\\mathbf{y}$.**\\n\\nThis is a good question. After training with the joint energy-based objective, our model performs concept-based joint generation by minimizing the joint energy. As clarified in **W2.1**, this joint sampling process can be further simplified for computational efficiency and ease of implementation.\\n\\nNote that Eqns. (12) and (13) are only part of the generation process. As mentioned in the **response to W2.1** above, in practice, one can alternate between interpretation $p(\\\\mathbf{c}|\\\\mathbf{x})$ (using Eqn. (15) or (17) in the paper) and generation $p(\\\\mathbf{c},\\\\mathbf{x}|\\\\mathbf{y})$ (using Eqns. (11)-(13) in the paper) until convergence. This is the key difference between standard conditional diffusion models and our ECDM's **generation**. \\n\\nThe key distinction between our ECDM's formulation (Eqn. (6), (10)-(13), (17)-(19)) and a standard conditional diffusion model is that the variable $\\\\mathbf{c}$ in ECDM is not merely a processed condition.\\n\\nNote that concept-based **generation** is only a small part of our ECDM's contribution. Our ECDM goes far beyond **generation** and **unifies five different tasks**, i.e., **concept-based generation, conditional interpretation, concept debugging, intervention, and imputation**, in a **single** probabilistic framework. For example: \\n+ **Intervention.** During generation, one can easily intervene on the concepts $\\\\mathbf{c}$ to fix any incorrect generation.\\n+ **Interpretation.** Given a generated image, one can infer the concepts $\\\\mathbf{c}$ to check what concepts are expressed in the image, thereby interpreting the generation process. One can then perform intervention (mentioned above) based on the inferred $\\\\mathbf{c}$. \\n+ **Debugging.** Given the input $\\\\mathbf{y}$ and the generated image $\\\\mathbf{x}$, one can debug what concepts are generated *incorrectly* by comparing what concepts are generated (i.e., $p(\\\\mathbf{c}|\\\\mathbf{x})$) with what concepts should be generated (i.e., $p(\\\\mathbf{c}|\\\\mathbf{y})$). 
One can then perform intervention (mentioned above) based on the debugging results.\\n\\nIn the tasks above, the concept vector $\\\\mathbf{c}$ is not merely a \\\"processed conditioning input\\\"; it is also an interpretable variable that enables flexible compatibility measurement and joint modeling of the instruction, concept, and generation within our energy-based framework.\"}", "{\"summary\": \"The paper introduces the concept bottleneck model into the diffusion generation process.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to read.\\n\\n2. The paper introduces the concept bottleneck model into the generative diffusion model. Now the model has a probabilistic interpretation of the generated images.\", \"weaknesses\": \"1. Although I appreciate the idea of using concept sets to explain the generation, the proposed formulation does not make sense to me. The paper transforms a text embedding into a probability vector that represents the concepts. The point is that it is a deterministic mapping. For instance, \\\"polar bear\\\" outputs a deterministic vector that represents \\\"paws\\\", \\\"furry\\\" and \\\"big\\\". In many situations, we would expect a polar bear could be of different sizes, so the \\\"big\\\" dim should vary within [0,1] in different polar bear images. In other words, I am expecting that the concept probability vector changes with the generated image (not only the input text prompt).\\n\\n2. Given a binary concept labels set, I am wondering the optimal output of the concept energy model with input y? It seems that a binary output is also expected from y to minimize the loss? If yes, can we just use a logic mapping, i.e., \\\"polar bear\\\"-> paws=1, big=1. (So, we do not have to learn the first energy model). If not, can you provide any explanation why it would not learn a binary vector given the target is binary?\\n\\nIf these two concerns are addressed, I am happy to raise my score.\", \"questions\": \"as above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a framework that integrates diffusion models and Concept Bottleneck Models in an energy-based model structure. ECDM aims to address interpretable control in current diffusion models. It allows concept-based generation, interpretation, debugging, intervention, and imputation. ECDM unifies tasks through energy networks, which enables modifications in the generated images based on probabilistic estimates. The model is evaluated on datasets like AWA2, CUB, and CelebA-HQ in concept accuracy, class accuracy, and FID scores with existing diffusion models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. ECDM combines diffusion models with concept bottlenecks in a way that supports both generative and interpretive tasks.\\n2. The model allows users to modify generated images based on specific concept-level controls, which is a practical tool.\\n3. The experiments on multiple datasets show quantitative improvements in image quality and concept alignment.\", \"weaknesses\": \"1. The paper lacks comparisons with some related methods like COMET or CBGM, which are relevant energy-based interpretive frameworks for diffusion models.\\n2. 
While FID, class, and concept accuracy are used, other metrics like diversity or user-study-based interpretability scores could further validate the model's effectiveness.\\n3. Code for reproducibility is not provided.\\n4. The experiments rely on a fixed pretrained stable diffusion model, while other models are not explored.\\n5. Limitations: The method struggles with precise regional control over concept-based edits. Also, the energy-based approach is computationally intensive, especially during joint optimization steps.\", \"questions\": \"1. Concepts like \\\"pivotal inversion\\\" and \\\"energy matching inference\\\" could be better explained for clarity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your time and effort in reviewing our paper.\", \"comment\": \"Dear Reviewer AYc9,\\n\\nThank you for your time and effort in reviewing our paper.\\n\\nWe appreciate your valuable comments and suggestions, and we firmly believe that our response and revisions can fully address your concerns. We are open to discussion (before Nov 26 AOE, after which we will not be able to respond to your comments unfortunately) if you have any additional questions or concerns, and if not, we will be immensely grateful if you could reevaluate your score.\\n\\nThank you again for your reviews which helped to improve our paper!\\n\\nBest regards,\\n\\nECDM Authors\"}", "{\"title\": \"[2/2] Thank you for your continued engagement and comments with our work.\", \"comment\": \"**Fundamental Difference between our ECDM and Conditional Diffusion Models.**\\n\\nConditional diffusion models (CDMs) generate images based on specific conditions, such as class labels or textual instructions. \\nUnfortunately, these models fail in terms of:\\n+ **Human-Understandable Interpretability**, which is essential for recognizing and understanding the generation behavior of CDMs, and\\n+ **Transparent Control**, which means the capability of precise and transparent control over the generated outputs, further supporting the development of advanced editing methods.\\n\\nOur ECDM can enable both **Human-Understandable Interpretability** and **Transparent Control** under a unified framework. \\n\\nNote that simply marrying CDMs with concept-based models **fails to provide meaningful Human-Understandable Interpretability and Transparent Control**. For example:\\n\\n+ The explanations given by previous interpretable CDMs are **not always human-understandable and not informative enough**. \\nPrevious interpretable CDMs often fail to provide human-understandable and sufficiently informative explanations. For instance, in the energy-based CDM literature, the number of decomposed and visualized concepts is typically fewer than ten. These visualized concepts are not guaranteed to represent the key factors driving the image generation process. Furthermore, increasing the number of decomposed concepts not only leads to prohibitively high inference costs, making this approach impractical, but also tends to result in the acquisition of abstract concepts that are often difficult for humans to understand, typically rendering them meaningless.\\n\\n+ Simply inputting human-understandable concepts into CDMs also **cannot ensure a faithful interpretable generation**.\\nSimply inputting human-understandable concepts into CDMs does not guarantee faithful or interpretable generation. 
This approach neither ensures nor monitors the involvement of these concepts during the generation process. Without understanding how or why the model generates specific outputs under given conditions, it becomes difficult to diagnose and correct errors when the generation fails. For example, if the model generates an incorrect image, we cannot identify the root cause of the error or determine how to intervene effectively.\"}", "{\"title\": \"Thank you for your time and effort in reviewing our paper.\", \"comment\": \"Dear Reviewer grAS,\\n\\nThank you for your time and effort in reviewing our paper.\\n\\nWe appreciate your valuable comments and suggestions, and we firmly believe that our response and revisions can fully address your concerns. We are open to discussion (before Nov 26 AOE, after which we will not be able to respond to your comments unfortunately) if you have any additional questions or concerns, and if not, we will be immensely grateful if you could reevaluate your score.\\n\\nThank you again for your reviews which helped to improve our paper!\\n\\nBest regards,\\n\\nECDM Authors\"}", "{\"title\": \"[2/3] Thank you for your encouraging and valuable comments.\", \"comment\": \"**W2.1. The concept-based generation method described in (11)-(13) resembles a Gibbs sampling or coordinate-wise algorithm, but equation (11) focuses on maximizing mapping energy rather than the entire joint energy...the rationale behind this approach...as $E^{joint}=E^{map}+E^{concept}$ incorporates dependencies on the concept in both terms...**\\n\\nWe are sorry for the confusion. In the concept-based joint generation, we perform concept inference and image generation by minimizing the **joint** energy $E_{\\\\mathbf{\\\\psi}}^{joint}({\\\\mathbf{x}},{\\\\mathbf{c}},{\\\\mathbf{y}}) \\\\triangleq E_{\\\\mathbf{\\\\psi}}^{concept}({\\\\mathbf{x}},{\\\\mathbf{c}}) + \\\\lambda_m E_{\\\\mathbf{\\\\psi}}^{map}({\\\\mathbf{c}},{\\\\mathbf{y}})$. This process entails the minimization of both the mapping energy $E^{map}$ and the concept energy $E^{concept}$. \\n\\nIn the full model, to minimize the **joint** energy $E_{\\\\mathbf{\\\\psi}}^{joint}({\\\\mathbf{x}},{\\\\mathbf{c}},{\\\\mathbf{y}}) \\\\triangleq E_{\\\\mathbf{\\\\psi}}^{concept}({\\\\mathbf{x}},{\\\\mathbf{c}}) + \\\\lambda_m E_{\\\\mathbf{\\\\psi}}^{map}({\\\\mathbf{c}},{\\\\mathbf{y}})$, we alternate between \\n+ Eqn. (17) to infer $p(\\\\hat{\\\\mathbf{c}} | \\\\mathbf{x})$,\\n+ Eqn. (11) to infer $p(\\\\hat{\\\\mathbf{c}} | \\\\mathbf{y})$, and\\n+ Eqn. (12)-(13) to infer $p(\\\\mathbf{x}|\\\\hat{\\\\mathbf{c}} )$\\nuntil convergence. This is indeed similar to Gibbs sampling, but it is slightly different. For example, Gibbs sampling involves computing the conditional probability of one variable given all other variables. By contrast, in Eqn. (11), we infer $p(\\\\hat{\\\\mathbf{c}} | \\\\mathbf{y})$ rather than $p(\\\\hat{\\\\mathbf{c}} | \\\\mathbf{x}, \\\\mathbf{y})$. \\n\\nIn practice, we find that simply alternating between minimizing Eqn. (11) (mapping energy) and Eqn. (13) (concept energy) can already provide satisfactory results with improved computational efficiency. \\n\\nNote that in Eqns. (12)-(13), the diffusion-style sampling dynamics of the image $\\\\mathbf{x}_t$ necessitate computing the gradient with respect to $\\\\mathbf{x}$ for each sampling step. Here the mapping energy $E^{map}(\\\\mathbf{y},\\\\mathbf{c})$ in $E^{joint}$ is not relevant to $\\\\mathbf{x}$, and therefore is not involved in Eqn. (13). \\n\\n\\n**W2.2. 
Additionally, maximizing $E^{map}$ with respect to the binary vector suggests an integer programming problem, which the paper does not sufficiently address regarding efficiency.**\\n\\nThis is a good question. Note that while the concept label $\\\\mathbf{c}$ is binary, our ECDM's predicted concept probability $\\\\hat{\\\\mathbf{c}}$ is a **real value** in the range $[0, 1]$. Therefore, we can use gradient descent to compute the gradient of $E^{joint}$ (including $E^{map}$) w.r.t. $\\\\mathbf{c}$ and update $\\\\mathbf{c}$ iteratively to infer $\\\\mathbf{c}$. In this case, it is **not** an integer programming problem and is therefore **very efficient**. \\n\\n\\n## For Questions:\\n\\n**Q1. Code Availability**\\n\\nThank you for your interest in our code. We assure you that we will make our code open-source and available to the wider research community after acceptance, thereby facilitating the reproducibility of our results. So far, we have finished cleaning up the source code and will release it if the paper is accepted. \\n\\n**Q2. How is the number of concepts $K$ in the concept vector $\\\\mathbf{c}$ determined? Is $K$ fixed?**\\n\\nThe number of concepts $K$ is determined by the concept annotations provided by the specific dataset. The overall concept number $K$ is fixed during training and inference, but can be adjusted according to different datasets. We also include more details on the Datasets in the **Experiments Setup Section (Section 4.1) and Appendix C**.\\n\\n**Q3. How is the concept embedding $\\\\mathbf{v}_k$ modeled?**\\n\\nThanks for mentioning this. For a given concept phrase (e.g., \\\"black bird wings\\\"), we first extract the textual embeddings using the text encoder. Subsequently, the textual embedding is projected (through a learnable projection neural network) into a positive embedding $\\\\mathbf{v}_k^{(+)}$ and a negative embedding $\\\\mathbf{v}_k^{(-)}$. The final concept embedding $\\\\mathbf{v}_k$ is then a combination of $\\\\mathbf{v}_k^{(+)}$ and $\\\\mathbf{v}_k^{(-)}$, weighted by the concept probability $c_k$, i.e., $\\\\mathbf{v}_k = c_k \\\\cdot \\\\mathbf{v}_k^{(+)} + (1-c_k) \\\\cdot \\\\mathbf{v}_k^{(-)}$. This combined concept embedding is further utilized in all five unified tasks (i.e., generation, interpretations, debugging, intervention, and imputation).\"}", "{\"title\": \"Response\", \"comment\": \"I thank authors for your responses. But I am more interested in the generation part of the method. In text-to-image generation, the method generates a deterministic probability vector from text? Can we change a single dimension (e.g., 1->0 gradually) to get a slighly different image with the corresponding concept changes? As for the binary question, it is still unclear why the objectives won't lead to binary solutions (in training and generation phases, not interpretation phase). Since your target is binary and it is labeled correctly, why the optimal solution is not binary?\"}", "{\"title\": \"[1/3] Thank you for your continued engagement and comments with our work.\", \"comment\": \"Thank you for providing your feedback and outlining your remaining concerns. 
Since the discussion period has been extended by six additional days (until December 2nd AoE), we would like to take this opportunity to further explain that our approach **is not merely a combination of CBM and diffusion models** and **enjoys novel capabilities**.\\n\\n**Fundamental Difference between our ECDM and Concept Bottleneck Models (CBMs).**\\n\\nConventional CBMs predict a set of concepts from an input image and then use these predicted concepts to determine the class label. CBMs remain an important and active area of research.\\nHowever, despite significant progress in this field, **a major research gap remains**:\\n\\nMost CBMs are *discriminative* models. Previous work has primarily focused on discriminative settings (e.g., modeling $p(\\\\mathbf{c}|\\\\mathbf{x})$ and $p(\\\\mathbf{y}|\\\\mathbf{c})$), while the generative setting (e.g., modeling $p(\\\\mathbf{x}|\\\\mathbf{c})$ and $p(\\\\mathbf{c}|\\\\mathbf{y})$) has been largely unexplored.\\n\\nThis oversight in generative settings limits the potential to extend CBMs' interpretability into generative tasks, which are all critical for advancing generative model development, to provide:\\n+ Human-understandable explanations,\\n+ Interfaces for integrating human expertise,\\n+ Interpretations that can enhance generation quality,\\n+ Tools for model intervention.\\n\\nIn contrast, our ECDM can enable **all 4 capabilities above** under a unified framework. \\n\\nNote that simply marrying CBMs with generative models **fails to provide faithful interpretations with minimal cost**.\", \"for_example\": [\"**Only** predicting concepts relied on instructions and using them for generation models a fixed mapping between classes and concepts. This approach overlooks the dynamic interplay between generated images, concepts, and instructions. Such unidirectional learning may introduce biases, as it does not allow for the generated images to influence or modify the concepts being used. Consequently, it fails to accurately reflect the foundational basis of the image generation process. This limitation prevents the verification of whether the generated images faithfully reproduce the concepts intended by the instructions, indicating a critical need for mechanisms to address and correct these biases.\", \"**Only** predicting the concepts from the generated images using additional discriminative conceptual models cannot ensure these prediced concept's involvement during the generation process, since these predictions are not inherently involved in the generative process.\", \"**Inserting a concept prediction layer** in the diffusion UNet (e.g., CBGM) requires training the model from scratch again, overlooking the abundant visual information and rich potential interpretive informants embedded in the large-scale pretrained diffusion model (e.g., Stable Diffusion).\"]}", "{\"summary\": \"This paper proposes a concept-based diffusion model that enables conditional generation, concept interpretation and debugging, as well as image operations like intervention and imputation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly written and well-organized, making complex concepts more accessible.\\n2. The authors conduct thorough experiments across various datasets and tasks, providing clear comparisons to existing benchmarks.\\n3. The concept-based framework is versatile and seems applicable to any conditional data generation task.\\n4. 
The framework enhances the interpretability of the elements and features in the generated images.\", \"weaknesses\": \"1. I have to mention that I have not previously conducted research about concept-based generation, but the significance of this work within the broader field of generative models is unclear for me. It appears to be a straightforward combination of concept bottleneck models and standard conditional diffusion models.\\n2. The concept-based generation method described in (11)-(13) resembles a Gibbs sampling or coordinate-wise algorithm, but equation (11) focuses on maximizing mapping energy $ E^{map} $ rather than the entire joint energy $E^{joint}$. This raises questions about the rationale behind this approach, as $ E^{joint}=E^{map} +E^{concept} $ incorporates dependencies on the concept $c $ in both terms. Additionally, maximizing $ E^{map} $ with respect to the binary vector $c$ suggests an integer programming problem, which the paper does not sufficiently address regarding efficiency.\", \"questions\": \"1. Is the code for the model available now?\\n2. How is the number of concepts $K$ in the concept vector $ c \\\\in \\\\set{0,1}^K $ determined? Is $ K $ fixed?\\n3. How is the concept embedding $v_k $ modeled?\\n4. What distinguishes the generation process described in equations (12) and (13) from that of a standard conditional diffusion model? It seems the only change is replacing the conditioning input $ y $ with the processed conditioning input $ c $ obtained from $y$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"I thank the authors for the rebuttal and efforts. Some of my concerns, such as the metrics and term clarifications, are addressed. Comprehensively considering the contributions of the paper, the current experiment volume (only on the pretrained stable diffusion model), and the perspective from other reviewers, I keep my score for now.\"}", "{\"title\": \"Response\", \"comment\": \"I thank the authors for their detailed response. However, I still feel that the novelty of this work is limited, given the previous results on diffusion models and concept bottleneck models. Thus, I maintain my score.\"}", "{\"title\": \"Thank you for your time and effort in reviewing our paper.\", \"comment\": \"Dear Reviewer oJRi,\\n\\nThank you for your time and effort in reviewing our paper.\\n\\nWe appreciate your valuable comments and suggestions, and we firmly believe that our response and revisions can fully address your concerns. We are open to discussion (before Nov 26 AOE, after which we will not be able to respond to your comments unfortunately) if you have any additional questions or concerns, and if not, we will be immensely grateful if you could reevaluate your score.\\n\\nThank you again for your reviews which helped to improve our paper!\\n\\nBest regards,\\n\\nECDM Authors\"}", "{\"title\": \"[2/2] Thank you for your encouraging and valuable comments.\", \"comment\": \"**Q4. How robust is the concept interpretation when handling out-of-distribution samples?**\\n\\nThis is a good question. Inspired by your comment, we conducted additional experiments regarding out-of-distribution samples, and the results are included in **Figure 7 and Appendix B.2**. 
\\n\\n**Additional Results on Out-of-Distribution Samples.** Specifically, the experiments are conducted on the TravelingBirds dataset following the robustness experiments of CBM [7]. We provide the bird image under significant background shift to our models for concept interpretation. In this case study, our model can still accurately infer the corresponding concepts of the bird \\\"Vermilion Flycatcher\\\" (e.g., \\\"all-purpose bill shape\\\" and \\\"solid belly pattern\\\"). These findings demonstrate our model's robustness when facing domain shifts.\\n\\n**Why ECDM Is Robust for Out-of-Distribution Samples.** Typical methods tend to suffer from spurious features, e.g., irrelevant backgrounds. In contrast, the concept-based modeling framework of our ECDM ensures the robustness of the interpretations. Specifically, ECDM forces the model to learn *concept-specific* information and use these concepts to generate images and interpret these images; this way, ECDM focuses more on the genuine attributes of the target object and is less influenced by irrelevant, spurious features, such as irrelevant backgrounds. As a result, our ECDM enjoys robustness when dealing with out-of-distribution samples. For example, when interpreting a water bird with a spurious land background, our ECDM focuses only on the concepts of the water bird in the foreground and, therefore, will not be fooled by the spurious features in the background. \\n\\nWe have incorporated the discussion above in our revised paper (e.g., Appendix B.2) as suggested.\\n\\n\\n[1] Gao, Ruiqi, et al. \\\"Learning Energy-Based Models by Diffusion Recovery Likelihood.\\\" International Conference on Learning Representations. 2021.\\n\\n[2] Zhu, Yaxuan, et al. \\\"Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood.\\\" The Twelfth International Conference on Learning Representations. 2023.\\n\\n[3] Xie, Jianwen, et al. \\\"A Theory of Generative Convnet.\\\" International conference on machine learning. PMLR, 2016.\\n\\n[4] Du, Yilun, and Igor Mordatch. \\\"Implicit Generation and Modeling with Energy Based Models.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[5] Xu, Xinyue, et al. \\\"Energy-based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations.\\\" The Twelfth International Conference on Learning Representations. 2024.\\n\\n[6] Song, Jiaming, Chenlin Meng, and Stefano Ermon. \\\"Denoising Diffusion Implicit Models.\\\" arXiv preprint arXiv:2010.02502 (2020).\\n\\n[7] Koh, Pang Wei, et al. \\\"Concept Bottleneck Models.\\\" International conference on machine learning. PMLR, 2020.\"}", "{\"title\": \"[1/2] Thank you for your continued engagement and comments with our work.\", \"comment\": \"Thank you for your feedback and for providing additional clarifications regarding your questions. Below, we will address them point by point in detail.\\n\\n**Q1: Can we change a single dimension (e.g., 1->0 gradually) to get a slighly different image with the corresponding concept changes?**\\n\\n**Yes, our ECDM can change a single dimension (e.g., 1->0 gradually) to get a slighly different image with the corresponding concept changes.** When performing generation, the images produced by our model can vary depending on the adjustment of the concept probabilities. This feature is enabled by the novel joint energy-based modeling approach of our ECDM. 
This approach facilitates the complex representation of relationships between concepts and further supports concept-based generation by leveraging these modeled interactions through the minimization of joint energy.\\n\\n**Additional Experiments to Demonstrate this Ability.** Inspired by your detailed comments, we conducted additional experiments to visualize the variation of the generated image according to the adjustments of concepts; the new results are included in **Figure 6 of Appendix B.1**. \\n\\nGiven the same prompt, \\\"A photo of the animal horse\\\", we adjusted the probabilities of the concepts \\\"white\\\" and \\\"brown\\\". Specifically, we gradually decreased the probability of the concept \\\"white\\\" from $1$ to $0$, simultaneously increased the probability of the concept \\\"brown\\u201d from $0$ to $1$, and then performed joint generation. \\n\\nAs shown in **Figure 6 of Appendix B.1**, our ECDM accurately reflected these concept probability changes, producing images of a horse with the corresponding colors. When the probability of \\u201cwhite\\u201d was set to $1$ and \\u201cbrown\\u201d to $0$, the model generated a purely white horse. As the probability of \\u201cwhite\\u201d gradually decreased and that of \\u201cbrown\\u201d increased, the generated horse images gradually shifted in color, eventually producing a purely brown horse.\\n\\nTherefore, our ECDM's generated image does adjust with the concept probability vector in generation. *This further validates that our model does not only learn a deterministic mapping.* We have incorporated your valuable insight and further discussions into **Appendix B.1**. \\n\\n**Q2: In text-to-image generation, the method generates a deterministic probability vector from text?**\\n\\nThank you for your further clarification. **The concept probability vector is also a probabilistic vector from text.** The concept probability depends on both image and instruction on the training and generation process. We refer the reviewer to our **response to Q3** below for further details.\"}", "{\"title\": \"[2/2] Thank you for your encouraging and valuable comments.\", \"comment\": \"**W2. \\\"Given a binary concept labels set, I am wondering the optimal output of the concept energy model with input y? It seems that a binary output is also expected from y to minimize the loss? Can you provide any explanation why it would not learn a binary vector given the target is binary?\\\"**\\n\\nThis is a good question. Our ECDM's formulation and learning process goes beyond a simple binary logical mapping; instead, they involve probabilistic interactions among concepts, instructions, and generated images. \\n\\nAs shown in the **response to W1** above, given the *same text prompt* (e.g., \\\"A photo of the animal Polar Bear\\\"), our ECDM can generate *different concept probabilities* according to the *different generated images*. Therefore, it is not a simple binary logical mapping from the text prompt (and its embedding) to the concepts; they also *depend on the images*. 
\\n\\nFormally, we model the joint energy function as: \\n$E_{\\\\mathbf{\\\\psi}}^{joint}({\\\\mathbf{x}},{\\\\mathbf{c}},{\\\\mathbf{y}}) \\\\triangleq E_{\\\\mathbf{\\\\psi}}^{concept}({\\\\mathbf{x}},{\\\\mathbf{c}}) + \\\\lambda_m E_{\\\\mathbf{\\\\psi}}^{map}({\\\\mathbf{c}},{\\\\mathbf{y}})$, \\nwith the mapping energy function \\n$E_{\\\\mathbf{\\\\psi}}^{map}(\\\\mathbf{y},\\\\mathbf{c}) = D_{uw}(\\\\mathbf{u},\\\\mathbf{w})$, \\nand concept energy function \\n$E_{\\\\mathbf{\\\\psi}}^{concept}(\\\\mathbf{x},\\\\mathbf{c}) \\\\triangleq \\\\mathbb{E}_{\\\\mathbf{x}, \\\\epsilon \\\\sim \\\\mathcal{N}(\\\\boldsymbol{0}, \\\\boldsymbol{I}), t} [ \\\\left\\\\| \\\\epsilon - \\\\epsilon _\\\\theta(D_c(\\\\mathbf{c}),\\\\mathbf{x}_t, t) \\\\right\\\\|^2_2 ]$. \\n+ Therefore, given the input $\\\\mathbf{y}$, the optimal concepts $\\\\mathbf{c}$ depend on not only mapping energy function $E_{\\\\mathbf{\\\\psi}}^{map}(\\\\mathbf{y},\\\\mathbf{c})$ but also the concept energy function $E_{\\\\mathbf{\\\\psi}}^{concept}(\\\\mathbf{x},\\\\mathbf{c})$. Since the inferred concepts $\\\\mathbf{c}$ would also change with respect to the image $\\\\mathbf{x}$, the resulting $\\\\mathbf{c}$ for each concept will be a real value between $0$ and $1$. \\n+ Since different images $\\\\mathbf{x}$ may have different sizes, concepts such as \\\"big\\\" are actually real values from $[0,1]$, as you mentioned. With our ECDM inferring concepts $\\\\mathbf{c}$ from both the input text $\\\\mathbf{y}$ and the image $\\\\mathbf{x}$, the resulting $\\\\mathbf{c}$ for each concept will be a real value between $0$ and $1$.\\n\\nLast but not least, we would like to thank you again for your insightful comments and for keeping the communication channel open. Please do not hesitate to let us know if you have any follow-up questions. We will be very happy to provide more clarifications.\"}", "{\"metareview\": \"The submission presents a generative model for image and \\\"concept\\\" jointly for a given instruction label, in hope to support better interpretability, diagnosis, instruction following, and modified/controlled image generation, through the various derived conditional distribution samplers. Reviewers acknowledge the general idea and the capabilities, while also raised some insufficiencies in discussions on related work, only using a pretrained stable diffusion, and asked for more finer demonstration on the controllability and more evaluation metrics. The authors have addressed some, but there remain concerns (e.g., only using one pretrained model) and the reviewers did not update their neutral-to-negative scores.\\n\\nIn addition, I also found the mathematical formulations confusing. I posted my concerns to the authors: \\\"In Eq. (5), is the E^concept model to be used as the epsilon model in the diffusion formulation? If so, why it does not depend on t (while the right-most expression contains t)? In Eq. (6), how does the l.h.s still a function of x (and t, if you answered yes to the first question) while you take expectations w.r.t x and t on the r.h.s? Particularly, how can you recover Eq. (5) from the definition in Eq. (6)? 
In the following description on training the energy model, if you are training it using maximum likelihood, why can you only take the expectation of the energy under the data distribution, while omitting the expectation of the energy under the \\\"model\\\" distribution (the distribution that the energy function defines) from the loss, which should be there in the standard energy-based training loss? Is there any specialty here?\\\" The authors replied, but without sufficiently detailed justification on the equations, deductions, and methods, and my concerns persist. In addition, Eq. (2) also seems inaccurate as there should be an appropriate time weighting to the noise prediction loss at each t to make it an ELBO. Therefore, the submission does not seem like a serious draft for publication. I hence recommend a reject.\", \"additional_comments_on_reviewer_discussion\": \"(Already covered in Metareview)\"}", "{\"summary\": \"This paper introduces Energy-based Conceptual Diffusion Models (ECDMs), which integrate diffusion models and Concept Bottleneck Models within an energy-based framework. The key contribution is providing a unified approach for concept-based generation, interpretation, debugging, intervention, and imputation. The method enables both high-quality image generation and human-interpretable control through concepts. The authors demonstrate effectiveness on three datasets (CUB, AWA2, CelebA-HQ) through quantitative and qualitative evaluations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Novel integration of concept bottleneck models with diffusion models through an energy-based framework\\n\\n2. Comprehensive theoretical framework with detailed proofs\\n\\n3. Multiple practical applications (generation, interpretation, debugging, intervention)\\n\\n4. Strong empirical results across different datasets\\n\\n5. Clear improvement over baseline methods in both generation quality and concept accuracy\", \"weaknesses\": \"The paper fails to acknowledge pioneering work on energy-based diffusion models, particularly \\\"Diffusion Recovery Likelihood\\\" and \\\"Cooperative Diffusion Recovery Likelihood\\\" and also fail to include a wide range of works using EBM as compositions such as \\\"a theory of generative convnet\\\", \\\"Implicit Generation and Generalization in Energy-Based Models\\\" etc.\", \"questions\": \"1. How does the method scale with increasing number of concepts?\\n\\n2. What is the computational overhead compared to standard diffusion models?\\n\\n3. Could the framework be extended to handle continuous concept values rather than binary?\\n\\n4. How robust is the concept interpretation when handling out-of-distribution samples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[2/2] Thank you for your continued engagement and comments with our work.\", \"comment\": \"**Q3: Why the objectives won't lead to optimal solutions in simple binary form in the training and generation?**\\n\\nThank you for your detailed follow-up question. 
We would like to kindly clarify that **in the training and generation stage, the optimal solution is still not in simple binary form**.\\n\\n**Generation.** During generation, our model focuses on **minimizing the joint energy** $E_{\\\\mathbf{\\\\psi}}^{joint}({\\\\mathbf{x}},{\\\\mathbf{c}},{\\\\mathbf{y}})$, i.e., \\\"given instruction ${\\\\mathbf{y}}$, find image generation ${\\\\mathbf{x}}$ and concept set ${\\\\mathbf{c}}$\\\" that minimize this joint energy. In this way, ${\\\\mathbf{x}}$ and ${\\\\mathbf{c}}$ will *mutually affect each other*. \\n\\nFor example, our full model's generation are equivalent to: \\n - (A) given ${\\\\mathbf{y}}$, generate ${\\\\mathbf{c}}$; \\n - (B) given ${\\\\mathbf{c}}$, generate ${\\\\mathbf{x}}$; \\n - (C) given ${\\\\mathbf{y}}$ and ${\\\\mathbf{x}}$, adjust ${\\\\mathbf{c}}$; \\n - (D) *repeat step (B) and (C)* until convergence. \\n\\nWhile the generated ${\\\\mathbf{c}}$ in step (A) may be binary, ${\\\\mathbf{c}}$ is not binary after alternating between step (B) and (C). The key is that the generation of ${\\\\mathbf{x}}$ is stochastic (probabilistic), i.e., the generated images will differ depending on the initial noise of the diffusion model. Therefore, the final $\\\\mathbf{c}$ depends on not only $\\\\mathbf{y}$ **but also ${\\\\mathbf{x}}$**, making it probabilistic and non-binary. \\n\\n**Training.** Similarly, in training, our model focuses on **optimizing the estimation of the joint energy**, instead of simply producing binary predictions.\\n\\nFor the **mapping energy network $E(\\\\mathbf{y}, \\\\mathbf{c})$**, the optimization target is to minimize the energy estimate (a scalar value) for each correct instruction-concept pair and vice versa using contrastive divergence, rather than predicting a binary output. Consequently, we aim for optimal concept embeddings that minimize the energy of the correct input combination to reduce loss rather than directly predicting the correct binary label.\\n\\nRegarding the **concept energy network $E(\\\\mathbf{x}, \\\\mathbf{c})$**, the input consists of the combined concept embedding $\\\\mathbf{v}_k = c_k \\\\cdot \\\\mathbf{v}_k^{(+)} + (1-c_k) \\\\cdot \\\\mathbf{v}_k^{(-)}$ and the corresponding correct images $\\\\mathbf{x}$, rather than a binary prediction. Similarly, we optimize the concept embedding so that the pretrained energy estimator assigns the lowest energy to the correct concept-image combination, ensuring accurate compatibility estimation between concepts and images.\\n\\nThis training process is non-binary under the energy-based formulation, forcing the network to learn more complexed relationships among the instructions, concepts, and image generations. As a result, the training process does not learn a binary vector during training because (1) the target is contrastive energy minimization, not binary classification; (2) the joint minimization of both energy networks enforces the concept embedding to be a complex, non-binary vector. \\n\\n\\n**Conclusion.** Therefore, if we train and generate using $E_{\\\\mathbf{\\\\psi}}^{map}(\\\\mathbf{y},\\\\mathbf{c})$ separately, the output ${\\\\mathbf{c}}$ will be binary; if we train and generate considering $E_{\\\\mathbf{\\\\psi}}^{joint}({\\\\mathbf{x}},{\\\\mathbf{c}},{\\\\mathbf{y}}) \\\\triangleq E_{\\\\mathbf{\\\\psi}}^{concept}(\\\\mathbf{x},\\\\mathbf{c}) + \\\\lambda_m E_{\\\\mathbf{\\\\psi}}^{map}(\\\\mathbf{c},\\\\mathbf{y})$ jointly, the final ${\\\\mathbf{c}}$ will not be binary. 
This is true both for training and generation.\\n\\nIf the model had only learned fixed deterministic and binary mappings as the optimal solution during the training process, the **additional experiments** on \\n+ **concept-based intervention** in the **response to Q1** and **Figure 6 of Appendix B.1** and\\n+ **energy matching conceptual interpretation** in **Appendix B.2**\\n\\ncould not have been successful, as both of they heavily rely on leveraging non-binary concept-image interactions to perceive and derive concept probabilities. \\n\\nAgain, we are immensely grateful for your follow-up comments and keeping the communication channel open. If you feel that we have adequately addressed your concerns, we would appreciate your consideration in adjusting our score.\"}" ] }
BUpdp5gETF
Different Rates for Different Weights: Decoupled Relative Learning Rate Schedules
[ "Jan Ludziejewski", "Jan Małaśnicki", "Maciej Pióro", "Michał Krutul", "Kamil Ciebiera", "Maciej Stefaniak", "Jakub Krajewski", "Piotr Sankowski", "Marek Cygan", "Kamil Adamczewski", "Sebastian Jaszczur" ]
In this work, we introduce a novel approach for optimizing neural network training by adjusting learning rates across weights of different components in Transformer models. Traditional methods often apply a uniform learning rate across all network layers, potentially overlooking the unique dynamics of each part. Remarkably, our introduced Relative Learning Rate Schedules (RLRS) method accelerates the training process by 13.6%, particularly in complex models such as the Mixture of Experts (MoE). Hyperparameters of RLRS can be efficiently tuned on smaller models and then extrapolated to 27x larger ones. This simple and effective method results in a substantial reduction in training time and computational resources, offering a practical and scalable solution for optimizing large-scale neural networks.
[ "learning rate", "transformer", "mixture of experts", "LLM" ]
https://openreview.net/pdf?id=BUpdp5gETF
https://openreview.net/forum?id=BUpdp5gETF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xn4JTVbGu6", "tMocNdBPHV", "s1wlf9NGaP", "eXYR0xz6EQ", "IGaiZL6QRM" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730455806716, 1730683855447, 1729335310899, 1730488798532, 1733264701851 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3807/Reviewer_T5Xq" ], [ "ICLR.cc/2025/Conference/Submission3807/Reviewer_ciJG" ], [ "ICLR.cc/2025/Conference/Submission3807/Reviewer_V9H1" ], [ "ICLR.cc/2025/Conference/Submission3807/Reviewer_ETGF" ], [ "ICLR.cc/2025/Conference/Submission3807/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes using different peak learning and final learning rates for different types of layers in GPT training across both dense and mixture-of-expert configurations. They show that the adjustments can be tuned on a small scale and transferred to larger models, providing a speedup (reduced tokens for a given loss) on the order of 10-20% in both cases. The paper then analyzes and interprets some of the tuned adjustment values.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Problem tackled is practical and of interest to the community.\", \"The core idea is interesting.\", \"The paper is clear and easy to follow.\", \"The authors tune their hyperparameters well.\"], \"weaknesses\": [\"The overall novelty of the work might be limited given that tuning learning rates for each component of the network is already well known e.g. in muP works [1] [2].\", \"The paper doesn\\u2019t answer the question of whether the final lr values are truly needed. Figure 4 suggests that training is mostly sensitive to the peak lr of different components, but this is exactly what existing work does.\", \"The experiments are relatively small in scale and could be better ablated.\", \"[1]: https://arxiv.org/abs/2407.05872\", \"[2]: https://arxiv.org/abs/2407.17465\"], \"questions\": \"Recommendation: Overall I recommend rejection based on seemingly limited novelty over existing approaches combined with the lack of other significant contributions (e.g. a large-scale validation or a theoretical analysis of these approaches could be valuable despite the method not being novel).\", \"suggestions\": [\"I recommend trying to add stronger evidence that the final learning rate values are needed to justify your approach over existing work.\", \"I somewhat doubt that schedules for short runs transfer well to longer runs. For example weight decay can be seen as an (effective) learning rate scheduler but it only makes a significant difference for longer runs. It might be more convincing if you could provide an example or arguments to support this approach.\", \"In Figure 1 you show the update size over time and use this to justify the learning rate schedules. Some other optimizers like Lion would fix the update size. Would you expect different schedules for different parameters to help in this case?\", \"If you pursue using schedules for each parameter it would be nice to see a greater difference, e.g. something like different warmup lengths or earlier decay with a WSD. This would be a greater differentiator from the peak lr.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Conventionally, when training a transformer model, the learning rates and the learning rates schedules of parameters in the entire model are set the same. 
However, recent works have shown that the weight updates of different modules in a Transformer model have varied dynamics, indicating that letting all modules share the same learning rate schedule may decrease the stability of the training process. Motivated by this, this paper has proposed a module-adaptive learning rate schedule called Relative Learning Rate Schedules (RLRS), which fine-tunes the learning rate schedule for each module adaptively using a proposed approach called Local Search. The paper has also suggested that the RLRS's of a smaller model can be preserved and applied to a larger model without much loss of performance. Experiments have been conducted to show the efficiency and effectiveness of the proposed RLRS.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Has a valid and strong motivation.\\n2. The finding that the RLRS's of a smaller model can be transferred (with slight adjustment) to a larger model without losing much fitness on the larger model is intriguing and practically meaningful.\\n3. The experiments are comprehensive and do show the effectiveness and efficiency of the proposed RLRS.\", \"weaknesses\": \"1. The validity of the claim that the hyperparameters of the small model remain robust in the large model is not convincingly demonstrated.\\n2. The hyperparameter tuning process is based on plain searching, which makes the algorithm inefficient in practice. Also, in this context, it's not convincing enough to say that the improvement of RLRS comes from using different learning rate schedules for different modules. It's simply because we have introduced more hyperparameters so that we have more degrees of freedom to fine-tune the training process, just like in a standard deep learning task, if the model has more parameters, it's natural to predict that the training performance of the model will be much improved.\", \"questions\": \"1. Line 123 to 124 on page 3. Here the authors indicate that they demonstrate that the same set of relative learning rates remains robust across a range of model sizes \\\"in the next section\\\". However, in the next section 2.1, I don't see any demonstrations of the claim, but only introductions of the relative LR adjustment algorithms. Also, although Figure 4 does somehow show that the performance of $\\\\lambda_{small}$ extrapolated to large models remains optimal or nearly optimal, it still hasn't demonstrated the \\\"across 'a range of model sizes'\\\" part of the claim.\\n2. Algorithm 1 and Algorithm 2 are named the same: \\\"Relative LR Adjustment Algorithm\\\". Although they can be distinguished by simply indicating Algorithm 1 or Algorithm 2, it's still better to give them distinct names since they seem to be an essential part of the paper.\\n3. Line 159. What \\\"substantial gains\\\" exactly does applying relative rates to an already tuned base model offer?\\n4. Algorithm 3, step 2. How is the set of the 4 values derived?\\n5. Algorithm 3, step 3. How is an \\\"improvement\\\" defined? Also, speaking of improvement, I believe we are comparing the current experiment with the previous one. Then in this context, how is the very first experiment set up and run?\\n6. Section 3.3. This section indicates that after we have obtained the best learning rate schedules for both the RLRS model and the baseline model, the former one can be trained faster than the latter one. But is the process of fine-tuning taken into account? 
Meaning, will the process of fine-tuning in RLRS take much longer than that in the baseline, since there are more hyperparameters to be tuned, and the process seems to be based on plain searching?\\n7. Should we care about the changes in the test loss between the RLRS-trained model and the baseline-trained model?\\n8. Section 4.1. Is the analysis in this section conducted on a small model or a large one?\\n9. Section 4.1, Attention. Is there an explanation or an insight behind the behavior of the Attention module, i.e., that the learning rate remains unchanged? Also, the authors haven't analyzed the behavior of learning rates in the Attention module in the dense models, which is illustrated in Figure 5.\\n10. Table 4. The start value of Embedding should be 3.3, right, according to lines 279 to 280?\\n11. Whenever mentioning \\\"performance\\\" or \\\"loss\\\" etc., please specify which kind of performance or loss (training or test), e.g., in Figure 2 and other places.\\n12. Does the stability of the training process necessarily lead to better eval loss?\\n13. Line 375. The authors claim that Figure 4 also shows the importance of tuning the relative learning rate for individual modules. Please explain this claim from the plots in Figure 4.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes to have different learning rates for different parts of the transformer model. The authors propose a simple heuristic for deciding what the best learning rate is for each component. Their method, RLRS, improves training speed and stability. The proposed approach shows a 13.6% acceleration in training time and can scale effectively to models up to 27 times larger without the need for extensive re-tuning of hyperparameters.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"\\u2022 I think the topic the authors are aiming to address is interesting, and it makes sense that different learning rates for different components would help to improve convergence and performance.\", \"weaknesses\": \"\\u2022 The proposed method is not theoretically motivated and appears to be a very crude heuristic for selecting the learning rate. Algorithm 3, which discusses how the hyperparameters are found, suggests that the authors are just doing a kind of random search, where you multiply the values by a factor from (x0.2 - x5).\\n\\n\\u2022 There is very little background or discussion covering the central topics of this paper. For example, there is almost no time spent discussing contemporary hyper-parameter optimization techniques. Furthermore, the idea of giving different components different learning rates is not a new one \\u2013 in fact it is natively supported in libraries such as PyTorch.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Decoupled Relative Learning Rate Schedules (RLRS) for Transformers, assigning distinct learning rates to different components like Embedding and Attention layers. The authors suggest tuning these rates on smaller models and then applying them to larger ones to save on computational costs. Experiments with dense and Mixture of Experts (MoE) models show improved training speed and stability, particularly in large-scale MoE setups. 
The results indicate that RLRS can scale efficiently, offering an approach to optimize Transformer training at larger scales.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The proposed method has been applied to large model scales, which is practical if the implementation and results can be verified.\", \"weaknesses\": \"1. (**Limited Novelty of proposed approach**) The authors assume that relative learning rates tuned on smaller models will perform equally well on much larger models. A similar idea has been proposed in [1], but there is no discussion of comparing it with related work.\\n\\n2. The rationale for introducing separate learning rate schedules for different components is not well-explained. The rationale and motivation mentioned in the paper are only listed below, which is not convincing.\\n> At the same time, modern Deep Learning architectures are not homogeneous, with different parts on the training phase, which can be problematic in some cases. For example, in Mixture of Experts (MoE) models, the Router often stabilizes early in training, leading to deterministic routing to the Experts\\n\\n3. The paper does not sufficiently compare its approach with established methods for adaptive learning rates, such as [2]. \\n\\n\\n**Reference**\\n[1] Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer\\n\\n[2] Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training\", \"questions\": \"1. could the author provide the training curves or the measurement of training time to verify the time reduction reported in the paper?\\n2. What is the computational overhead associated with tuning the relative learning rates on smaller models? A breakdown of the resources required to tune these rates on smaller models and transfer them to larger ones would clarify the practical efficiency of this approach in real-world applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"**Summary**\\n\\nWe thank all the reviewers for their thorough feedback and valuable insights. Based on their comments, we recognize that our paper requires substantial revisions, particularly in terms of writing. Therefore, we have decided to withdraw the submission. We understand the concerns raised and provide detailed responses to reviewers below, outlining our planned improvements. We acknowledge that many significant issues mentioned in the reviews stemmed from a lack of adequate explanations and comparisons in our paper. We apologize for the rushed work and aim to do better in the future. We are grateful for all the feedback, as it will significantly help us improve the paper. We maintain our belief in the presented research work while working on further revision.\\n\\n**Regarding strengths**\\n\\nWe sincerely appreciate the positive feedback and constructive insights provided by the reviewers. We are pleased that Rev. 
ciJG recognizes our \\\"valid and strong motivation\\\" and finds our discovery that \\\"the RLRS's of a smaller model can be transferred (with slight adjustment) to a larger model without losing much fitness\\\" both \\\"intriguing and practically meaningful.\\\" We also value the acknowledgment of our \\\"comprehensive experiments\\\" that demonstrate the \\\"effectiveness and efficiency\\\" of our proposed RLRS. Rev. ETGF highlights the practical application of our method \\\"to large model scales,\\\" reinforcing its utility and relevance. Rev. T5Xq emphasizes the relevance of the problem we address and proper hyperparameter tuning. Similarly, Rev. V9H1 supports the \\\"interesting\\\" topic and the logic behind our approach, emphasizing that varying learning rates \\\"makes sense\\\" for improving convergence and performance. We are encouraged by these endorsements and remain committed to advancing this line of research.\\n\\n**Regarding Tensor Programs V**\\n\\nSome reviewers suggested that no comparison is made between our work and Tensor Programs. However, such a comparison can be found in Section 5.1 (\\u201cCombination with Tensor Programs\\u201d). While at first glance our work might seem similar to muP / Tensor Programs, we believe there are some crucial differences. Firstly, we do not aim to transfer learning rate from small to large models - while differently-sized models can have different optimal learning rates, our relative learning rates (the coefficients) can be transferred from small to large models while preserving a large part of the improvements. Furthermore, we believe Tensor Programs and RLRS could be unified, as they serve different purposes, enabling both faster convergence and a fuller transfer of hyperparameters.\\n\\n**Regarding adaptive optimizers like Sophia**\\n\\nOne reviewer suggested comparing our method to adaptive normalization, like Sophia, which dynamically adjusts the learning rate during training based on feedback from gradients or other training dynamics. However, our approach takes a different direction: instead of dynamically adjusting learning rates based on gradient behavior, we use fixed relative scaling factors for learning rates across different components, tuned on a smaller proxy model. Moreover, our method is designed to work with any underlying optimization algorithm. Unlike adaptive methods that strive to find better minima, our goal is to improve training efficiency and performance specifically for Transformers and Mixture of Experts (MoE) models. As such, the proposed method is orthogonal to the existing literature on adaptive learning rates.\\n\\n**Regarding the local search algorithm**\\n\\nReviewers have highlighted a few issues with the local search algorithm used to tune the relative learning rates, RLRS, such as its inefficiency due to being a simple search algorithm and the fact that it has its own hyperparameters. At the same time, the reviewers perceive the lack of comparison of our approach with other contemporary hyperparameter search methods as a weakness of our work. The local search algorithm we employ is not our main contribution and can easily be replaced by another technique. It is simply an example of an algorithm that could be used to find the relative learning rates. We will ensure to make that clearer in future versions of our manuscript.\\n\\nFurthermore, one reviewer posits that simply introducing more hyperparameters will trivially allow us to improve the quality of the training procedure. 
While this critique could be applied if we did not extrapolate our experiments, we argue that it is not valid in our case. We transfer the relative learning rates fitted on small experiments to much bigger models, trained on many more tokens, preserving the speed-up to a large extent. Moreover, we show that our technique interacts with other hyperparameters in a non-trivial manner by showing it improves the final loss across a range of learning rates (Figure 3), while also enhancing the training stability (Figure 2).\\n\\n**Regarding the motivation and the rationale**\\n\\nWe appreciate the feedback regarding the need for clearer motivation and rationale. Our work addresses the critical role of the learning rate in Transformer models by decoupling and tuning relative learning rate schedules for different model components. Furthermore, we propose a scalable approach where relative learning rates, tuned on small models, can be effectively applied to much larger models. This eliminates the need for extensive hyperparameter searches and results in significant computational savings. We will strive to improve the clarity of our writing and better articulate the motivation behind our approach in the next version of the paper.\\n\\n**Regarding the scale of experiments**\\n\\nSome reviewers suggested that the claim about the transferability of the relative learning rates (RLRS) to larger models is not convincingly demonstrated. While it is true that many techniques require validation at a proper scale, we believe that we have sufficiently demonstrated our findings. We transfer the RLRS from models with 210 million parameters to models with 5.67 billion parameters (a 27x increase), and from training runs on 1.3 billion tokens to runs on 20 billion tokens (a 15x increase). This constitutes over 400x increase in training FLOPs, which we consider both convincing and challenging to exceed on an academic budget. We believe that placing greater emphasis on the scale increase between the source and target training runs of the transfer could make our claims more convincing. We acknowledge that we have not made this point clear and will address this issue in future versions of the manuscript.\"}" ] }
BUj9VSCoET
Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping
[ "Ziye Huang", "Haoqi Yuan", "Yuhui Fu", "Zongqing Lu" ]
Universal dexterous grasping across diverse objects presents a fundamental yet formidable challenge in robot learning. Existing approaches using reinforcement learning (RL) to develop policies on extensive object datasets face critical limitations, including complex curriculum design for multi-task learning and limited generalization to unseen objects. To overcome these challenges, we introduce ResDex, a novel approach that integrates residual policy learning with a mixture-of-experts (MoE) framework. ResDex is distinguished by its use of geometry-agnostic base policies that are efficiently acquired on individual objects and capable of generalizing across a wide range of unseen objects. Our MoE framework incorporates several base policies to facilitate diverse grasping styles suitable for various objects. By learning residual actions alongside weights that combine these base policies, ResDex enables efficient multi-task RL for universal dexterous grasping. ResDex achieves state-of-the-art performance on the DexGraspNet dataset comprising 3,200 objects with an 88.8% success rate. It exhibits no generalization gap with unseen objects and demonstrates superior training efficiency, mastering all tasks within only 12 hours on a single GPU. For further details and videos, visit our project page.
[ "dexterous grasping", "residual policy learning", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=BUj9VSCoET
https://openreview.net/forum?id=BUj9VSCoET
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zS5M2GfOU9", "yaDZs9FFhs", "vA83P3BII3", "uu7DmaC2Xa", "sCVGjlRvkw", "nNCtKjWjzi", "lWOsTaksEL", "il65vuvEZq", "iiUIV4cIKc", "eD0fIfWGa1", "dk377w6ZjI", "XJvPHKg4fn", "WkuuyFmwYQ", "VsckLh6jnY", "S4JmrblMDi", "NNn71FRrS2", "JWw4rGFLrE", "ISTTYK0k5s", "HEAz6GgVZT", "1FjMk8GAGD" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730271261784, 1732243938674, 1732342495332, 1732638471752, 1730568670884, 1732335718880, 1732243723815, 1737524286472, 1734792637410, 1730673769984, 1732684631760, 1729393847032, 1732243882107, 1732602697575, 1732342667393, 1732259179073, 1732243571604, 1732243663616, 1732243777892, 1732565258080 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_3ip3" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_HhuW" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_HhuW" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_BKwA" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13867/Area_Chair_Hisd" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_8Don" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_BKwA" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_3ip3" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Authors" ], [ "ICLR.cc/2025/Conference/Submission13867/Reviewer_8Don" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes residual learning with an MoE method for generalized grasping in simulation. The proposed method includes a set of k geometry-unaware base policies and a hyper policy that learns the weights of each base policy. It also includes a residual action based on the geometry and position of the target object and the robot's proprioception.\\n\\nThe proposed method avoids complex curriculum design and can be trained within 12 hours on a single 4090 GPU. Its performance peaks SOTA methods and shows no performance drop when generalized to unseen objects and categories. 
All claims are supported by solid experimental evidence from simulation.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"This paper proposed a novel combination of residual learning and MoE for general dexterous grasping.\", \"The proposed method significantly outperforms SoTA grasping methods on a large-scale simulation benchmark.\", \"The proposed method avoids complex curriculum design and observes almost zero performance drop when generalizing to unseen objects and categories.\", \"The proposed method can be trained within 12 hours on a single 4090 GPU.\", \"Extensive experiments in simulation support the authors\\u2019 claims.\", \"Overall, the paper is well-organized and written.\"], \"weaknesses\": [\"The authors did not discuss the proposed method's limitations and failure cases. It will be interesting to see and discuss what cases still challenge the proposed method.\", \"There is no real robot experiment to test if the learned policy adapts well to noises and challenges in the real world.\", \"In line 353, the authors wrote, \\u201cIncreasing k leads to a slight performance gain.\\u201d This is not true, as the proposed method performs best when k=4 and the performance drops with k larger than 4. It would be better to discuss why the model performs best when k=4.\", \"When reading subsections 4.1 and 4.2, I am confused about whether the base policy is trained on a single object or multiple objects. The paper contains both descriptions. This confusion is quickly resolved when I discover MoE in subsection 4.3. I suggest specifying how the base policy is used early in subsection 4.1 to avoid this confusion in the future.\"], \"questions\": [\"In line 064, what do you mean by \\u201cbase policies that only observe \\u2026 3D positions of objects to infer the object location\\u201d? What\\u2019s the difference between the 3D positions of objects and the object location?\", \"When training the base policy, do you train it with randomized object positions? What about orientations?\", \"Tables 1 and 2 suggest that the proposed method\\u2019s performance peaks with four base policies. Why is it not the case that more base policies always yield better performance?\", \"What is the setup for the vision-based policy? How many cameras are used? How are the cameras placed? Are there any treatments for the observed point cloud before feeding it into the policy? Is the vision-based policy evaluated in simulation or on real robots?\", \"Around line 421 \\u201c\\u2026 and we evaluate their performance on the training set\\u201d, does the training set refer to the training set of the ablation study (i.e., the six objects), or the training set of DexGraspNet?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q5.** How is $g$ in equation 2 sampled? Will randomly sampling $g$ cause gradient interference?\\n\\n**A5.** Each grasping proposal $g$ is associated with an object in the dataset and is integrated into the reward to guide the policy towards effective grasping poses. For each training episode, we uniformly sample one grasping proposal per object to compute the reward function. This approach is a commonly employed method of reward shaping. 
Gradient interference typically arises from simultaneous training across objects, which is unrelated to the method of sampling $g$ for each object.\\n\\n**Q6.** In lines 160 and 237, are $q$ and $q_t$ hand joint positions or hand joint angle configurations?\\n\\n**A6.** In our manuscript, $q$ and $q_t$ denote the joint positions of the hand, as we have explicitly defined in Section 3.1. We did not use the term \\\"joint angle configurations\\\". These terms refer to the same concept in robotics. \\n\\n**Q7.** Does the limitation that \\\"the base policy typically provides only a single grasping pose\\\" arise due to the use of argmax for base actions?\\n\\n**A7.** In reinforcement learning, modeling continuous actions with Gaussian distributions typically results in unimodal behaviors, irrespective of whether argmax or sampling methods are used. Furthermore, since the base policy is trained to grasp a single object, it does not inherently develop novel grasping poses when exposed to other objects during the multi-task training stage. This limitation has motivated our introduction of a mixture-of-experts base policies to enhance the diversity of grasping poses.\\n\\nWhile some novel architectures, such as Diffusion Policies [1], are capable of achieving multi-modal behaviors, integrating these architectures with our method could be explored as a future direction but is beyond the scope of our current research.\\n\\n[1] Chi, Cheng, et al. \\\"Diffusion policy: Visuomotor policy learning via action diffusion.\\\" The International Journal of Robotics Research (2023)\\n\\n**Q8.** Questions about combining residual policy learning with MoE, the hyper-policy learning, and the collapse issue. \\n\\n \\n**A8.** In our method, combining residual policy learning with a mixture-of-experts (MoE) is designed to alleviate the exploration burden when training on a diverse set of objects. Our findings show that even a single geometry-agnostic base policy can generalize effectively across a broad range of objects, substantiating this approach.\\n\\nThe primary goal of introducing MoE is to enhance the diversity and naturalness of grasping poses. Base policies trained on objects with varying geometric features develop distinct grasping styles, crucial for handling objects of diverse shapes and sizes effectively.\\n\\nFor the hyper-policy, $\\\\pi^H_\\\\phi$, it treats the base policies as part of the environment dynamics within the RL framework, enabling it to dynamically generate weights $\\\\lambda_t$ that maximize returns. This process does not require direct observation of the outputs produced by the MoE base policies, akin to existing work in MoE and modular networks [2,3].\\n\\nFor the question about whether $\\\\lambda$ collapses, we provide an analysis of the learned $\\\\lambda$ in Appendix B.3. Our results demonstrate that more than two base policies are assigned positive weights in all experimental settings, and $\\\\lambda$ varies across different objects, indicating that the hyper-policy does not collapse.\\n\\n[2] Cai, Weilin, et al. \\\"A survey on mixture of experts.\\\" (2024).\\n[3] Yang, Ruihan, et al. \\\"Multi-task reinforcement learning with soft modularization.\\\" NeurIPS (2020)\"}", "{\"comment\": \"Thank you for your positive feedback! We have updated the term \\\"joint positions\\\" to \\\"joint angles\\\" throughout the paper to enhance clarity.\"}", "{\"title\": \"Feedback from Reviewer HhuW\", \"comment\": \"Thank you for your efforts in addressing my concern. 
The rebuttal has addressed my concern, and I remain in favor of accepting this paper.\"}", "{\"summary\": \"For universal dexterous grasp execution, this work introduces a residual RL policy based on a mixture of experts for the base geometry-unaware policies. The major motivation is to address the training inefficiency and limited generalization issues in the previous work. Technically, the authors propose to combine residual RL and the mixture of experts to tackle the gradient interference issues in training multi-task RL and the limited diversity of using a single base policy. The simulated benchmarking results and ablation study demonstrate superior generalization performance against the baseline.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"This work tackles an important problem in learning-based grasping, i.e., how to learn a performant policy for grasp execution;\", \"The creative combination of existing ideas to address the problems in the previous work is well-motivated and sensible;\", \"The idea of using geometry-unaware training to enhance the generalization of the base policy is interesting and meaningful;\", \"The presentation is easy to follow and possesses good readability;\", \"Comprehensive comparison and ablation study in the experiments.\"], \"weaknesses\": [\"The definitions of different reward functions are scattered across several sub-sections. It would be clearer for the reader if they could be grouped and discussed together.\", \"It would be clearer to reframe the technical contributions in the draft so that the readers can grasp the key idea more conveniently. This is because the main technical contribution is to develop a novel combination of existing techniques and demonstrate its effectiveness in learning generalizable grasping policies.\", \"There is no specific comparison on training efficiency. It would be nicer to also compare the training time with Unidexgrasp as the authors claim that the previous approach is inefficient, and this has been addressed by the proposed idea.\", \"In the experiment part, for conciseness, the ablation of different numbers of experts in Tables 1 and 2 can be taken out and put into the ablation study subsection.\"], \"questions\": [\"The actions from different base policies are summed together based on the predicted weights of the hyper policy. I am wondering about the rotation representation used in this summation as they lie in a different space than the Euclidean one.\", \"Is it seemingly contradictory to first perform geometry-aware clustering and then learn a geometry-unaware policy? In the end, the mixture of experts is geometry-aware. For the presentation part, it would be clearer to refine the texts for such differences. For the technical part, can they be merged into a single step in a more intelligent way?\", \"How is the part of grasp synthesis done?\", \"It seems that the number of experts doesn't represent the specific grasp styles as the model performs the best with only 4 experts, which is counter-intuitive for a dataset with more than 3k objects.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to rebuttal\", \"comment\": \"Thanks for the detailed response. For Q6, although the response helps clarify, I still find the expression \\\"hand's joint positions\\\" confusing because \\\"position\\\" usually means the 3D position (i.e. 
in $\\\\mathbb{R}^3$) for each of the joints instead of the actual DoF value as described.\\n\\nOther than that, I think most of my concerns are properly addressed. I am happy to raise my rating given that the authors integrate all the necessary changes discussed above.\"}", "{\"comment\": \"**Q6.** The confusion about geometry-aware clustering and geometry-unaware policies. Is it possible to merge them into a single step?\\n\\n\\n**A6.** In our framework, the geometry-unaware policy and geometric clustering serve **distinct purposes** and are presented in different sections. The geometry-unaware policy, discussed in Section 4.1, aims to minimize overfitting on object geometry, thereby enhancing generalization to unseen objects. This \\\"unawareness\\\" implies that the base policy does not directly observe object geometry nor is it influenced by geometry-specific rewards during its training.\\n\\nConversely, geometric clustering, introduced in Section 4.3, is utilized to develop a mixture of base policies capable of producing varied grasping styles. This strategy leverages geometric similarities to ensure that each base policy specializes in handling a specific group of object geometries, enriching the diversity of the grasping poses for multi-task learning.\\n\\nWe acknowledge the potential to integrate these stages into a more unified approach, perhaps by dynamically generating object clusters as base policies are trained, which could streamline the learning process. While the current two-stage design is simple and well-motivated for the scope of this paper, your suggestion provides a promising direction for future research.\\n\\n\\n**Q7.** How is the part of grasp synthesis done? \\n\\n**A7.** The grasping proposals are accompanied with the DexGraspNet dataset, provided by UniDexGrasp, as discussed in Section 3.1. Their synthesis process involves using a point-cloud-conditioned generative model and ContactNet. Please refer to UniDexGrasp [1] for further details.\\n\\n[1] Xu, et al. \\\"Unidexgrasp: Universal robotic dexterous grasping via learning diverse proposal generation and goal-conditioned policy.\\\" CVPR 2023.\\n\\n\\n**Q8.** The model performs the best with only 4 experts, which is counter-intuitive for a dataset with more than 3k objects.\\n\\n**A8.** We appreciate your concern that achieving optimal performance with only 4 experts might seem counterintuitive given the dataset's size. We would like to provide the following clarifications:\\n- The effectiveness of the hyper-policy primarily stems from its capacity for residual learning, rather than solely relying on weighting actions from base policies. Leveraging our geometry-unaware base policies, which exhibit strong generalization capabilities, the hyper-policy can efficiently learn residual actions for a multitude of tasks (3,200 objects).\\n- It is important to note that the diversity of grasping styles does not directly correlate with grasp success rates. Even with a single base policy providing relatively unimodal grasping poses, our model maintains high success rates (refer to Tables 5 and 6). 
While increasing the number of base policies enhances the model's capacity and aids the hyper-policy in learning diverse grasping poses for different objects, this expansion does not necessarily translate to improved success rates.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"ResDex is a framework for dexterous grasping that combines residual policy learning and a mixture-of-experts (MoE) approach. Each base policy is trained on clusters of objects using only proprioceptive data, while a hyper policy fuses these base policies with a small residual adjustment. This design circumvents complex multi-task curricula, showing 88.8% success on a 3,200-object dataset (DexGraspNet), no performance gap on unseen objects, and efficient training (12 hours on a single GPU).\", \"strengths\": \"-- The method effectively tackles multi-object grasping by mitigating gradient interference and leveraging geometry-unaware base policies.\\n\\n-- Demonstrates robust performance on unseen objects without additional fine-tuning.\\n\\n-- Achieves state-of-the-art results within 12 hours, surpassing prior curriculum-heavy approaches.\", \"weaknesses\": \"-- The method\\u2019s transferability to physical robots remains unverified.\\n\\n-- Insufficient discussion of cases where performance might degrade.\\n\\n-- The multi-stage approach (MoE + residuals) still adds overhead despite aiming to streamline training.\\n\\nAfter carefully reading the paper, the reviews and rebuttal discussions, the AC agrees with the reviewers on recommending to accept the paper.\", \"additional_comments_on_reviewer_discussion\": \"The weaknesses are described above. The authors have addressed most comments in rebuttal and the reviewers generally agree to accept the paper.\"}", "{\"summary\": \"This work combines concepts from residual policy learning, mixture of experts and student-teacher distillation to train generalizable grasping policies with a dexterous hand in simulation. The proposed method ResDex has multiple stages of reinforcement learning in simulation, including the training of proprioception only policies on different types of objects, training of a residual mixture of experts policy and training of policies with a curriculum of reward functions. The resulting policies are shown to achieve a high performance for grasping unseen object instances and categories.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The authors propose a residual mixture-of-experts policy for dexterous grasping, where the individual base policies are trained on different datasets. This is both a novel and a very interesting idea. In particular, the individual policies are trained on clusters of object geometries using only proprioceptive information, whereas the high level mixing policy is trained with state information.\\n2. The work further includes a curriculum of two reward functions: the first reward function encourages similarity to demonstrated grasps, whereas the second reward only encourages grasping success. This is a good trade-off between encouraging natural and optimal grasps.\\n3. The method is compared to prior work on the reinforcement learning of dexterous grasping and it is shown to achieve a higher zero-shot grasping success rate. Appropriate ablations for the various parts of the mixture policy are included.\", \"weaknesses\": \"1. The paper lacks any real-world experiments. 
Therefore, it is not clear if the specific design decisions made in this work, which increase the performance in the simulator, lead to a higher real-world grasping success rate. Further, real-world evaluation might be challenging because the Shadow Hand is very expensive. The work could be strengthened by also running experiments with the LEAP Hand, for example, which is more accessible.\", \"questions\": \"Is a sophisticated robot hand necessary to reach a high performance, or could similar performance be reached with simpler hands like the LEAP or Allegro?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your review and your positive feedback on our work!\"}", "{\"summary\": \"In this work, the authors introduce ResDex, which integrates residual policy learning with a Mixture-of-Experts (MoE) framework for learning universal dexterous grasping policies. The method addresses drawbacks in conventional methods such as UniDexGrasp and UniDexGrasp++, including limited generalization and complex multi-task curriculum design, by leveraging geometry-unaware base policies. ResDex achieves efficient training and superior generalization, performing state-of-the-art on the DexGraspNet dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Thorough experiments: The experiments are comprehensive, including comparisons with baselines and ablation studies that validate the importance of the method's components.\\n2. Performance: ResDex demonstrates state-of-the-art success rates on the DexGraspNet dataset, achieving 88.8% success in grasping unseen objects.\\n3. Clarity: The method is well-explained, and the presentation is enhanced by figures and tables that clearly illustrate key components of the approach.\", \"weaknesses\": \"1. Complexity of Approach: While simpler than UniDexGrasp and UniDexGrasp++, the combination of multiple base policies and MoE adds complexity, which goes against the original spirit of residual RL to reduce exploration burden.\\n2. Training Efficiency: The claim of training efficiency is not substantiated through controlled experiments. Although training times are given in the appendix, there is no comparison to baselines using comparable parameter counts and hardware.\\n3. Generalizability: While generalization is a key claim, the evaluation is limited to simulation on DexGraspNet data. In contrast, both UniDexGrasp and UniDexGrasp++ evaluated generalizability in different experimental settings, providing stronger support for their claims.\\n4. Minor Writing Issues: There are some citation issues (e.g., misuse of \\\\citep vs. \\\\citet in lines 101-102, line 296). Section 4.4 would benefit from a \\\\begin{algorithm}. Additionally, the term \\\"geometry-unaware\\\" could be more appropriately named \\\"geometry-agnostic.\\\"\", \"questions\": \"1. How is $g$ in equation 2 sampled? Will randomly sampling $g$ cause gradient interference?\\n2. In lines 160 and 237, are $q$ and $q_t$ hand joint positions or hand joint angle configurations?\\n3. One reason for using MoE is that \\\"the base policy typically provides only a single grasping pose for its training object.\\\" Does this limitation arise due to the use of argmax for base actions (line 252)? Will other multimodal policy training methods also address this?\\n4. 
Could the authors provide more insights into how combining residual policy learning with MoE improves learning? Given that residual RL typically combines known, stable controllers with RL, what role does the MoE play? How does $\\\\pi^H_{\\\\phi}$ learn to weight $a^B_{t,i}$ dynamically without having $a^B_{t,i}$ as input? Could $\\\\lambda_t$ collapse to a mean or one-hot value?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"**Q1.** The combination of multiple base policies and MoE adds complexity, which goes against the original spirit of residual RL to reduce exploration burden.\\n\\n**A1.** From the perspective of method simplicity, ResDex, consisting of two main components (the MoE and residual RL), is the **simplest design for universal dexterous grasping**. It avoids the complex curriculum design and proves to be insensitive to the choice of hyperparameters, as demonstrated in our experiments. Even without MoE, ResDex with a single base policy achieves a success rate of 94% (see Table 6, k=1), significantly outperforming prior methods.\\n\\nFrom the perspective of training complexity, integrating a mixture of base policies aims to increase the diversity of grasping, which **does not conflict with the spirit of residual RL**. Residual RL effectively addresses the exploration issue in multi-task optimization, improving learning efficiency whether using a single base policy (k=1) or multiple base policies within the MoE (k>1). Incorporating MoE **does not significantly increase the training cost**. As shown in Appendix A.3, adding a base policy takes only 20 minutes, which is significantly shorter than the 11 hours required to train the hyper-policy.\\n\\n\\n**Q2.** Regarding training efficiency, there is no comparison to baselines.\\n\\n**A2.** We apologize for not providing the training times of baselines initially. Unfortunately, the curriculum training code for UniDexGrasp and UniDexGrasp++ has not been released, which limited our ability to perform a direct time comparison.\\n\\n\\nHowever, we can provide a comparative analysis based on the number of training rounds, as detailed in their publications. UniDexGrasp implements a progressive training strategy \\u2014 starting with a single object, expanding to several objects within the same category, and finally covering the full training set \\u2014 requiring **three multi-task training stages**. UniDexGrasp++ is more complex, involving the training of **20 multi-task policies** along with **several distillation stages**.\\n\\nIn contrast, our method only necessitates the training of a **single multi-task policy** in one trial, using between **one to six low-cost, single-task base policies**. Our approach is not only simpler but also efficient. As demonstrated in our experiments, our method achieves high success rates even with just one base policy.\\n\\nRecognizing the importance of presenting a comparison of training efficiency to baselines, we have now included this analysis in Appendix A.3.\\n\\n\\n**Q3.** The evaluation is limited to the DexGraspNet dataset. \\\"Both UniDexGrasp and UniDexGrasp++ evaluated generalizability in different experimental settings\\\".\\n\\n**A3.** We respectfully clarify that this assessment may stem from a misunderstanding. 
In fact, **both UniDexGrasp and UniDexGrasp++ use only the DexGraspNet dataset for evaluation**. To the best of our knowledge, DexGraspNet remains one of the largest and most diverse datasets available for dexterous grasping tasks, encompassing over 3,200 objects with varied sizes and geometric complexities. This makes it an exceptionally suitable dataset for assessing the generalizability of grasping models.\\n\\nTo further demonstrate generalization beyond the DexGraspNet dataset, we tested our policy on **YCB objects** in a zero-shot manner, achieving a success rate of **65.55%**. This result highlights the strong generalization capabilities of our method with unseen datasets. It is important to note that 30% of YCB objects are very flat and thin, which significantly challenges tabletop grasping. Additionally, because the models of YCB objects are scanned from real-world objects, they often feature irregular, non-convex shapes. This leads to differences between visual observations and collision meshes in IsaacGym, increasing the difficulty for the grasping policy, which relies on visual point clouds but interacts with mismatched physical shapes.\\n\\n\\n**Q4.** About minor writing issues.\\n\\n**A4.** Thank you for highlighting these issues! We have corrected the citation errors you pointed out. \\nAdditionally, we acknowledge the benefit of including pseudocode in Section 4.4. Due to page constraints, we have added this pseudocode to Appendix A.1 and have made corresponding references in Section 4.4.\\nWe agree that \\\"geometry-agnostic\\\" is a more natural expression than \\\"geometry-unaware\\\". Accordingly, we have updated our terminology throughout the paper. Thank you once again for your valuable suggestions.\"}", "{\"comment\": \"Thank you! We sincerely appreciate your positive feedback.\"}", "{\"comment\": \"Thank you for your positive feedback on our work!\"}", "{\"comment\": \"Thank you for the detailed response. I remain positive on this paper.\"}", "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"**Q1.** The paper lacks real-world experiments.\\n\\n**A1.** We appreciate your comment regarding the necessity of real-world validation to demonstrate the practical applicability of our method. Currently, our research has focused on algorithmic enhancements for universal dexterous grasping within a simulated environment, aligning with the experimental setups used in prior studies such as UniDexGrasp, UniDexGrasp++, and UniDexFPM. Conducting experiments in the real world presents additional complexities, particularly the significant challenge of bridging the sim-to-real gap. We fully recognize the importance of this aspect and are committed to including real-world experiments in future work.\\n\\n**Q2.** The work could be strengthened by running experiments on LEAP Hand, which is more accessible.\\n\\n**A2.** Thank you for your valuable suggestion. In response, we have implemented a simulation setup for the LEAP Hand attached to a 6-DoF robot arm that is fixed on a table. The action space includes PD control targets for both the hand joints and the six arm joints. 
This setup enhances the practicability for sim-to-real deployment.\\n\\nWe trained ResDex using this setup without modifying any hyperparameters and achieved an average success rate of 60.71% on the 3.2K objects in DexGraspNet.\\n\\nSeveral factors affect the LEAP Hand's performance, which is lower than that of the ShadowHand: (1) LEAP Hand is significantly larger and has thicker fingertips, posing challenges for grasping small objects in DexGraspNet; (2) LEAP Hand policies are trained without the grasping proposal reward due to the absence of corresponding data; (3) LEAP Hand has fewer degrees of freedom compared to ShadowHand, which can limit its capabilities; (4) The attachment to a robot arm reduces the effective workspace and alters the mechanism for controlling wrist pose, potentially affecting training performance.\\n\\nWe present the detailed results for LEAP Hand with the robot arm setup in Appendix B.2.\"}", "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"**Q1.** The definitions of different reward functions are scattered across several sub-sections.\\n\\n**A1.** Thank you for your feedback regarding the presentation of reward functions. We have consolidated the full details of the reward functions in Appendix A.2 and have added a directive in Section 3.1 to guide readers to these details for clarity. In the main body of the paper, we simplify our discussion on rewards to three primary notations, $r^{task}, r^{proposal}$, and $r^{pose}$, to minimize confusion.\", \"we_have_chosen_not_to_present_the_full_details_of_each_reward_term_directly_in_the_main_text_for_several_reasons\": \"- While the design of the reward functions is important, it is not the primary contribution of this work. The reward terms we use are adopted from established methodologies in prior research, particularly UniDexGrasp and UniDexGrasp++.\\n - Due to page constraints, we prioritized brevity in the main text. Detailed implementations of the reward functions are thus included in the appendix. In Section 3.1, we clarify the distinctions between $r^{task}$ , a hand-designed reward function for grasping tasks, and $r^{proposal}$, which is derived from the grasping proposals within the dataset. The term $r^{pose}$, which is a sub-component of $r^{proposal}$, is introduced in Section 4.1 based on our observations during the training of base policies.\\n\\n\\n**Q2.** Reframe the technical contributions.\\n\\n**A2.** Thank you for your suggestion! We have revised the summary of our technical contributions at the end of the Introduction. Our technical contributions are grounded in the novel integration of residual multi-task reinforcement learning, geometry-agnostic base policies, and a mixture-of-experts framework, which together enable the development of a more generalizable and effective grasping policy.\\n\\n\\n**Q3.** Comparison of training efficiency to UniDexGrasp.\\n\\n**A3.** We apologize for not providing the training times of baselines initially. Unfortunately, the curriculum training code for UniDexGrasp and UniDexGrasp++ has not been released, which limited our ability to perform a direct time comparison.\\n\\nHowever, we can provide a comparative analysis based on the number of training rounds, as detailed in their publications. 
UniDexGrasp implements a progressive training strategy \\u2014 starting with a single object, expanding to several objects within the same category, and finally covering the full training set \\u2014 requiring **three multi-task training stages**. UniDexGrasp++ is more complex, involving the training of **20 multi-task policies** along with **several distillation stages**.\\n\\nIn contrast, our method only necessitates the training of a **single multi-task policy** in one trial, using between **one to six low-cost, single-task base policies**. Our approach is not only simpler but also efficient. As demonstrated in our experiments, our method achieves high success rates even with just one base policy.\\n\\nRecognizing the importance of presenting a comparison of training efficiency to baselines, we have now included this analysis in Appendix A.3.\\n\\n\\n**Q4.** About presentation of the ablation of different numbers of experts.\\n\\n**A4.** Thank you for your suggestion! We have revised the presentation to report only the results for k=4 in the main results (Table 1). To improve the clarity of our paper, results for different values of k have been moved to the ablation section (Tables 5 and 6).\\n\\n**Q5.** How to handle rotation representations in the weighted summation of base policies' actions?\\n\\n**A5.** We use 6D force to control wrist translation and rotation. While it is true that Euler angles do not form a Euclidean space and linear interpolation between Euler angles does not typically result in a linear rotation, the weighted sum and residual actions produced by the hyper-policy are nevertheless capable of generating any required 3D torques. Since the hyper-policy dynamically assigns weights and residual actions, there is no need to explicitly define a rotation action within a Euclidean space for the purposes of our method. Regarding the finger actions, they involve individual joint positions which can be linearly interpolated.\"}", "{\"title\": \"Thanks for your review! Here, we respond to your comments and address the issues. We hope to hear back from you if you have further questions!\", \"comment\": \"**Q1.** The proposed method's limitations and failure cases.\\n\\n**A1.** One notable limitation of our approach is its current inability to be directly applied to functional grasping tasks. This limitation stems from the fact that our base policies are trained with restricted observations, which do not adequately capture the intricacies required for fine-grained functional grasping. Future efforts could focus on extending our method to functional grasping tasks, thereby enhancing the robot's manipulation capabilities in practical settings.\\n\\nFailure cases arise with objects of specific sizes and shapes. For example, some large objects may unexpectedly collide with the dexterous hand upon initialization in the simulator, leading to failure cases. Similarly, extremely small or thin objects, such as scissors and knives, pose challenges under the tabletop grasping setting. Additionally, the policy sometimes generates unstable grasps that result in objects falling off the table before reaching the goal position. 
However, the policy's closed-loop nature allows itself to adapt to such cases by performing regrasping.\\n\\nWe have expanded upon these limitations and failure cases in Section 6, offering a more comprehensive discussion to guide future improvements and research.\\n\\n\\n\\n**Q2.** There is no real robot experiment.\\n\\n**A2.** We appreciate your comment regarding the necessity of real-world validation to demonstrate the practical applicability of our method. Currently, our research has focused on algorithmic enhancements for universal dexterous grasping within a simulated environment, aligning with the experimental setups used in prior studies such as UniDexGrasp, UniDexGrasp++, and UniDexFPM. Conducting experiments in the real world presents additional complexities, particularly the significant challenge of bridging the sim-to-real gap. We fully recognize the importance of this aspect and are committed to including real-world experiments in future work.\\n\\n\\n**Q3.** \\\"Increasing k leads to a slight performance gain\\\" is not accurate. Why is it not the case that more base policies (k>4) yield better performance?\\n\\n**A3.** Thank you for highlighting the inaccuracy in our description. We have revised the relevant text in Section 5.3. In terms of success rates, configurations with k>2 outperform those with k=1 and k=2. However, no significant improvement is observed with further increases in $k$. This is because higher success rates are not solely dependent on increasing k; the success rate metric does not directly evaluate the appropriateness of grasping poses in the formulated grasping task.\\n\\n\\n\\n**Q4.** About writing issues in Section 4.1 and 4.2.\\n\\n**A4.** We apologize for the confusion regarding the training of the base policies. To clarify, each base policy is trained on a single object. We have updated Section 4.1 to specify this and to explain that these base policies are later used in a Mixture of Experts (MoE) approach.\\n\\n\\n**Q5.** What is the difference between the 3D positions of objects and the object location?\\n\\n**A5.** \\\"3D positions of objects\\\" and \\\"object location\\\" refer to the same concept \\u2014 the xyz coordinates of the object. This sentence is intended to emphasize that the base policy is provided with the object xyz position to ensure it can determine where the object is located.\\n\\n\\n**Q6.** Are base policies trained with randomized object positions and rotations?\\n\\n**A6.** Yes, following the settings from UniDexGrasp and UniDexGrasp++, we randomize the rotation and z-axis of the objects. As the objects fall onto the table, this randomization also leads to randomized xy positions.\\n\\n\\n**Q7.** About the vision-based setting.\\n\\n**A7.** For point cloud observations, we follow the approach in GraspGF [1]. In simulation, first, object point clouds are constructed from the objects' mesh data. At each timestep, the point clouds are transformed based on the objects' poses. During training, we apply Farthest Point Sampling to sample 512 points, which are then fed into a PointNet to extract features. The PointNet is trained simultaneously with the policy during the distillation process.\\nWhile our experiments are conducted in simulation, the same point cloud can be acquired in the real world by using four RGBD cameras to capture the object point cloud initially, followed by object pose estimation at each timestep [1].\\n\\n\\n[1] Wu, et al. 
\\\"Learning score-based grasping primitive for human-assisting dexterous grasping.\\\" NeurIPS 2023.\\n\\n**Q8.** Around line 421, does the training set refer to the training set of the ablation study (i.e., the six objects), or the training set of DexGraspNet?\\n\\n**A8.** We are referring to the DexGraspNet training set, which includes over 3,000 objects. To avoid confusion, we have updated the paper to clarify this point. The six objects used in the ablation study are solely for case study purposes.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for conducting an additional experiment! I remain in favor of accepting this paper.\"}" ] }
BUQLiu4VA8
Variational Potential Flow: A Novel Probabilistic Framework for Energy-Based Generative Modelling
[ "Junn Yong Loo", "Julia K. Lau", "Chee-Ming Ting", "VISHNU MONN BASKARAN", "Raphael CW Phan", "Chee Pin Tan" ]
Energy based models (EBMs) are appealing for their generality and simplicity in data likelihood modeling, but have conventionally been difficult to train due to the unstable and time-consuming implicit MCMC sampling during contrastive divergence training. In this paper, we present a novel energy-based generative framework, Variational Potential Flow (VAPO), that entirely dispenses with implicit MCMC sampling and does not rely on complementary latent models or cooperative training. The VAPO framework aims to learn a potential energy function whose gradient (flow) guides the prior samples, so that their density evolution closely follows an approximate data likelihood homotopy. An energy loss function is then formulated to minimize the Kullback-Leibler divergence between density evolution of the flow-driven prior and the data likelihood homotopy. Images can be generated after training the potential energy, by initializing the samples from Gaussian prior and solving the SDE governing the potential flow. Experiment results show that the proposed VAPO framework is capable of generating realistic images on various image datasets. In particular, our proposed framework achieves competitive FID scores for unconditional image generation on the CIFAR-10 and CelebA datasets.
[ "generative models", "energy-based models", "variational methods", "particle filtering" ]
https://openreview.net/pdf?id=BUQLiu4VA8
https://openreview.net/forum?id=BUQLiu4VA8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "iwxPfOKbq5", "a1UpeBHBDD", "SpaLowkSlE", "MRS0mlDRR8", "7Mq9z0BUSF" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730708089127, 1730670753749, 1732435481551, 1731078633220, 1729949413302 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6001/Reviewer_EfBV" ], [ "ICLR.cc/2025/Conference/Submission6001/Reviewer_LWEK" ], [ "ICLR.cc/2025/Conference/Submission6001/Authors" ], [ "ICLR.cc/2025/Conference/Submission6001/Reviewer_27rP" ], [ "ICLR.cc/2025/Conference/Submission6001/Reviewer_wVir" ] ], "structured_content_str": [ "{\"summary\": \"This work presents a novel framework for learning energy-based SDEs that can transport samples from a latent prior distribution to a distribution that closely approximates data samples (up to a small additive noise). First, a conditional density homotopy between the prior distribution of noise and the posterior distribution of a slightly perturbed sample given a data sample is defined, along with an unconditional homotopy defined by taking the product of the data density and conditional homotopy and integrating w.r.t. the data variable. This defines a continuous sequence of densities interpolating between the prior and approximate data distribution which will be the target of the generative model. The generative model is an energy function whose gradients are trained to act as the drift term of an SDE with predetermined diffusion coefficients. The evolution of this SDE is intended to match the density homotopy, so that the gradients of the energy function can produce samples along the homotopy trajectory that start from the prior and end at the approximate data distribution. To achieve this, several propositions are used to present analytical forms for the evolution of the homotopy over time, establish a Poisson equation which is equivalent to minimizing the KL-divergence between the homotopy densities and the optimal trained model, and establish a practical loss function that can enforce the Poisson equation and be used to train a neural network. Experiments are conducted for unconditional image generation on CIFAR-10 and Celeb-A which show strong performance among EBMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work brings an interesting and novel perspective to the study of generative modeling with potential energy. The density homotopy introduced in Section 3.1 is a fresh and potentially fruitful direction for using smooth interpolation between unnormalized densities in a manner similar to interpolation between data and prior samples in diffusion models and flow matching. Using this homotopy as the training target for a SDE with a learnable drift potential seems like a natural and effective choice.\", \"The theoretical background has a high degree of technical merit and thorough presentation. It is notable (and somewhat surprising) that the time evolution of the density homotopy has the simple and intuitive form in (8) and that the objective in (15) can be used to train a potential that can be used to match the homotopy over time. This could inspire future research in a similar direction.\", \"The experimental results show strong performance among EBM methods.\"], \"weaknesses\": [\"There are a few hyperparameters which are not fully explored, such as the noise schedule $\\\\beta (t)$ (left constant), perturbation level $\\\\sigma$, spectral gap $\\\\lambda$, and $\\\\varepsilon$. 
Some discussion of why these were chosen and the sensitivity to these hyperparameters would be helpful. In particular, the $\\\\beta$ schedule is crucial for diffusion models. Why is it not essential here? Could the results be improved with a better schedule?\", \"Although experimental results are strong among EBMs, they lag behind the diffusion and other SOTA methods. Furthermore, this work also performs slightly worse than the related work [a] which learns a potential energy and uses the SDE (10) to draw samples, which is based on a straightforward diffusion objective (which is actually a component of the proposed loss). In general, the paper would be strengthened by comparing the results from the proposed method with a more straightforward diffusion model using a potential energy network instead of a score network (such as [a]), since both cases are essentially training a network so that (10) transports samples from a latent prior to data.\", \"[a] https://openreview.net/forum?id=9AS-TF2jRNb\"], \"questions\": [\"Could the questions about hyperparameters in the first weakness be addressed?\", \"Could the questions about the relative performance of the proposed method and diffusion models parameterized by potential energy functions be addressed?\"], \"technical\": [\"About the equation (6): if I am understanding correctly, joint probability of $x$ and $\\\\bar{x}$ satisfies: $p( \\\\bar{x} | x) q(x) = p(\\\\bar{x}, x) = p (x | \\\\bar{x}) p_\\\\text{data} (\\\\bar{x})$. But why does $p( \\\\bar{x} | x) q(x) = p (x | \\\\bar{x}) p_\\\\text{data} (\\\\bar{x})$?\", \"In (41) second and third equality, the signs be reversed after integration by parts, right?\", \"I am not sure how (46) and (47) imply $C=0$. It looks like it should instead be: $C = - (1/2) \\\\int E [ \\\\rho(x; \\\\bar{x} (\\\\gamma (x, \\\\bar{x}) - \\\\bar{\\\\gamma} (x, \\\\bar{x}))] dx$.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces VAPO, a new framework to train Energy-Based Model(EBM). Different from traditional EBM training algorithms. VAPO does not require expensive MCMC sampling at each training step. Instead, VAPO defines an interpolation path $p(x, t)$ between a noise distribution $q(x)$ at $t=0$ and distribution $\\\\bar{p}(x)$ at $t=1$. $\\\\bar{p}(x)$ is the kernel density approximation of the true data likelihood. The authors then introduce a potential function $\\\\Phi(x_t)$, whose gradient guides the transport equation of the defined density path. The potential function is learned by constructing a variational formulation of the homotopy path-matching problem. After the model is learned, $\\\\nabla\\\\Phi(x_t)$ is used to guide the data transformation in ODE during the test time to generate new samples. Experimental results show that VAPO achieves competitive generative performance among EBM baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a novel framework for learning EBM. VAPO has the advantage of more efficient training without the needs of expensive MCMC sampling.\\n2. The paper provides a sound and detailed derivation and formulation. I appreciate its theoretical contribution.\\n3. The generative performance of VAPO is competitive among the EBM baselines.\", \"weaknesses\": \"My primary concern lies in the validity of the learned energy (or potential) function $\\\\Phi(x_t)$. 
While the generative capability highlights one aspect of Energy-Based Models (EBMs), the ability to perform accurate density estimation is equally crucial. Although I acknowledge the competitive generative performance of VAPO, the relationship between the proposed potential function $\\\\Phi(x_t)$ and the log data density remains unclear to me. Specifically:\\n\\n1. Theoretically, what is the interpretation of the learned $\\\\Phi(x_t)$? Are there any proofs ensuring that $\\\\Phi(x_t)$ converges to the log data density? The paper demonstrates that the learned potential function $\\\\Phi(x_t)$ can guide an ODE to generate valid samples via its gradient. However, a correct gradient field does not necessarily imply a valid energy function.\\n\\n2. Experimentally, most of the results in the paper focus on demonstrating generative performance. I recommend that the authors include additional evaluations to demonstrate the validity of the learned energy function..\", \"questions\": \"Please check the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a new framework for energy-based generative models called Variational Potential Flow (VAPO). This approach eliminates the need for implicit MCMC sampling and does not depend on auxiliary latent models or cooperative training methods. The VAPO framework achieves this by learning a potential energy function path, where the gradient flow guides prior samples along an approximate data likelihood homotopy. Additionally, the authors develop an energy loss function through a variational formulation that leverages the KL divergence between the density evolution of the flow-driven prior and the data likelihood homotopy. The framework is tested on CIFAR-10 and CelebA datasets for unconditional image generation.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper aims to tackle an important problem that is reduce the complexity of energy-based generative models due to the reliance on MCMC sampling or auxiliary latent models.\", \"weaknesses\": [\"The presentation could be improved, as the authors included extensive mathematical content in the main paper. The reviewer recommends simplifying this mathematical material within the main text and focusing on presenting high-level concepts and core results to enhance accessibility for readers.\", \"The paper allocates considerable space to unnecessary mathematical content, which detracts from the clarity and quality of the experimental section. Additionally, the proposed method underperforms compared to many other energy-based model (EBM) approaches. Given that the paper's objective is to simplify and stabilize EBM components, such as MCMC sampling and auxiliary latent models, the authors should include a comprehensive analysis. This should involve demonstrating aspects like training convergence and complexity to better support the method's effectiveness and contributions.\", \"In its current form, the paper presents extensive mathematical content along with some experimental results, but lacks thorough analysis. Consequently, it is unclear how beneficial the proposed tools truly are. 
The authors should consider demonstrating the proposed framework on simple, synthetic datasets to provide clearer insights into its behavior and effectiveness.\"], \"questions\": [\"The reviewer observed from the appendix that the authors used a lot of tricks for the network architecture and optimizer. Is this fair compared to the baselines presented in Table 1.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new generative model framework called VAPO. This framework learns a potential energy function to guide prior samples through a density evolution that approximates the data likelihood, bypassing the need for implicit and unstable MCMC sampling. VAPO applies Poisson's equation and the deep Ritz method to define and solve the flow of prior samples, ensuring they align with the data likelihood homotopy. VAPO is tested on image datasets like CIFAR-10 and CelebA, achieving competitive FID scores for unconditional image generation compared to state-of-the-art models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The VAPO method uses a potential energy function to guide the flow of prior samples toward the data likelihood, which is a fresh approach combining elements from particle flow and the Deep Ritz method. Experiments on CIFAR-10 and CelebA show that VAPO achieves competitive performance on FID scores relative to existing EBM-based approaches.\", \"weaknesses\": \"1. VAPO-A adopts the architecture from VAEBM [1], yet its FID scores fall short of baseline performance. This raises concerns regarding the overall advantage of VAPO. If the authors aim to address the high variance training, computational complexity, and low flexibility of MCMC and strengthen the claim of stability and efficiency of VAPO, the authors should conduct targeted experiments that compare training variance or computation time with MCMC-based methods. For example, record the loss value for each training epoch and calculate the standard deviation, measure the average time taken to generate a single sample, and record the total time and memory consumption required for the model to converge and compare it with MCMC methods. This would highlight the practical benefits of VAPO beyond standard performance metrics like FID.\\n\\n[1] Zhisheng Xiao, Karsten Kreis, Jan Kautz, and Arash Vahdat. {VAEBM}: A symbiosis between variational autoencoders and energy-based models. In International Conference on Learning Representations, 2021.\\n\\n2. VAPO incorporates computationally costly methods, such as numerical SDE solver. The authors also acknowledge that VAPO requires a large number of training iterations to converge, which might be a significant drawback for scaling this model to more complex or higher-dimensional datasets. While the elimination of MCMC improves training stability, the optimization process remains computationally expensive.\\n\\n3. Previous approaches using MCMC methods and those based on flow models should be discussed together in the Related Work section and clearly referenced in the main text. This would provide readers with a comprehensive context for understanding the advancements presented in VAPO. Additionally, Algorithm 1, which outlines the algorithmic workflow, should be included in the main body of the paper rather than placed in the appendix. 
This is a critical component for grasping the methodology and overall contributions of the work. I recommend placing the Related Work section as Section 2 and positioning Algorithm 1 before Section 4 to enhance the flow and accessibility of the content.\\n\\n4. Some sections, particularly those involving complex mathematical derivations, such as Sections 3.2 and 3.3, are difficult to follow due to the density of the technical content. These parts could be made more accessible by including additional explanatory text or visual aids to help guide readers through the more intricate aspects of the methodology. Moreover, the sketch proof of the main theorem, which is essential for understanding the core contributions, should be included in the main text.\\n\\n5. The citation formatting throughout the paper is often incorrect. For instance, many references are not properly enclosed in brackets, which disrupts the flow of reading and makes it difficult to distinguish between the main text and references.\", \"questions\": \"1. What assumptions are made about the potential flow in the theoretical analysis? For example, are there any assumptions about the target distribution? Are these assumptions too restrictive to be realistically met in experiments, or how are they satisfied in practice?\\n\\n2. In Line 199, what is $\\\\bar{p}(x)$ and why is it stated that $\\\\bar{p}(x)$ acts as a continuous interpolation of the data likelihood $p_{\\\\text{data}}(x)$? Could you clarify what is meant by \\\"continuous interpolation\\\" in this context and how $\\\\bar{p}(x)$ relates to $p_{\\\\text{data}}(x)$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
BUElLMIyOt
SMPLy Private: From Masks to Meshes in Action Recognition
[ "Nidhish Shah", "Asfandyar Azhar", "Shaurjya Mandal", "Yongjie Jessica Zhang" ]
In this paper, we introduce Mask2Mesh (M2M), a novel privacy-preserving data augmentation framework that effectively bridges the realism gap seen in synthetic-based action recognition methods. Traditional privacy-enhancing techniques, such as feature masking and synthetic data supplementation, tend to degrade data quality and reduce model performance. In contrast, our method leverages the SMPL-X model to replace real humans with detailed 3D meshes in video data, preserving the subtle nuances of human movement and expressions that are crucial for accurate action recognition. By augmenting real data with superimposed meshes, M2M simplifies both pre-training and fine-tuning processes, without introducing the overheads and biases typically associated with synthetic data. Empirical results show that our approach achieves performance within 0.5\% of models trained on unmodified video data, proving that overlaying meshes leads to no significant performance loss in action recognition tasks. This work presents a practical solution for data anonymization without compromising accuracy, offering valuable insights for more efficient and scalable video data processing techniques in computer vision and action recognition.
[ "Action Recognition", "Computer Vision", "Body Mesh Recovery", "Dataset Augmentation", "Video Data Processing" ]
https://openreview.net/pdf?id=BUElLMIyOt
https://openreview.net/forum?id=BUElLMIyOt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ybWvuRAIAy", "xlIW53K2d7", "vcXk9JtnAw", "qqsIPNkB81", "ngdX2WkHp9", "nGxmAE7jKm", "n5snbZMMwW", "mlSrsqAfm2", "mXV6UVlByO", "m46HYxKOog", "luz9H5CR8J", "kzVzf6jKru", "khOdZ0fVL7", "jfVYBZfk9l", "jbj5Vc5S6K", "isoopaOFC8", "idY28OVP6R", "hPbAYMI5FD", "elwgaBPsYE", "atP992nETS", "aVswJjIou9", "aS0OyA8hIR", "ZyVstOASnz", "Zgd53qTfVc", "Xuz8Spn3X9", "WbT17E2Fm0", "VtKtcZOLco", "VJAVAdW7vn", "V0JTvEyUjI", "S6HwMkveQw", "RZfBHQaMla", "OrsRj0Yd4q", "N3iojhbxHL", "MbD05dzg9b", "LYtoQBacS6", "IQzNIHixGA", "GniwtcNWyb", "Du5Ig01O4q", "DSYNpuceyI", "D3the3nYCx", "BE0wFt3Fw9", "9zJxmrVh1S", "8OTefBGW6d", "7vlUqjQjCj", "7eq3z7sOH4", "5iuHsl2AP8", "360rGDHMXw", "2JEPGpzyVo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732104206541, 1732059880931, 1732084297251, 1733161719401, 1730697031309, 1732059963055, 1732656781582, 1732794800758, 1733872701141, 1730489471273, 1732417375458, 1732304458623, 1732060031864, 1732104692061, 1732305016437, 1732939330567, 1733163004512, 1732060170271, 1732793057240, 1729081166491, 1732653715897, 1732104635821, 1732303164309, 1733162235602, 1732108573053, 1732403896806, 1732104460641, 1733161973012, 1732793164189, 1732793265935, 1732407779223, 1732110309945, 1732417234958, 1732399988574, 1732084965688, 1732084599829, 1732108916915, 1732416538605, 1733163380310, 1732346109622, 1732403998920, 1732400055060, 1732108654806, 1732084444807, 1730567013255, 1732404435273, 1732404370833, 1732084843196 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_C3CH" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_C3CH" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_1a4G" ], [ 
"ICLR.cc/2025/Conference/Submission8122/Reviewer_vWLp" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_vWLp" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_C3CH" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_C3CH" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_vWLp" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_8VCT" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_1a4G" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_vWLp" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Reviewer_vWLp" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ], [ "ICLR.cc/2025/Conference/Submission8122/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal to vWLp: Part 1/4\", \"comment\": \"# Addressing Weakness 1 #\\n\\nThis is the same concern brought up by reviewer `C3CH`. **Please refer to Part 2 of that rebuttal**. We hope this sufficiently answers your concern on this too. \\n_____\\n\\n# Addressing Weakness 2 #\\n\\nThis is the same concern brought up by reviewer `1aVG`. **Please refer to Part 1 of that rebuttal (titled \\u201cAddressing Dataset Weakness 1\\u201d)**. We also encourage you to read that particular rebuttal as a whole because it will more than likely answer all your questions on K-NEXUS in general. \\n\\n_______\\n\\n# Addressing Weakness 3 and Question 2 #\\n\\nAgain, we ask you to kindly look at our response to reviewer `C3CH`. **Please refer to Part 1 of that rebuttal** for the run-time analysis. Furthermore, please kindly see our response to reviewer `8VCT` where we address their **Sub-point 3.1**. We hope these responses put together should sufficiently ensure your questions on this part of our paper are addressed. \\n\\n_______\\n\\n# Addressing Weakness 4 #\\n\\nThis is a somewhat similar issue brought up by reviewer `1aVG`. **Please refer to Part 2 and 3 of that rebuttal (titled \\u201cAddressing Technical Weakness 1\\u201d and the following table with the various MAE-ViT setups)**. Again, we hope this suffices. \\n\\n_________\\n\\n# Addressing Weakness 5 #\\n\\nWe acknowledge this limitation, as noted in our conclusion, since we do not utilize video instance segmentation (VIS). However, this issue affects only a small portion of the videos (**refer to Figure 8 in the revised Appendix B.1 and Part 4 of our response to reviewer** `1aVG` **under \\\"Addressing Technical Weakness 2\\\"**). Currently, no VIS methods are compatible with SMPL-X; they only support SMPL. 
Adopting SMPL, however, would mean sacrificing expressions and limb details, which are crucial not only for preserving privacy but also for action recognition. This trade-off was a necessary decision. Developing a VIS-compatible approach for SMPL-X is a substantial undertaking deserving of a separate paper and was beyond the scope of our work. That said, we are happy to provide more complex examples if requested. Let us know if you\\u2019d like us to include additional .gif/.mp4 examples in the supplementary materials for the final version of the paper.\\n\\n________\\n\\n# Addressing Weakness 6 #\\n\\nOur primary contribution was intended to be the dataset, not a model-centric approach or the proposal of a novel method. Our goal was not to pursue novelty but to adopt an informed approach by synthesizing various literature to address a pressing issue for \\\"social good\\\"\\u2014preserving human privacy in video data. This directly ties to our response to Weakness 4 above. \\n\\nWhile we have included multiple pretraining methods, we were unable to explore additional encoder architectures due to time and resource constraints. However, we would like to highlight that the VideoMAE [1] paper extensively demonstrated the performance of ViT backbones, showcasing their ability to efficiently and effectively accommodate larger parameter sets. Similarly, this was observed in the Swin Transformer paper [2, 3].\\n\\nIn our final version, we are willing to allocate resources to train a ViT-L backbone as additional work to demonstrate the scalability of our method if the reviewer deems it necessary. However, we **prefer not to**, as we believe the table provided in our rebuttal response to reviewer `1aVG` sufficiently addresses this concern. \\n\\nWe hope the reviewer recognizes that MAE pretraining with ViT backbones is widely accepted as the gold standard for robust video SSL pretraining [1, 4, 5]. Our contribution lies in synthesizing existing methods and developing the M2M augmentation, rather than proposing a new architecture or SSL training regime.\\n\\n[1] Tong, Z., Song, Y., Wang, J., & Wang, L. (2022). VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. Advances in Neural Information Processing Systems, 35, 4093\\u20134104\\n\\n[2] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv.\\n\\n[3] Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., & Guo, B. (2022). Swin Transformer V2: Scaling Up Capacity and Resolution. arXiv.\\n\\n[4] Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., & Qiao, Y. (2023). VideoMAE V2: Scaling video masked autoencoders with dual masking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14549\\u201314560\\n\\n[5] Feichtenhofer, C., Fan, H., Li, Y., & He, K. (2022). Masked autoencoders as spatiotemporal learners. arXiv.\\n\\n_____\\n\\n# Addressing Weakness 7 #\\n\\nThanks for pointing this out! We have made the change, see revised paper.\"}", "{\"title\": \"Rebuttal to C3CH: Part 1/4\", \"comment\": \"# Addressing Weakness 1 #\\n\\nThe primary goal of our M2M pipeline is to provide a method for anonymizing human-based datasets through the use of mesh representations. These operations are not intended to be real-time nor optimized for edge-device deployment. 
Instead, the pipeline is designed for preprocessing datasets on robust computational systems, which are more than capable of handling the computational load (see Table 5). These operations run locally, ensuring that private content does not leave secure environments. The goal is, that any derived data (e.g., meshes) are stripped of sensitive identifiers prior to external usage. The anonymization process is thus a preparatory step, distinct from the lightweight inference requirements of the utility model (e.g., VideoMAE). \\n\\nIn contrast to real-time privacy-preservation frameworks like STPrivacy, which aim for deployment optimization, M2M operates offline for dataset curation, which inherently allows for greater computational expense. Furthermore, M2M distinguishes itself by offering full 3D anonymization through SMPL-X meshes, which addresses privacy concerns holistically rather than solely through adversarial anonymization or video frame transformations. While this comes with higher computational costs, the model nearly guarantees anonymization (see Part 2 of this rebuttal) without reliance on privacy leakage tolerance.\\n\\nNonetheless, we understand the concern of the reviewer and herewith provide the run-time analysis in the table below and will add it to the final version of the paper. Note, that these can slightly vary based on the technique employed at each step, but we report the FLOPs for our best performing set of methods. \\n\\n| Component | Operation | Inference FLOPs/Frame (G) | Distributed Runtime for 25 fps x 10 sec Video (s) |\\n|--------------------------|---------------------------------|---------------------------|---------------------------------------------------|\\n| Inpainting (E$^2$FGVI) | Frame inpainting | 293 | 0.57 |\\n| Object Detection (SAM) | Dense segmentation | 792 | 1.59 |\\n| SMPL-X Fitting | Pose parameter estimation | 10 | 0.02 |\\n| | | | |\\n\\n| Model | Operation | Training FLOPs/Frame (G) | Distributed Runtime for 25 fps x 10 sec Video (s) |\\n|----------|--------------------------------|--------------------------|---------------------------------------------------|\\n| VideoMAE | Action recognition (ViT-based) | 1693 | 3.42 |\"}", "{\"title\": \"Rebuttal to 1aVG: Part 1/5\", \"comment\": \"# Addressing Dataset Weakness 1 #\\n\\nWe utilize K-NEXUS not only for computational efficiency\\u2014reducing the pretraining dataset to 150 classes instead of the full 400\\u2014but primarily to demonstrate that refining the dataset into a \\\"purer\\\" coarse-grained action recognition set with reduced class-action bias enhances our framework's performance. By removing the additional noise from extra data classes that are highly correlated with each other, this approach acts as a form of \\\"data compression\\\" for video data classes (similar to something basic like PCA in standard ML) but applied to action recognition via smart and informed sampling (as outlined in Section 3.1). Refining the dataset to 150 classes with less action overlap allows the model to focus on specific action features rather than being overwhelmed by redundant actions with different class labels (e.g., \\\"playing a guitar\\\" vs. \\\"playing a violin\\\"). During downstream tasks, the model can fine-tune effectively for similar actions (e.g., \\\"playing a violin\\\") even if pretraining was performed on related actions (e.g., \\\"playing a guitar\\\"), and K-NEXUS effectively handles these redundancies. 
Additionally, this refinement aligns with prior works, as shown in Table 1, where we cite results from other papers (SynAPT and PPMA) for fair comparison [1, 2]. SMPLy Private (without K-NEXUS) uses the same Kinetics-150 splits from the SynAPT and PPMA papers to ensure consistency. However, we demonstrate that our data-centric method, K-NEXUS, improves performance by producing meaningful, well-informed splits that are neither random nor manually labor-intensive, unlike the approaches in SynAPT and PPMA.\\n\\n[1] Zhong, H., Mishra, S., Kim, D., Jin, S., Panda, R., Kuehne, H., Karlinsky, L., Saligrama, V., Oliva, A., & Feris, R. (2024). Learning human action recognition representations without real humans. Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS '23), 2839.\\n\\n[2] Kim, Y., Mishra, S., Jin, S., Panda, R., Kuehne, H., Karlinsky, L., Saligrama, V., Saenko, K., Oliva, A., & Feris, R. (2024). How transferable are video representations based on synthetic data? Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22), 2588.\\n________\\n\\n# Addressing Dataset Weakness 2 #\\n\\nThank you to the reviewer for pointing this out. This is indeed an interesting experiment, which we conducted initially and now briefly make more clear in the revised Footnote 1. We chose random sampling within the video's mid-quartile range of frames over entropy-based smart sampling, as the latter\\u2014while correlating slightly better with the action\\u2014showed minimal performance improvement (less than 0.1%) in our experiments. Given that Kinetics videos are inherently short, this negligible gain did not justify the significantly higher computational cost associated with entropy-based sampling compared to random sampling once the mid-quartile range of frames was identified.\"}", "{\"comment\": \"Thank you for considering my suggestions. I believe the writing is more clear now that there are specific sections which better explain the problem statement, previous works, and the benefit of your proposed method. I also appreciate the explanation regarding other baselines and agree that it seems SynAPT and PPMA seem to be the only current relevant work. I believe these specific concerns of mine have been addressed.\", \"title\": \"Reviewer Reply to Points 1, 1.1, and 1.2.\"}", "{\"summary\": \"The paper presents a method attempts at mitigating private attribute information in video by converting humans into 3D meshes. This approach utilizes various off-the-shelf models, starting with human segmentation, followed by replacing segmented humans with 3D meshes and inpainting to remove the original RGB humans. Additionally, it incorporates object information by leveraging another segmentation model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper features clear writing and well-illustrated figures, making it easy to follow.\", \"The diversity and quantity of action recognition datasets used are commendable.\"], \"weaknesses\": \"- **W1**: A primary concern is that the method contradicts the goal of privacy preservation by anonymizing video with minimal computation, which is essential for deployment on edge devices before transmitting anonymized content to the cloud computation or storage. The anonymization should require less computation than the utility model (e.g., VideoMAE used here). 
However, the proposed method employs multiple off-the-shelf models- video inpainting, object detection, and SMPL-X, and their combined computational load significantly exceeds that of the utility model. This compromises privacy by exposing private attributes to these models and makes edge computing infeasible.\\n\\n- **W2**: Another key concern is the lack of evaluation for privacy protection to quantify privacy leakage. In the main comparison Table 1, the method appears to assume that it inherently resolves privacy issues. While 3D meshes might seem to avoid privacy risks at the human perception level, the essential measure is computer perception, which all prior work evaluates [a,b,c,d]. The method should follow standard privacy evaluation protocols, such as training a classifier to detect private attributes from the 3D-mesh-transformed VISPR image dataset, thus quantifying privacy leakage. The results would then indicate whether or not the private attributes are effectively anonymized.\\n\\n- **W3**: Another limitation is that the approach is restricted to human-related private attributes only. Prior works like [b,c,d] address a broader range of personal identifiable information, including scenes and objects. While this is not a central evaluation point, the authors should acknowledge this as a limitation.\\n\\n- **W4**: The results on the Diving48 dataset are notably low- 66%, significantly below comparable baselines like TimeSformer-L (81%) and the state of the art at approximately 91%. I strongly recommend that the authors address this low baseline, as the current claims may be misleading. Diving48, as a fine-grained action dataset, includes fast-moving objects where even minor errors in input modalities substantially affect classification performance. It is well-established that 3D meshes are not ideal for representing fast-moving objects, and this is a fundamental limitation of the approach, given its reliance on individual off-the-shelf components that can propagate errors.\\n\\n[a] \\\"Privacy-preserving deep action recognition: An adversarial learning framework and a new dataset.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence 44.4 (2020): 2126-2139.\\n\\n[b] \\\"Spact: Self-supervised privacy preservation for action recognition.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[c] \\\"Ted-spad: Temporal distinctiveness for self-supervised privacy-preservation for video anomaly detection.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[d] \\\"STPrivacy: Spatio-temporal privacy-preserving action recognition.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\", \"questions\": \"While the method introduces a novel approach to privacy preservation, it has fundamental flaws in its formulation. Specifically, it undermines the objective of efficient and feasible anonymization relative to the utility branch. Furthermore, the absence of quantitative evidence on privacy leakage reduction is concerning. In its current form, I am inclined to recommend against accepting the paper. 
To address these issues, it would be beneficial for the authors to respond to the following weaknesses (see weakness section for more details):\", \"w1\": \"Provide FLOPs for both the anonymization process and VideoMAE, and compare with prior work [a-d].\", \"w2\": \"Adhere to the privacy protocols established in prior work using the VISPR dataset, and include a detailed analysis. Additionally, provide experimental implementation details.\", \"w3\": \"(Optional) Report results on non-human privacy attributes as well, as done in [b, d].\", \"w4\": \"Improve the baseline on Diving48 and base conclusions on the revised results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to C3CH: Part 2/4\", \"comment\": \"# Addressing Weakness 2 #\\nWe initially did not consider VISPR because the inpainting process completely removes the human figure, leaving no human attributes for our framework to evaluate. However, we acknowledge that the meshes include gender options and shape parameters that may resemble the original human subject, which could potentially lead to privacy attribute leaks.\\n\\n**Upon further experimentation, we observe that with M2M augmentation, the cMAP scores on VISPR1 subset (see Part 3 for more on this choice) are 38.1 and 33.9 without and with K-NEXUS (we take the mode/most occurring gender-type mesh in the latter case), respectively**. These exceptionally low scores highlight the robustness of the inpainting method we use. The scores likely reflect only gender and color attributes, as the meshes vary between male, female, and neutral and are uniformly colored white, which might lead to the classification of \\\"white\\\" as a skin tone category. Hence, we posit that the already competitive cMAP scores could be further enhanced by using meshes in distinct colors (such as red, green, blue, etc.) that do not resemble any human skin tones. This approach could potentially preserve the skin color attribute in VISPR almost perfectly.\\n\\nAlthough we use VISPR1 attributes, we acknowledge that this approach may not be entirely robust. VISPR2 includes attributes like weight, and SMPL-X meshes can conform to the size and shape of humans based on their set shape parameters. However, these parameters can be adjusted to prevent such leaks, mitigating this potential limitation anyways. **We humbly thank you for this strong suggestion and will definitely include it in the final version of our paper as we agree that it further completes the paper, the story we are trying to convey, and extends our framework's technical contribution and prowess**!\\n\\nIn the final version, we plan to showcase our VISPR1 cMAP scores alongside those of the papers you referred to, as M2M demonstrates a significant improvement over them. Additionally, we will evaluate the models in Table 1 to directly compare our method with works focused on human privacy preservation via SSL/MAE pre-training. However, due to the limited time available during this rebuttal period, we are unable to complete all these experiments at this stage. We sincerely hope that you, the reviewer, understand our constraints and grant us leniency, trusting in good faith that we will complete the experiments before the camera-ready deadline. Thank you once again for your consideration.\"}", "{\"comment\": \"**Response to W2, W6, and Q4.** Of course! Glad we could clarify. On comparing the classes we selected vs. 
the prior works (PPMA and SynAPT), we show that by selecting classes using K-NEXUS our method further outperforms PPMA and SynAPT in Table 1 (see our latest response to `1aVG` for more info on the matter). Shouldn\\u2019t this be sufficient? Unless you want us to list out our selected classes vs. theirs and do a sort of qualitative analysis showing that their classes might have some redundancies that ours don\\u2019t see, we used K-NEXUS? We could add more discussion on the design choices, outputs, justifications, etc. for K-NEXUS in camera-ready \\u2013 we hope now it is very clear what the purpose of K-NEXUS is. However, we do not want it to be that central to the paper to take the focus away from privacy-preservation for SSL pretraining \\u2013 K-NEXUS was just an additional means to an end. We plan on having a full-fledged paper for the algorithm in the near future at some point.\\n______\\n**Response to Q3.** No worries, honestly it was on us for having it written like that so we appreciate you helping us clarify it! And yes, embeddings are essentially concatenated.\\n_______\\n**Response to Q7 + Q8.** Of course, we agree with you, this was just an additional step we wanted to take to see if there would be a rationale for choosing what type of mesh to employ during VISPR evaluation. But in general we have found that the neutral mesh gives the lowest cMAP score which we have already reported. In the camera-ready we will simply show this (and how it improves over gendered meshes) vs. other reported measures in the literature like SPAct. We hope this puts your concern to rest, thank you. \\n________\\n**Response to Q9.** Fair enough \\u2013 we are glad you find the provided justification reasonable. We are grateful for you looking at our response to reviewer `8VCT`. Thanks!\"}", "{\"title\": \"Reply W1 + W2\", \"comment\": \"# Reply to W1 #\\n\\nThank you for your thoughtful reply. We\\u2019d like to clarify that the paper does not suggest or expect clients to directly receive raw data and independently apply the anonymization process we propose. The core value of our approach lies in a one-time anonymization process applied to generalizable datasets (e.g., Kinetics), which can then be utilized as a pretraining method for a wide range of applications.\\n\\nFrom a practical perspective, if a client undertakes a one-time anonymization process, they would subsequently have access to a dataset that can be reused for pretraining purposes across various models and applications. This greatly enhances the utility of our method. Regarding your reference to clients, could you clarify which clients you are referring to? Most tech companies today, particularly those with access to GPUs, should have the resources to implement this.\\n\\nAre you suggesting a comparison of anonymization methods, such as U-Net versus a mesh overlay? If so, we\\u2019d like to note that the computationally intensive part of this pipeline is the VideoMAE pretraining, which reflects a tradeoff for the performance gains achieved. For instance, U-Net-based methods do not perform as well on datasets like UCF-101 in a privacy-conscious regime.\\n\\nWe\\u2019d be happy to explore this further if you can share a specific reference or example. Otherwise, we hope this explanation provides clarity on this point you raised.\\n\\n______\\n\\n# Reply to W2 #\\n\\nWe believe there might be an misunderstanding between us on this. The reported cMAP score takes into account all attributes in VISPR1. 
That should be sufficient for preserving human-privacy as we have already discussed with other reviewers. Unless you mean something else? If not, we hope this has clarified your concern.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We are deeply grateful for the extensive feedback provided by the reviewers, which has significantly contributed to shaping the potential future directions of our research. The discussion highlighted strengths, including the novel use of SMPL-X meshes to address privacy concerns while maintaining action recognition accuracy and reducing biases. However, several critical concerns were raised:\\n\\n1. **Privacy-Utility Tradeoff**: The reviewers consistently emphasized the lack of a robust privacy evaluation metric, which was a central limitation in our method. Specifically, our work lacked empirical validation of privacy-preserving claims using established protocols such as VISPR.\\n\\n2. **Computational Feasibility**: The high computational cost of our pipeline raised concerns about its applicability, especially in edge computing environments.\\n\\n3. **Technical Contributions**: While reviewers appreciated the integration of off-the-shelf methods, they noted the limited novelty and suggested that the technical innovation of K-NEXUS and M2M augmentation required further elaboration and comparative analysis.\\n\\n4. **Evaluation and Benchmarking**: The reviewers pointed out that our benchmarking did not fully incorporate certain relevant baselines, and suggested that additional comparisons, particularly in terms of action class bias and dataset selection, would strengthen our contributions.\\n______\\n\\n### Changes Made During the Rebuttal ###\\n- **Privacy Evaluation**: Preliminary results on VISPR attributes were added to address privacy concerns, though they highlighted areas for further refinement.\\n- **Ablations and Analysis**: Expanded experiments clarified the impact of M2M augmentation and K-NEXUS on action recognition tasks.\\n- **Terminology Refinement**: Updated definitions of \\u201ccoarse-grained\\u201d and \\u201cfine-grained\\u201d actions aimed to better align with established conventions in the field.\\n- **Technical Clarifications**: Enhanced methodological clarity and alignment with previous works, along with additional contextualization of SMPLy Private\\u2019s goals and results.\\n\\n### Points of Agreement and Respectful Disagreement ###\\n- **Agreement**: We concurred with the feedback regarding the need for more thorough privacy evaluations and additional benchmarks.\\n- **Respectful Disagreement**: We clarified that our work focused on the feasibility of an integrated privacy-preserving pipeline rather than developing new algorithmic models or being computationally efficient in all precise settings. \\n\\n__________\\n\\n### Reasons for Withdrawal ###\\nWe have decided to withdraw this submission to address the identified limitations comprehensively. We aim to refine the privacy evaluation protocols, explore additional baselines, and enhance the clarity and technical depth of the manuscript. 
This decision reflects our commitment to presenting a more robust and impactful contribution in the future.\\n\\nWe sincerely thank the reviewers and the conference organizers for their invaluable feedback and hope to resubmit a significantly improved version of this work in a future venue.\"}", "{\"summary\": \"SMPLy Private proposes a new way to pre-train an action recognition model for privacy-preserving purposes. It leverages human meshs extracted from the Kinetics dataset to replace the real human subjects in the videos, removing human-specific visual biases during pre-training. Previous methods tend to approach privacy-preservation by distorting or augmenting the visual data itself, hindering model performance and creating a gap between augmented/synthetic training data and real-world downstream data. M2M solves this by replacing human subjects with their meshes, retaining important action information and the surrounding visual information while removing personal attributes. They also propose a modified k-means clustering algorithm to train on a subset of Kinetics with minimized inter-class similarity. They provide extensive experiments and some ablations to support their proposed work.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea to replace humans with their meshes to remove private attributes is an intuitive and effective solution to the proposed problem of privacy-preserving action recognition.\", \"Their quantitative improvement over PPMA in Table 1 (and other baselines) are significant.\", \"-Their K-Nexus curated dataset seems to contribute to their method/training objective and is a more direct way of reducing action class bias than random sampling (used by previous methods).\"], \"weaknesses\": [\"The paper is not very clearly written. For example. their methodology section describes each component, but it is difficult to understand how each component is exactly used during training. It is only after looking at Table 1 that it is clear M2M is used during both MAE training and alignment, which could be addressed in Section 3.3.\", \"Table 2 is similarly difficult to understand: what does each row represent? It would seem the caption is meant to correspond each row to the order in which the methods are discussed, but this seems incorrect (OSX is first but it does not use just segmentation).\", \"It seems their writing structure follows closely with PPMA, which itself is not a major issue, but M2M is much more difficult to follow and seems to leave out more of the contextual and background information that PPMA provides. I had to first read PPMA to understand the problem formulation and their solution, then refer back to M2M to fully understand the method. On a similar note, Table 1 is the same as Table 1 in PPMA - are there no other baselines to consider in this table? Surely other privacy-preserving methods such as [1] and the methods discussed in [1] could also be added to further support the superiority of SMPLy Private?\", \"The structure of their method is also very similar to PPMA. PPMA proposes pre-training with 'human-removed' data and synthetic data for privacy preservation, where M2M is simply replacing synthetic data with mesh-extracted data from Kinetics. 
Moreover, since off-the-shelf, pre-trained models are used to segment, inpaint, and recover the meshes from Kinetics in M2M, I feel that limits the novelty contributed by this work.\", \"The experiments in Section 4.3 and onwards are not sufficiently supported/are not convincing. Table 3 explores gender bias by investigating model accuracy on samples where women are performing male-biased actions and vice-versa. Firstly, the difference between gendered and non-gendered meshes are not described. Comparisons with other privacy-preserving methods would further support whether M2M is truly superior than previous methods at gender de-biasing, as opposed to just comparing with the standard baseline of VideoMAE on real data. Moreover, a general comment is that the extracted meshes are very unnatural and \\\"stick out\\\" so to speak when they replace the real humans in the data. It may be much easier for the model to focus/learn better action representations since these meshes are very clearly visible in the videos, as opposed to real humans which look natural and blend better into the surrounding environment/visual stimuli. This comment ties into Table 3, as the improved performance over the baseline could come from these unusual meshes in the video as opposed to the claimed privacy-preserved attributes learned by the SMPLy Private model - another reason why comparing with other privacy-preserving methods would be beneficial.\", \"Section 4.4 is an interesting observation, but does not seem to actually provide any benefit. Firstly, the authors note \\\"we demonstrate that our model [...] learns representations quicker in earlier stages because humans are consistently\", \"depicted as meshes\\\" which I describe as a shortcoming in my point above. Furthermore, the benefit of learning representations faster is null if the best-performing epoch is what ends up being used for all experiments anyway. Are any of these early epochs where M2M outperforms the baseline used? If not, then the fact that VideoMAE eventually catches up by the end of training leads me to believe this observation is not significant.\", \"Section 4.5 and Table 4 is also a product of following PPMA. In PPMA, they first show that using NH Kinetics is best for Stage 1 training since it equips MAE to understand contextual action information (background and objects). They then show that NH Kinetics+Synthetic is best for Stage 2, since NH Kinetics continues to provide contextual alignment while the synthetic data provides temporal action information. This progression of results makes logical sense. However, Table 4 in this paper simply shows that M2M is better than NH Kinetics for Stage 2 training, which is obvious since NH Kinetics doesn't have any humans performing any actions. It would make more sense to compare M2M to NH Kinetics+Synthetic from PPMA to show that M2M leads to better downstream performance while also closing the realism gap against a model trained on real Kinetics data.\", \"In summary, I believe only Table 1 provides meaningful and significant results. I do not believe Table 1 alone is enough for acceptance, as the rest of the quantitative results in this paper lack proper support and/or explanation. The writing is not very clear, but I do think the general idea of human meshes for privacy-preservation is interesting and valid, despite the lack of novelty regarding how the meshes are extracted and used in this work. 
The construction of K-Nexus through distinct action classes and showing significant improvements when using this sophisticated subset I thought was novel and applicable to future work, providing some strength to this work.\", \"[1] Dave, I. R., Chen, C., & Shah, M. (2022). Spact: Self-supervised privacy preservation for action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20164-20173).\"], \"questions\": \"Most of my questions regarding this work are listed in the weaknesses section. I am open to improving my score if the authors are able to address my concerns listed above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Question 9\", \"comment\": \"# Reply to Question 9 #\\n\\nWhile specific ablations have not been done on this point, based on prior works like SynAPT and PPMA, we know that there is a higher scene-object bias from the left-side of our Table 1, starting at UCF101, which reduces to lower object-scene bias datasets like UAV-Human (see Figure one in the SynAPT paper: \\u201cHow Transferable are Video Representations Based on Synthetic Data?\\u201d). We are going based on this. If this is not substantial enough evidence, we acknowledge that and relent this point, but we would appreciate your understanding nonetheless (note, we do well on higher scene-object datasets downstream, but struggle a little with UAV-Human relative to PPMA showcasing that SMPLy Private is indeed better for higher scene-object bias tasks). \\n\\n____\\n\\n*We again thank you for your invaluable feedback and hope to have clarified your concerns well enough for a potential reconsideration of our paper's current score. We know how time-consuming this process can be so we've tried to be proactive in our responses while appreciating your time. Thanks again!*\"}", "{\"comment\": \"## Response to Point 3\\nFirstly, I believe the quote: \\n\\n> \\\"[...] the distinction between gendered and non-gendered meshes is valid and crucial. These distinctions are outlined in our methodology, where we explain that gendered meshes are instantiated based on gender labels, while gender-neutral meshes lack any specific gender characteristics.\\\" \\n\\nis incorrect, as I am not able to find a description of gendered vs. non-gendered meshes in the entire paper, let alone the methodology section. Moreover, I am not able to find the description regarding these meshes from the quote:\\n\\n> \\\"We will address this by including these details in the Appendix in the final version of the paper. These additions will include differences in mesh characteristics, classes that exhibit gender bias, and a manual count we conducted to substantiate this (we have now included the gender splits in the revised supplementary material for your reference).\\\"\\n\\nAgain I suggest highlighting changes in the updated pdf to see what was absent in the original draft and to find the changes easier.\\n\\n> \\\"However, we also emphasize that our work uniquely focuses on exploring the potential of meshes for gender bias mitigation\\u2014a novel frontier not explicitly addressed by prior methods. Including benchmarks like PPMA or other privacy-preserving approaches that do not address gender bias would dilute the emphasis on this key aspect. 
To our knowledge, no existing privacy-preserving data augmentation framework addresses gender bias in tandem with privacy considerations.\\\"\\n\\nIf the first sentence is true, wouldn't M2M outperform previous methods on gender-biased action recognition when compared against each other? That is exactly what I am suggesting in Point 3. \\n\\nRegarding the last paragraph, I believe Table 3 in the main paper is measuring action recognition performance on samples where women are performing male-dominated actions and vice-versa. The idea is that a higher action recognition performance indirectly implies gender-bias mitigation, as the model is focusing solely on the action irrespective of who is performing it (i.e., a woman playing football). My point is that the increased accuracy reported for SMPLy Private could have less to do with gender-bias mitigation, but rather that meshes stick-out more than regular humans. I understand that human meshes could play a part in gender-bias mitigation, leading to higher action recognition in tandem with my \\\"stick-out\\\" conjecture, but Table 3 does not sufficiently explore how much each of these points contribute to the reported numbers. \\n\\n## Response to Point 3.1\\n\\n> \\\"This insight highlights the efficiency of using M2M-augmented data in resource-constrained environments where computational efficiency is crucial. Notably, these findings suggest that SMPLy Private-trained models are inherently better suited for early-stage deployment. Although the final epochs achieve real-data baseline performance, the shorter training curve for SMPLy Private models reveals an underexplored advantage, paving the way for further research in low-resource optimization and early deployment strategies.\\\"\\n\\nThis is in direct contradiction to your first response to reviewer C3CH. \\n\\n> \\\"we extended training by an additional 50 epochs. Under these slightly more resource-intensive conditions, our model surpassed VideoMAE with real data by approximately 1.1%. We appreciate the reviewer\\u2019s suggestion, which motivated this extended experiment.\\\"\\n\\nJust for clairty's sake, you extend both M2M AND VideoMAE by 50 epcohs? So you train both methods for 250 epochs and that is when you see your method outperform VideoMAE by $1.1\\\\\\\\%$? If so, this still does not address my earlier point that faster learning in the early stages of training is irrelevant if the best-performing epoch is at the end of training, especially considering I do not find the resource-constrained argument sufficiently convincing.\"}", "{\"title\": \"Rebuttal to C3CH: Part 3/4\", \"comment\": \"# Addressing Weakness 3 #\\n\\nThrough our novel approach, we aim to demonstrate that representations of entities (in our case, humans) can be effectively learned by introducing a mesh into the pretraining dataset, which helps prevent the leakage of identity information. We appreciate the reviewer's thoughtfulness in pointing us toward the VISPR dataset. However, since our study focuses specifically on human privacy, we limit our evaluation to the VISPR1 subset, as it aligns more closely with the objective of our paper: **preserving human privacy**. 
The results presented in our work strongly support the idea that robust representations can be learned using a self-supervised learning approach, even when the finer details of the objects (in this case, humans) are not fully visible or disclosed.\\n\\nIt would be fascinating to extend our findings to include a more customizable family of meshes that could address additional forms of personal information, such as the cases outlined in the VISPR dataset, in future work. We acknowledge this as a potential limitation and will include a brief discussion of this point in the limitations section of the revised/final version of our paper.\"}", "{\"title\": \"Rebuttal to vWLp: Part 4/4\", \"comment\": \"*We sincerely apologize if our response caused any inconvenience, as we understand it may be somewhat cumbersome to navigate this page and review our responses to other reviewers due to overlapping points. Once again, we would like to emphasize that devising a novel pretraining or SSL method was not our primary objective whatsoever. For this reason, we believe MAE-type training for videos is sufficient (the standard pre-training method for video data), and we hope the table of SSL MAE-encoder ablation included in our response to reviewer `1aVG` addresses your concerns somewhat satisfactorily. We kindly ask, if it is not too much trouble, that you reconsider your evaluation of our work in light of the updates and responses provided to both your concerns and those of other reviewers. We are deeply grateful for your time and thoughtful feedback, which have significantly contributed to strengthening our paper. Thank you for your consideration.*\\n\\n**EDIT / P.S.** -- Here are some examples of contrastive SSL video methods that pale in comparison to MAE-type methods [a, b]. Also, refer to the Kinetics-400 leaderboard and see that most SSL methods are populated by MAE / VideoMAE type pretraining schemes [c].\\n\\n[a] Qian, R., Meng, T., Gong, B., Yang, M.-H., Wang, H., Belongie, S., & Cui, Y. (2021). Spatiotemporal Contrastive Video Representation Learning. arXiv. \\n\\n[b] Wang, J., Bertasius, G., Tran, D., & Torresani, L. (2022). Long-Short Temporal Contrastive Learning of Video Transformers. arXiv.\\n\\n[c] Papers with Code Leaderboard on Kinetics-400 is populated with MAE ViT-based SSL-encoder frameworks: https://paperswithcode.com/sota/action-classification-on-kinetics-400\"}", "{\"comment\": \"## Response to Point 3.2\\n\\nFrom my understanding, the alignment stage means you take a model pre-trained in a self-supervised manner, and fine-tune it on the same dataset in a supervised manner. Thus, the model learned generally important features from self-supervision and direct action information from supervised training with ground truth labels. My point is that in Table 4, the authors are claiming that M2M Kinetics is better for alignment since it performs better and closes the realism gap, however they are comparing against NH-Kinetics. If there is no human performing the action during alignment, what exactly is the model being aligned with? It is probably using the background or surrounding objects it has erroneously been aligned with to perform action recognition. This is why I suggested using NH-Kinetics+Synthetic in the alignment stage for your baseline in Table 4, as that would better prove that M2M is better for alignment and closing the realism gap while maintaining privacy. Note that this is still not the same as the PPMA row in Table 1. 
\\n\\nOverall, I think very few to none of my comments were sufficiently addressed. Thus, I retain my score for now as I await the author's response.\"}", "{\"comment\": \"# Reply to W4 #\\n\\nJust to be clear, the datasets used, including Diving48, were so that we could effectively compare our works against current best methods in this regime: PPMA and SynAPT.\"}", "{\"title\": \"Reviewer Reply to Point 3 and 3.1\", \"comment\": \"The authors note:\\n\\n> \\\"[..] you would like for us to compare action recognition performance given gendered classes with other methods. We conducted those experiments but chose not to include them in the paper as we wanted to show more of the efficacy of the meshes. Our experiment was more self-serving in that if the appropriate meshes are chosen, the performance improves with a middle-ground/balanced approach with the neutral/gender-agnostic mesh\\\"\\n\\nIs the paper's main contribution not about privacy preservation? I would think experiments that further exhibit that your proposed method decreases gender or race bias as opposed to previous methods would be integral to the paper. It is great that your experiments with previous methods seemed to not perform as well, but I think further investigation into how your method mitigates gender bias in general should be preferred over investigating which meshes improve action recognition performance. I acknowledge that the authors ran some experiments where your method improved over baselines by $10\\\\\\\\%$ on average, but similar to my feelings on many other points, more detail and time would be needed to incorporate these experiments into the story of the paper (especially since very little detail is given about this $10\\\\\\\\%$ improvement). I believe the experiments addressing the \\\"stick out\\\" hypothesis are great and I appreciate that the authors addressed that, but again it will require a major change to the story/overall work to incorporate that experiment and discuss its impacts.\\n\\nRegarding Point 3.1, I still feel the same concern is present. If you train for a really long time, you eventually improve over VideoMAE by a marginal amount, which is alright. But the whole section is about early representation learning, and the claim about resource-constrained settings is not supported well enough and still contradicts the discussions with Reviewer C3CH regarding computational complexity of the proposed pipeline. Moving this entire section into the Appendix and replacing it with an entirely new VISPR section that was only considered during the rebuttal period is too major of a change in my opinion - it will be a good addition to the paper but will require a resubmission.\"}", "{\"title\": \"Rebuttal to C3CH: Part 4/4\", \"comment\": \"# Addressing Weakness 4 #\\n\\nWe respectfully feel that this is an apples to oranges comparison that has been raised, however, we hope that you, the reviewer, are satisfied with our response as we explore your feedback nonetheless. Firstly, the focus of our paper was to benchmark against prior literature (e.g., PPMA [1] and SynAPT [2]) that employed synthetic methods, and to demonstrate not only that we outperform these methods, but also that our approach closely approximates real-data performance\\u2014surpassing it in certain cases with K-NEXUS (thus closing the realism gap). The issue you raise is more of a model-swap concern. 
For example, we ran TimeSformer-L and narrowed the realism gap on Diving48 to a delta of 5.6% (calculated as 81% - 75.4%), achieving 75.4% accuracy. This performance is significantly better than what PPMA [1] and SynAPT [2] achieved with TimeSformer and their other models on Diving48. Notably, all our work employs self-supervised learning without using ImageNet-initialized weights, unlike the original TimeSformer paper, which relies on supervised training. We pretrain the model from scratch using M2M Kinetics to achieve these results; this distinction is further elaborated on later in this rebuttal. We can expand on these additional experiments with other off-the-shelf SOTA architectures (we\u2019ll consult the Papers with Code leaderboards) in the camera-ready version. However, respectfully, we would **really prefer not to** due to the immense computational cost. For context, this single experiment required ~100+ GPU hours just for this rebuttal. Instead, if you want to observe the influence of using different MAE pretraining methods with different ViT backbones, please refer to our response to reviewer `1aVG`.\n\nThis starts to underscore the potential of our M2M framework as a universal dataset privacy augmentation method for preserving human privacy. Importantly, we are not applying meshes to downstream datasets like Diving48 but only to Kinetics (forming M2M Kinetics as the pretraining dataset), after which we evaluate classification through fine-tuning and linear probing on downstream datasets, like Diving48, that do not require anonymization. Moreover, Kinetics likely lacks the fast-moving, complex motions present in Diving48 (e.g., the bodies are twisting and turning), which might explain its performance drop. \n\nIt\u2019s worth noting that while we conducted this additional experiment to address your feedback, the models you reference from the Papers with Code leaderboard are inherently supervised. Our paper, in contrast, focuses on SSL pretraining scenarios. Furthermore, TimeSformer-L is pretrained on large-scale labeled datasets like ImageNet before being fine-tuned on video datasets like Kinetics-400. Under a strict/stringent definition of privacy preservation, ImageNet itself may contain identifiable human features, and fine-tuning on Kinetics-400 further exposes the model to non-anonymized human data. We had to strip this away from the model and train TimeSformer-L from scratch on M2M Kinetics using our pipeline, which was a significant effort and partially accounts for the high computational costs (~100 GPU hours). We did not perform similar experiments with Video-FocalNet (the top-performing model with ~91% accuracy on the leaderboard) due to the exhaustive effort this would entail, which was infeasible within the rebuttal timeline.\n\nWe hope you, the reviewer, appreciate our extensive efforts and consider leniency in potentially re-evaluating our submission. While this experiment was not central to our paper\u2019s primary focus (it is more of an adjacent contribution), we acknowledge its relevance and thank you for prompting this exploration!\n\n[1] Zhong, H., Mishra, S., Kim, D., Jin, S., Panda, R., Kuehne, H., Karlinsky, L., Saligrama, V., Oliva, A., & Feris, R. (2024). Learning human action recognition representations without real humans. 
Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS '23), 2839.\\n\\n[2] Kim, Y., Mishra, S., Jin, S., Panda, R., Kuehne, H., Karlinsky, L., Saligrama, V., Saenko, K., Oliva, A., & Feris, R. (2024). How transferable are video representations based on synthetic data? Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22), 2588.\\n\\n--------------\\n\\n*Thank you once again for your valuable feedback. We are committed to incorporating these revisions fully in the final/camera-ready version of the paper. We hope that, with these improvements and our thoughtful responses, you may consider raising your score, as we have aimed to address all the key concerns raised.*\"}", "{\"title\": \"Response to W1\", \"comment\": \"My main concern was that the proposed method addresses an impractical problem where, instead of sharing secured (anonymized) videos, the method requires sharing raw videos with a computationally heavy anonymization process, which is often infeasible from the client side. This defeats the purpose of privacy preservation and was beyond the scope of the rebuttal.\\n\\nThe authors did not provide a comparison with prior anonymization methods such as UNet in the rebuttal. However, this was not strictly necessary, as the current method is significantly (~100x) more computationally demanding than those approaches.\"}", "{\"summary\": \"This paper introduces a new privacy-preserving data augmentation framework, Mask2Mesh. It uses off-the-shelf models such as Mask R-CNN with ResNet-101 to extract human masks and OSX for mesh recovery. Before that, this work designed a new K-Nexus algorithm based on K-means to further select 150 classes from the Kinetics-400 dataset to reduce the class bias. Experiments based on the VideoMAE demonstrate its effectiveness in certain situations.\", \"soundness\": \"2\", \"presentation\": \"1. The writing in this work is organized unusually. For example, in the Introduction, after outlining two paths to solving the problem, the content jumps directly to the contributions without an explanation of the specific methods.\\n\\n2. The figures are unclear. For instance, Figure 2 uses arrows to directly present the workflow, which is straightforward. However, it would be more informative to label which model is used for each step, especially since most of them are off-the-shelf models.\", \"contribution\": \"2\", \"strengths\": \"1. Most designs appear technically correct. This paper is easy to understand and practical to follow.\\n\\n2. This work is well-motivated, addressing issues such as identifiable individuals, gender bias, etc.\\n\\n3. The results are credible, supported by the code provided in the Suppl., which lays a foundation for future research.\", \"weaknesses\": \"Dataset:\\n1. Why was it necessary to further select a subset from Kinetics-400? If the goal was to increase the differences between the training data categories, I don\\u2019t think this is reasonable for action recognition tasks. Category ambiguity can be mitigated by adjusting the model or the training process. However, improving performance by removing similar actions does not seem justified.\\n\\n2. The Limitation at the end of the paper discusses the absence of temporal considerations in segmentation, which is understandable. However, in #Line178, it is mentioned that only one random frame is selected per video to represent the corresponding class. Is this truly appropriate? 
Intuitively, distinguishing between \\\"standing up\\\" and \\\"sitting down\\\" seems difficult using just a single frame.\\n\\n3. The selection of 150 categories from the original 400 is presented as a way to reduce category bias. It would be more convincing to explicitly show which categories were chosen and highlight which categories were prone to confusion. This would make the method more persuasive.\", \"technical\": \"1. Firstly, it is important to acknowledge that this paper\\u2019s perspective on information safety is commendable. However, the technical contributions are quite limited, as most of the models used are off-the-shelf. It seems that the only technical innovation is the K-NEXUS algorithm, but its performance appears to fall significantly short compared to random selection (as shown in Table 6).\\n\\n2. As mentioned in #Lines862-869, there are occlusion-based issues in the data construction process, which is common for SMPL. However, manually checking only five videos per class seems highly inadequate and lacks rigor.\", \"experiments\": \"1. Since the K-NEXUS algorithm is presented as the main contribution of this work (#Line100), shouldn't the experiment comparison for \\\"Ours\\\" be \\\"SMPLy Priv. w/o K-NEXUS vs. SMPLy Priv.\\\" rather than \\\"SMPLy Priv. vs. SMPLy Priv. w/ K-NEXUS\\\"? Moreover, ablation studies should also based on the model involving K-NEXUS.\\n\\n2. There are no test results for Kinetics-400 or Kinetics-150.\\n\\n3. As illustrated in Appendix C, this work defines the categories selected by K-NEXUS as coarse-grained actions, while the remaining 250 classes are considered fine-grained actions. This definition lacks rigor and is not supported by examples. In the action recognition field, datasets like Kinetics-400 and UCF-101 are commonly referred to as coarse-grained datasets, while fine-grained datasets, such as FineGym, feature hierarchical annotations and subtle differences in both visual content and semantic labels.\", \"questions\": \"Please refer to the Weaknesses. I'm willing to raise my score if my concerns are well addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Reply to Weakness 1:** Thank you for the clarification here. These numbers seem much more reasonable and better than existing methods. I would not expect a perfect score here, and your explanation is reasonable.\\n\\n**Reply to Weakness 2 and Question 4** That makes the most sense. Thank you for clarifying. As reviewer `1a4G` has suggested, a comparison between the K-NEXUS classes and the previously proposed classes would help strengthen your argument.\\n\\n**Reply to Weakness 3** I do think this change makes sense, given this is poised as a privacy-preserving action recognition work.\\n\\n**Reply to Weakness 5 and Question 5** Thank you for including this, it is an important limitation, and could be a call to improve video-based detection/segmentation models. It would be interesting to see how the SAM2 [1] segmentation model performs in maintaining the detections, though not crucial to your work.\\n\\n**Reply to Weakness 6** This was just a single suggestion for a potential contribution that makes sense to me, I do not want you to include it just to address this concern. The main point is that the technical contribution is a bit weak as is, noted by the other reviewers as well. 
More comprehensive evaluation related to K-NEXUS design choices, outputs, and justification of claims about the benefits of selecting the subset (dataset W.1, reviewer `1a4G`) would help.\\n\\n**Reply to Question 3** I apologize for this comment then, it appears I had misread it the first time through due to it just saying \\\"LLaVA image encoder\\\". Thank you for clarifying this, the text is a bit more clear now. Since both the image and text are embedded, how are these embeddings combined? Added? Concatenated?\\n\\n**Reply to Question 7 and 8** The choice of using neutral meshes is most appropriate, thanks for making this clarification. However, then a fair comparison with the previous K150 split and yours using neutral meshes only is warranted. \\n\\n> \\\"The most commonly used mesh superimposed by default is the neutral mesh on K-150, so this is the mesh we take forward for the VISPR1 evaluation. Without K-NEXUS it was the male mesh that most commonly occurred and when applied to VISPR1 that is why the cMAP scores using SMPLy Private without K-NEXUS was approx. 5% higher.\\\"\\n\\nThis claim is not well justified. This just means that the classes chosen during K-NEXUS contains less visually identifiable humans, resulting in neutral meshes. It does not make intuitive sense that the classes chosen by K-NEXUS happens to activate neutral meshes more commonly than without. Regardless, this also does not make sense for VISPR evaluation, which does not depend at all on classes chosen for video pretraining. It is an independent image-based evaluation that should only depend on the choice of mesh. There is no point in comparing Kinetics pretraining sets on VISPR. It should be labelled by the choice of mesh, as the mesh is what is being evaluated. This experiment was asked to evaluate the privacy-preservation capabilities of your model, not your subset selection algorithm.\\n\\n**Reply to Question 9** I agree that your method _may_ reduce this scene/object related biases, I just don't find the current rationale convincing. Nonetheless, this is not crucial to address, and your provided justifiation is reasonable. I do find your experiment with the color of the mesh (from reply to point 3, reviewer `8VCT`) interesting, this is good to include in the paper. Conclusions drawn from this would provide additional insight into replacing humans with 3D meshes, better supporting your contributions.\"}", "{\"title\": \"Rebuttal to vWLp: Part 3/4\", \"comment\": \"# Addressing Question 7 and Question 8 #\\n\\nThe algorithms built on SMPL-X utilize all three mesh types (male, female, and neutral) \\u2013 you can find more details on the differences between these meshes at the SMPL-X paper reference below [6]. Our mesh rendering strategies are designed to assign the appropriate mesh type for a more accurate representation: male or female meshes are used when the person's features in the video are descriptive enough, while a neutral mesh is applied when insufficient information is available. In analyzing action classification datasets, we observed that male meshes were activated more frequently for certain classes, while female meshes predominated in others. This led us to conduct this study to highlight that our models effectively capture dynamics across both genders. 
Our mesh approach preserves vital gender information without compromising performance, regardless of the mesh type.\n\nThe definitions of \"male-biased\" and \"female-biased\" classes in Table 3 are based on prior analyses of action recognition datasets. Certain actions, such as weightlifting or football, are statistically more likely to feature male participants, whereas others, like ballet or yoga, tend to feature women more prominently. However, these categorizations were informed by dataset annotations during our manual qualitative review (**see Figure 8**). To ensure balanced evaluation, we selected an approximately equal number of male- and female-biased classes (34 to 36), reviewing dataset metadata and consulting prior research. A list of these classes is now included in a supplementary document for transparency.\n\nWhile analyzing subclass performance across all classes could provide broader insights, this experiment focuses on assessing whether gender-neutral meshes effectively mitigate biases specifically in male- and female-biased classes. This targeted approach isolates the impact of demographic features on model performance, ensuring clearer insights into bias mitigation. Including all classes might dilute these findings, as biases are most evident within specific subsets of data.\n\nOur evaluation on VISPR further supports this analysis (refer to our discussion with reviewer `C3CH`, specifically Part 2). We\u2019ve updated the supplementary materials in which you can now find the gender splits for this experiment. We appreciate this feedback and will extend the discussion accordingly in the revised version of our paper.\n\n[6] Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A. A., Tzionas, D., & Black, M. J. (2019). Expressive Body Capture: 3D Hands, Face, and Body from a Single Image. arXiv.\n\n______\n\n# Addressing Question 9 #\n\nThe reference to mitigating background and scene-object biases stems from the fact that SMPLy Private deliberately replaces real human appearances with SMPL-X meshes, which inherently decontextualize human actions from specific environmental or object-related cues (this is also furthered by K-NEXUS; please see our discussion with reviewer `1aVG` on this, specifically Parts 1, 2, and 5). This substitution reduces the direct association between human actions and surrounding scene features. For example, in standard datasets, certain actions might disproportionately co-occur with specific object types (e.g., \"riding\" often appearing with bicycles or horses), potentially biasing models to associate the action with the object rather than the human dynamics. By using human-only meshes devoid of rich texture or context, our method pushes the model to focus on action dynamics rather than environmental correlations. If you refer to the SynAPT [7] and PPMA [8] papers that discuss this, simply outperforming their methods indicates that we are effectively mitigating scene-object-related bias, especially when K-NEXUS is used. If you would like us to include this discussion in the revised/final version of the paper, we would be more than happy to do so! Thank you for bringing this up. \n\n[7] Zhong, H., Mishra, S., Kim, D., Jin, S., Panda, R., Kuehne, H., Karlinsky, L., Saligrama, V., Oliva, A., & Feris, R. (2024). Learning human action recognition representations without real humans. 
Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS '23), 2839.\\n\\n[8] Kim, Y., Mishra, S., Jin, S., Panda, R., Kuehne, H., Karlinsky, L., Saligrama, V., Saenko, K., Oliva, A., & Feris, R. (2024). How transferable are video representations based on synthetic data? Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22), 2588.\"}", "{\"comment\": \"## Response to Point 1\\n\\nThank you for adding the additional paragraph, however I suggest highlighting all new additions to the main paper and supplementary in some color (like blue). This makes it easier for us reviewers to see what has been added, how it impacted the clarity of the paper compared to the original draft, and keep track of what was and was not present in the original draft. I think the suggested training pipeline figure would be a nice addition to the paper, but I will not demand it as necessary with regards to my review of the paper. As a suggestion, the current state of Figure 2 could be condensed into a smaller module that plays into the entire training pipeline. Thus, Figure 2 would retain the same information it has now, but adds a visualization of the training pipeline as a whole. \\n\\n## Response to Point 1.1\\nAgain I request you highlight all changes in blue for full transparency.\\n\\n## Response to Point 1.2\\nFirstly, let me reiterate that my main two comments in this point was that **(a):** the writing of M2M is not as clear as PPMA (M2M structurally follows the main parts of PPMA, but M2M lacks the same background and context that PPMA provides), and **(b):** Table 1 in M2M is the same as in PPMA but with M2M added. Are there no other baselines that could be added, either that PPMA missed or that have been published since PPMA.\\n\\nThe authors' response firstly points me to Weaknesses 1 and 4 vWlp. As an aside, Weakness 1 of vWlp just directs me to part 2 of C3CH, and while I am fine with addressing common reviewer comments with a single response, this convoluted way of jumping around made it very difficult to follow how you are answering my question directly. Regardless, part 2 of C3CH only partially answers part (b) of my question - C3CH gives some references that provide some standard privacy leakage results and the authors respond with an experiment and that they will add it to the main paper. While somewhat valuable, it does not fully answer my question of if Table 1 truly encapsulates every possible (or at least most of the best) baselines for comparison to SMPLy Private. Weakness 4 of vWlp again redirects me to another reviewers response (reviewer 1a4G, not reviewer 1aVg as the authors refer to) and each of the reviewer's questions are not entirely related (reviewer vWlp ask about different architecture/training setups, reviewer 1a4g asks about technical contribution and reliance on off-the-shelf models, I am asking about comparing against more published privacy-preserving action recognition methods (synthetic and/or non-synthetic). \\n\\nI am then pointed to Weakness 1, 2, and 4 of C3CH. Weakness 1 discusses computational complexity which has nothing to do with **(a)** or **(b)**. Weakness 2 was already indirectly referred to in vWlp and is redundant. Weakness 4 is completely irrelevant to both **(a)** and **(b)**. Comparing to other synthetic methods is referenced, but not to the detail I am asking in **(b)**. On top of all of this confusion, the most important part is that **(a)** is not addressed whatsoever. 
\\n\\n## Response to Point 2\\n\\nThe author's quote \\\"[...] our work introduces significant innovations that extend beyond a mere replacement of synthetic data with mesh-extracted data. Unlike PPMA, which leverages generic synthetic data, SMPLy Private utilizes SMPL-X meshes, offering a structured and anatomically accurate representation of human motion and posture.\\\" directly contradicts itself. I agree that using meshes provides benefits for privacy-preserving action recognition (as I mention in the strengths section), but I am saying that while combining off-the-shelf methods to achieve this is interesting, this alone is not enough to motivate sufficient novelty. I do acknowledge the novelty of K-NEXUS and suggest finding ways to better incorporate similar ideas in the rest of M2M's pipeline.\"}", "{\"title\": \"Reviewer Reply to Point 2\", \"comment\": \"Again, I understand that the meshes serve to anonymize private attributes and also improves the learned representations. My original point is that this idea alone (and the propose pipeline) is not a strong enough for acceptance in my opinion. Usually limited novelty is not a sufficient, single reason to reject a paper, as extensive analyses, strong improvements over the baseline, and/or insightful experiments can also serve a benefit to the community. However, as I mentioned in my previous response, I don't believe this paper possesses any of those aspects (see Point 3).\"}", "{\"title\": \"Rebuttal to 8VCT: Part 1/3\", \"comment\": \"# Addressing Point 1 #\\n\\nWe appreciate the reviewer\\u2019s feedback and agree that more explicit connections between the components in the methodology section and their use in training could enhance the clarity of the paper. We now have explicitly outlined how M2M is utilized during both self-supervised pre training and label alignment (**see additional paragraph as per your suggestion at the end of Section 3.3**). \\nEssentially, during VideoMAE pretraining, M2M-augmented videos replace real human video data entirely. The superimposed 3D meshes are fed into the masked autoencoder to learn spatiotemporal representations while maintaining privacy. Then, during alignment (supervised pretraining), the M2M-augmented dataset is used to fine-tune the VideoMAE encoder with action recognition labels. This ensures alignment between the learned representations and the downstream task categories. \\n\\nWe can also include a clear visual representation of the training pipeline. A diagram will explicitly link each M2M component (masking, inpainting, mesh recovery) to its role in either pretraining or alignment, with arrows indicating how the outputs are used across different stages. Let us know if this is required for the final camera-ready version of the paper and we would be more than happy to add it then.\\n\\n--------\\n\\n# Addressing Sub-point 1.1 #\\n\\nYour interpretation of the table was correct; there was a typo/error in the caption. Row 1 should correspond to SAM segmentation, Row 2 to OSX mesh recovery, and Row 3 to E$^2$VGFI inpainting. We have clarified this in the revised version of the paper, updating both the caption and the surrounding text.\\n\\n--------\\n\\n# Addressing Sub-point 1.2 #\\nPlease refer to our responses to Weaknesses 1 and 4 from `vWLp` and Weaknesses 1, 2, and 4 from `C3CH`. The combined responses to these reviewers' comments and the identified weaknesses should sufficiently address your concerns. 
Specifically, we evaluate SMPLY Private on VISPR1 to ensure comparability with methods presented in the Spact paper. Additionally, we now report the lowest cMAP score of all methods. If deemed satisfactory by you and the other reviewers, we will include these findings and integrate them into our methodology in the final camera-ready version of the work.\\n\\n---------\\n\\n# Addressing Point 2 #\\n\\nWhile it is true that SMPLy Private's methodology shares similarities with PPMA in its use of pre-training on altered data for privacy preservation, our work introduces significant innovations that extend beyond a mere replacement of synthetic data with mesh-extracted data. Unlike PPMA, which leverages generic synthetic data, SMPLy Private utilizes SMPL-X meshes, offering a structured and anatomically accurate representation of human motion and posture. This precision allows us to preserve fine-grained motion dynamics essential for action recognition while ensuring robust anonymization, a balance that synthetic data can often fail to achieve. Additionally, by leveraging 3D meshes, we tackle inherent biases present in datasets, such as those related to race or gender, demonstrating bias mitigation that PPMA does not explicitly address. We also further do this with the new additional VISPR experiments (see Part 2 of our response to reviewer `C3CH`), distinct from PPMA. \\n\\nRegarding the use of off-the-shelf pre-trained models for segmentation, inpainting, and mesh recovery, this choice ensures that the pipeline is efficient and reproducible while focusing our contribution on integrating these tools into a cohesive privacy-preserving framework. The novelty lies not in the individual components but in their synergistic application to anonymize and reconstruct actionable features in videos without compromising privacy, we suggest a data augmentation technique that is easy, reproducible, and intuitive for anyone to pick-up -- that is our main novel contribution (alongside K-NEXUS). By evaluating our approach on diverse downstream datasets and demonstrating superior action recognition performance with minimal privacy leakage via VISPR, SMPLy Private sets a practical and impactful precedent in the domain of privacy-preserving action recognition. **We highly encourage you to read the rest of our discussions with the other reviewers to further understand the potential and contribution of our work -- the novelty lies in the combined application, not the disjoint technical methods**.\"}", "{\"title\": \"Replies to Points 1, 1.1, and 1.2.\", \"comment\": \"# Reply to Point 1 and 1.1 #\\nThanks for the suggestion! Moving forward, all revised sections will be highlighted in blue to clearly distinguish them from the original submission. We have now made this change, please see the updated paper. \\n_____\\n# Reply to Point 1.2 #\\nLet us apologize for the confusion. We now understand what you are asking and hope our response is more precise this time. For (a), we now know that we must refine the methodology as it is not as clearly written as PPMA, making it difficult to understand the role of M2M during training. 
To address this, we have now done the following:\\n\\n* The introduction is refined to set the scene of our SMPLy Private framework and M2M augmentation pipeline right before we summarize our contributions / under the hero image (Figure 1).\\u00a0\\n\\n* We have added a section (now section 3.1) explicitly comparing M2M and PPMA, noting where M2M improves and explaining how the processes differ in sufficient detail. We have avoided assuming prior knowledge of PPMA.\\n\\n* Toward the section of Section 3, in subsection 3.4, we have written a paragraph titled \\u201cPutting it All Together\\u201d that should connect the dots for the reader who has, at that juncture, almost completed reading the entire section.\\u00a0\\n\\n* We have revised the text to explain how M2M ties into Table 1, explicitly noting that M2M is used for both pretraining and alignment, and further hit this point home with a brief note at the start of section 4.\\n\\n* Furthermore, Section 4.1 has some additional discussion that should further remind the reader of what we are doing in our approach and again make it clear.\\u00a0\\n\\nHence, we have provided more context and background clarity in SMPLy Private (all changes are in blue, as requested). We have made these parts more self-contained, so readers don\\u2019t need to refer back to PPMA. We do vehemently agree that your suggestions on this have further made our paper more straightforward to read / more precise; if more is needed, please let us know specifically more about what you are looking for, and we will address/make the changes ASAP before the end of the rebuttal/discussion period.\\u00a0\\n\\nWe point to other reviewers partially for part (b) here because there are no additional works (that use synthetic or otherwise) that have followed PPMA, which is SOTA in this domain of privacy preservation. We beat it on all fronts across all datasets using meshes instead of generating/using entirely new data samples with synthetic video-game-like data as we argue features learned from scenes in such synthetic datasets, while useful, cannot capture various nuances from real-world scenarios as seen in Kinetics, and then also further transferring to downstream. This way, we retain scene and object features from the original dataset in our approach. Instead of curating new objects, subjects, and scenes altogether \\u2013 we superimpose a mesh where the human should be by taking them out. During our literature search, we tried to find comparable works given our objectives of human privacy preservation + SSL pretraining, but, to the best of our knowledge, SynAPT, PPMA, and our work are the most comprehensive. Other works like the one you have pointed to, for instance (i.e., SPAct), only do their evaluations on 2-3 datasets (only one of which overlaps: UCF101). However, after applying their framework, SPAct only achieves ~60% top-1 accuracy on UCF101, which pales compared to PPMA and SMPLy Private (both approx. +25-30% better). Hence, we felt it was appropriate to include more \\u201ccompetitive\\u201d models, such as those listed in SynAPT and PPMA. Implementing and re-running other baselines (outside of SPAct, as we did conduct experiments with their framework \\u2013 we can report those scores if need be in Table 1) across all the datasets we used would be a huge effort in our training scheme for them only to fall far short of both PPMA and SMPLy Private. 
While we do not capture every possible baseline/method in Table 1, we are comparing ourselves to the current best-published baseline (as you point out), PPMA. Again, we apologize for the miscommunication on our part and hope now this clears up your concern.\"}", "{\"title\": \"Rebuttal to vWLp: Part 2/4\", \"comment\": \"# Addressing Question 1 #\\n\\nThis was an oversight on our part and we sincerely apologize. A previous iteration of our work used a different configuration. Since then, our occlusion aware meshing is complemented by using the Segment Anything Model (SAM) [5]. So yes, it is one model that masks various objects including humans to avoid various occlusions. **This has been updated in our paper**. \\n\\n[5] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., Doll\\u00e1r, P., & Girshick, R. (2023). Segment Anything. arXiv.\\n\\n_______\\n\\n# Addressing Question 3 #\\n\\nPlease refer to the updated Section 3.1. We encode both labels (as text) and image frames, and we apologize for any earlier confusion. Using only the text encoder would not be feasible for K-NEXUS, as it would lack the necessary functionality. Clustering identical class labels or text and expecting K-NEXUS to select 150 classes from such a homogenous clustered space would be ineffective. This approach would lack diversity; for example, repeated labels across all classes would lead to redundant clusters (multiplying the number of text labels per class by the number of samples in that class). \\n\\nTo address this, we use image-text pairs to create a shared representation by projecting and aligning both features into the same space. This approach ensures greater diversity, enabling K-NEXUS to function effectively. The image frames are crucial as they introduce differences visually, which then translate into semantic distinctions. We apologize for any lack of clarity in the original explanation and have corrected this in the **updated paper**, addressing any typographical issues as well.\\n\\n________\\n\\n# Addressing Question 4 #\\n\\nThis is a similar concern brought up by reviewer `1aVG`. **Please see Parts 1, 2, and 5 of that rebuttal for a full understanding on K-NEXUS. Furthermore, we have now included the K-NEXUS splits in the supplementary material.** \\n\\n_________\\n\\n# Addressing Question 5 #\\n\\nPlease see Figure 1 (e.g., \\u201cclean and jerk\\u201d, \\u201czumba\\u201d) and Figure 2 for multi-human examples. If you require additional examples, we would be happy to include them in the supplementary material for the final/revised version of the paper.\\n\\n_________\\n\\n# Addressing Question 6 #\\n\\nSMPLy Private is what we call our entire end-to-end pipeline (segmentation + mesh recovery + inpainting + VideoMAE pre-training + alignment + downstream evaluation), and M2M is the actual data augmentation method that creates the meshed dataset (i.e., M2M Kinetics). **We have now made this distinction in the updated version of the paper now for more clarity right before Section 4.1**.\"}", "{\"comment\": \"**K-NEXUS:** Yes, even more technical/qualitative insight would be useful here. I get that your selected subset achieves better performance, but looking into it, the previous subsets were randomly selected. It is no surprise that your method is better than random. In order for this selection to be a valuable contribution, I believe more insight is necessary. 
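Concretely, the class-selection idea described in the responses above can be pictured as clustering per-class image-text embeddings and keeping one representative per cluster. The sketch below is a minimal illustration under stated assumptions, not the actual K-NEXUS implementation: it presumes a single joint embedding per class has already been computed (e.g., CLIP/LLaVA-style image features for sampled frames fused with the text label and averaged), and it uses plain K-means to pick 150 mutually dissimilar classes so that near-duplicate classes collapse into one representative.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_diverse_classes(class_embeddings, k=150, seed=0):
    """Pick k mutually dissimilar classes from per-class joint embeddings.

    `class_embeddings` maps class name -> one embedding vector; how those
    vectors are produced (encoder choice, frame sampling, image-text fusion)
    is an assumption here, not the paper's exact recipe.
    """
    names = list(class_embeddings.keys())
    X = np.stack([class_embeddings[n] for n in names]).astype(np.float64)
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # normalize for cosine-like geometry

    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)

    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue  # skip the rare empty cluster
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        selected.append(names[members[np.argmin(dists)]])  # class nearest the centroid
    return selected
```

Under this reading, the resulting split depends directly on the embedding and fusion choices, which is why an ablation of text-only versus image-only versus joint features would be informative alongside the released class lists.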
Figure 4 in the appendix is a solid start, but maybe a full embedding plot of all 150 classes would be more convincing than comparisons between two random classes. I still believe an ablation on this selection would better support claims here (visual vs. text vs. text + visual). Also, the original motivation for selecting a Kinetics subset in SynAPT was to balance it out with the size of their pretraining dataset for a more fair comparison. I feel that this class selection makes for an unfair comparison with prior methods.\\n\\n**Meshes:** I do see that your mesh method does still outperform prior methods on the same subset, but the performance difference is not very strong. More clear benefits of utilizing the meshes over prior methods of privacy preservation, if not a direct performance increase, would better support this method. I recognize that there are attempts at this, but right now the benefits seem minor, computation may be the deciding factor between this and prior work.\\n\\n**Overall:** This problem is motivating and the solution makes intuitive sense. However, the contributions of this work appear \\\"matter-of-fact\\\" and lack robust justification/exploration. The technical novelty is slim without comprehensive analysis of each component. While many of my concerns have been addressed, not everything from myself and other reviewers have been. It is wonderful to see the amount of effort put in by these authors during the rebuttal and the improvement of the work itself over the course of the rebuttal period, but I still believe that more work needs to be done to convince me that this paper is ready for acceptance. I recommend that the authors spend more time writing a concrete, coherent story to better motivate the contributions, differentiating this work from prior work, and thoroughly analyzing each component of the contributions in the context of a coherent story. I am updating my score to a 5 to reflect my current understanding of the improved work.\"}", "{\"title\": \"Response to W2\", \"comment\": \"My concern was that the method appears to assume it inherently resolves privacy issues without providing any empirical evidence. The authors do not provide any results on the VISPR1 split; instead, they show results on a subset that considers only 1/7 of the privacy attributes from the original split. The results are inconclusive, and since this is the only way to evaluate whether the method preserves privacy, it should be studied thoroughly, following prior work.\"}", "{\"title\": \"Response to W4\", \"comment\": \"My concern was not about nitpicking the numbers; however, the claimed improvement could be misleading due to the poorly trained baseline method. I understand that the baselines and the Diving48 dataset is not computationally insignificant. I would suggest removing the results on the Diving48 dataset in a future version, as they are currently misleading and could lead to poor benchmarking practices in the community.\"}", "{\"title\": \"Reply to Reviewer 1a4G\", \"comment\": \"# Reply to Dataset W1 #\\nWhile we understand that may be one way to substantiate our claim, not only would it be infeasible to conduct an experiment like that given the time constraint of this discussion phase but it is clear that K-400 may perform better than K-150 purely due to more training samples in any case. But yes, this was to save on computational cost but also stay consistent with prior works like PPMA and SynAPT which we have cited multiple times in our study. 
We have now added an additional footnote clarifying this (see footnote 3). \n______\n# Reply to Dataset W2 #\nThat is unfortunate to hear. We do believe we addressed your concern to the best of our ability based on our understanding of this point. Your elaboration would be much appreciated. Randomly sampling a frame from the mid-quartile of a short Kinetics video captures a representative frame of the particular action class more often than not. So our sampling approach was appropriate, and there was a difference of < 0.1% between it and an entropy-based smart sampling approach. We have detailed this in the paper (revised Section 3.2 and footnote 1). \n______\n# Reply to Dataset W3 #\nWe already included the K-NEXUS splits (the included 150 classes and excluded 250 classes) in the supplementary material zip as .txt files. Comparing our 150 classes directly against the PPMA classes does not make complete sense: as we have shown in Table 1, when we use PPMA\u2019s 150 classes, our model is not as high-performing as when we use the K-NEXUS-selected 150 classes (difference of around +2% on both FT and LP). \n______\n# Reply to Technical W1 #\nThank you, we respect your opinion. We would again like to clarify that K-NEXUS was not built for extracting \"finer\"-grained classes from fine-grained classes (hence why it suffers in that set-up). We just wanted to show its efficacy as a method that is able to select only appropriate coarse-grained (or rather \"macro-level\") classes, eliminating redundant or highly correlated ones. This is not a weakness, and we sincerely hope this has clarified your concern. \n______\n# Reply to Experiments W2 #\nSorry if we were not clear enough. Our reported results are indeed for when the K-150 and K-400 datasets serve as downstream tasks. K-150 does not have a leaderboard of benchmarks like K-400. But in the case of K-400, our approach cracks the Top 50 on the Papers with Code leaderboard. However, we do not entirely understand the value of this experiment, as it just shows that our set-up can learn Kinetics features with meshes reasonably well, which is already shown in Table 1 and Section 4.4. Again, we can include this in the final version if need be. Thank you. \n______\n# Reply to Experiments W3 #\nThank you for your thoughtful feedback and for highlighting the potential ambiguity in our terminology for \u201ccoarse-grained\u201d and \u201cfine-grained\u201d actions. We understand the importance of aligning our terminology with established conventions to ensure accessibility and clarity for the broader community.\n\nIn light of your suggestion, we propose revisiting these terms to better reflect the distinct characteristics of the clusters. For example, we could adopt terms like \u201cmacro-actions\u201d and \u201cmicro-actions,\u201d or explicitly label the clusters based on their defining features (e.g., \u201cbroad activity classes\u201d and \u201cspecific interaction types\u201d). 
If this addresses your concern here appropriately let us know and we will incorporate these revisions into the final manuscript and ensure the terminology aligns with the paper\\u2019s context and widely understood conventions.\\n_________\\n*Thank you again for your valuable input, which helps refine and enhance the clarity of our contributions.*\"}", "{\"title\": \"First Notification to Reviewers: Thank you for all the constructive feedback, our work is much stronger now because of it!\", \"comment\": \"Dear Reviewers,\\n\\nWe hope this message finds you well.\\n\\nWith a little under a week left for the rebuttal/discussion period, we wanted to ensure that our responses and the additional experiments we've shown sufficiently address your concerns and feedback. We would greatly appreciate any further feedback or confirmation that our rebuttals are satisfactory/have been acknowledged. Your insights are invaluable to us, and we are eager to finalize our submission with your guidance.\\n\\nThank you for your time and consideration.\\n\\nBest,\\n\\nAuthors of Submission #8122\"}", "{\"title\": \"Reply to Weakness 5 and 6 + Questions 3, 5, 7, and 8\", \"comment\": \"# Reply to Weakness 5 and Question 5 #\\n\\nThank you for clarifying your concern regarding the potential impact of missed detections on the final video. You are correct that missed detections during human removal may occasionally lead to unnatural or jittery movements, particularly when residual artifacts persist across frames. We have observed that such issues are rare due to the robustness of the inpainting and segmentation steps, but when they do occur, they may slightly disrupt temporal smoothness in specific regions of the video. While these artifacts are unlikely to affect downstream performance significantly (as the model learns to generalize across noise during pretraining), we acknowledge that they could be more noticeable in fine-grained, motion-sensitive tasks. To address this, we will include an explicit discussion of this potential limitation in the camera-ready paper, along with examples in the appendix to illustrate how such artifacts manifest and their implications for model performance. We appreciate your insightful feedback, which helps us refine both our limitations section and presentation of the paper. We hope this is sufficiently addressing this particular concern of yours.\\n\\nFurthermore, we have updated the supplementary materials with an example (see under the folder \\u201cfailed-example\\u201d), this also backs up the validity of your concern in Weakness 5. Again, we will address this in our limitations section of the for the final camera-ready version of the paper. \\n\\n____\\n\\n# Reply to Weakness 6 #\\n\\nYou are correct in pointing out that the more innovative aspect of our paper lies in K-NEXUS, while the other elements represent a creative combination of existing methods. The goal of our paper was not to introduce a novel SSL method (or any other type of novel method) entirely distinct from MAE and using mesh-augmentation for privacy preservation on video data. However, if this is considered a weakness, we respect and acknowledge your perspective. It could indeed be intriguing to explore or develop a method that surpasses existing off-the-shelf solutions in learning mesh representations (something that is uniquely suited to learning mesh forms). This is a valuable suggestion, and we could certainly consider addressing it in a \\\"Future Work\\\" subsection within our conclusion. 
Would that be sufficient enough to address this concern? Thank you for your feedback on this. \\n\\n_____\\n\\n# Reply to Question 3 # \\n\\nWe really appreciate and thank you for acknowledging that this point can rest, and when we did solely use the text encodings in our earlier experiments, the splits were quite random and not consistent. Hence we chose to disregard it altogether. Again, no worries, if need be, we can make this point briefly in the final camera-ready version of the paper. \\n\\nAs mentioned in our response, it was something we needed to clarify further. In our original paper, we assumed that the reader might interpret \\u201clabel\\u201d as the \\u201ctext label\\u201d opposed to the numerical one. This is the only change we made so that it is clear that in our original and only approach on this within the K-NEXUS algorithm was to encode the image-text pairs in tandem. We did not change our approach in this rebuttal period on K-NEXUS. We hope this clarifies, thank you. \\n\\n_____\\n\\n# Reply to Question 7 and Question 8 #\\n\\nAgain, we apologize for the lack of clarity in our previous response. Yes you are right, we did choose it based on the method as the SMPL-X meshes do adapt to the human based on identifying their gender. Once we saw this, we looked at the flipped and neutral cases too just by changing the gender parameter of the mesh. To make this process transparent, appended the documentation of the criteria used during this manual review to the revised paper to an additional Appendix D. The implications of this study was to show that we can maintain best performance using neutral meshes and that is actually what we take forward in Table 1 as well. We have now added this in the revised version of the paper (also see footnote 4). \\n\\nIn the quote you bring up, we were just clarifying that the SMPL-X fitting just works that way. However, for VISPR1 we show that after K-NEXUS used on Kinetics, we then apply mesh fitting. The most commonly used mesh superimposed by default is the neutral mesh on K-150, so this is the mesh we take forward for the VISPR1 evaluation. Without K-NEXUS it was the male mesh that most commonly occurred and when applied to VISPR1 that is why the cMAP scores using SMPLy Private without K-NEXUS was approx. 5% higher. Hopefully this clarifies everything now and addresses this concern appropriately.\"}", "{\"title\": \"Rebuttal Response (Part 1/2)\", \"comment\": \"## Addressing Weakness 1:\\nIt is helpful that you were able to run experiments with the VISPR1 protocol. However, I have some concerns about the evaluation. What was your method? Did you replace humans in the images with meshes, then train a classifier on these images? I am very surprised at these numbers, it is a drastic decrease compared to previous results. My major concern is that using an untrained, randomly initialized classifier achieves $\\\\approx$ 25% on VISPR1 due to its binary classification paradigm (label weights aren't balanced, accounting for <50% expected value). I would like some clear rationale for how your method is \\\"fooling\\\" the classifier, causing it to predict the incorrect attributes. The goal of this dataset is to reduce predictions to random chance (~25%), not specificially to cause the classifier to choose the wrong prediction. I believe this would imply that the classifier is able to learn the correct attributes, but choose the wrong one. 
This may not be 100% true, but the authors need to provide clear rationale for how these numbers were achieved.\\n\\n## Addressing Weakness 2:\\nThank you for the response, I have a better idea of the contribution intention now. However, it still unclear to me whether your performance improvements stem from the selected subset or from your M2M augmentation. It would provide more clarity to see a comparison using a subset from a previous work, but with your augmentation. It still seems possible that a previous method using your K150 subset would outperform that of your M2M method. More insight into the differences between the selected subsets would be helpful as well. \\n\\n## Addressing Weakness 3:\\nUnfortunately, these comments to other reviewer concerns do not address my concerns. In one, you state that \\\"the pipeline is designed for preprocessing datasets on robust computational systems, which are more than capable of handling the computational load\\\", while in the other, you state that \\\"(t)his insight highlights the efficiency of using M2M-augmented data in resource-constrained environments where computational efficiency is crucial\\\". These directly contradict each other, and I am left unable to see the benefit of the faster training from this perspective. Nonetheless, it is still an interesting insight to the learning process with your synthetic data, so it is good to include, but as is, it does not seem to be a valuable benefit.\\n\\n## Addressing Weakness 4:\\nIt is great to see further experiments with different backbones and model sizes, that is helpful. As a friendly note, even if MAE pretraining is the standard and achieves the best performance, it would still strengthen the rigor of this analysis to consider alternate forms of models and pretraining (contrastive for example), but the further analysis did address this concern by expanding upon the model choices.\\n\\n## Addressing Weakness 5:\\nI apologize if this was not clear, but my concern was more about how missing a detection within a video segment. I see your analysis of specific failure cases, this is good to show, but I am curious as to how this affects the final video. I would expect some unnatural, jittery movement in this case. It may not happen often/not be a major detriment, but I would like the authors to address this possibility and its potential implications. \\n\\n## Addressing Weakness 6:\\nThe response does not better emphasize the technical contribution. I get that the point is to address privacy-preserving action recognition, but my comment is about the technical contribution. The proposed contributions appear disjoint, not building off each other in a real sense, other than just both slightly improving performance from different perspectives.\\n\\n\\n## Addressing Question 1:\\nMakes sense, thank you for clarifying.\\n\\n## Addressing Question 3:\\nI understand the limitations of only using the text encoder, I just think it would achieve a similar split with much less effort. It would strengthen your contribution if you can show improved performance using your visual information K-NEXUS over the splits given by just encoding the text labels. This may just be a pedantic point, as it makes intuitive sense that your method would be better, so I can let this point rest. However, I am very concerned with your response to this. My impression when first reading this section is that you only used the visual features. 
There wasn't a problem with this, though using a combination of visual and text seems fine too. My concern is that the updated section now clearly indicates that both text and visual features are utilized. While I do not have the previous version of the paper saved and may have just misread, this change is a major difference. Can the authors please clarify what the original method was, and if the core method is changed, proper analysis of the difference between the two version should be provided.\"}", "{\"title\": \"Rebuttal to 1aVG: Part 5/5\", \"comment\": \"# Addressing Experiments Weakness 3 #\\n\\nWe appreciate the reviewer pointing out the need for greater clarity in defining \\\"coarse-grained\\\" and \\\"fine-grained\\\" actions in the context of our work. While our terminology may differ slightly from conventional definitions in the action recognition field, our categorization is driven by the unique challenges of privacy-preserving human action understanding. Below, we provide a thorough response to address these concerns:\\n\\n**1. Definition context and justification**: In our work, we classify the K-NEXUS-selected classes as \\\"coarse-grained\\\" because these actions involve distinct and well-separated categories that are less dependent on subtle pose nuances or fine contextual cues. Examples include actions like \\\"walking,\\\" \\\"clapping,\\\" or \\\"jumping.\\\" These categories are chosen to test whether the proposed SMPLy Private framework effectively learns high-level action semantics, even in the absence of scene or background context. On the other hand, the \\\"fine-grained\\\" classes involve more subtle distinctions, such as variations in hand positioning or object interactions, which pose challenges even for fully-supervised models trained on real videos. This is why we deem the remaining 250 classes as \\\"fine-grained.\\\"\\n\\n**2. Differences from traditional definitions**: The reviewer correctly notes that in the broader action recognition field, datasets like Kinetics-400 or UCF-101 are often labeled as \\\"coarse-grained,\\\" while datasets like FineGym are considered \\\"fine-grained\\\" due to their hierarchical structure and subtle distinctions. We acknowledge this difference in usage and recognize that our framework's coarse- vs. fine-grained split operates differently. Specifically: Our focus is not on hierarchical annotations or subtle interclass differences across datasets, but on the model's ability to handle categories that inherently vary in their reliance on pose-level distinctions versus scene or temporal information. The selected K-NEXUS classes typically reflect distinct, more easily separable actions that primarily depend on human pose, making them coarse-grained in the context of privacy-preserving meshes.\\n\\n**3. Supporting examples**: We have now addressed the lack of examples mentioned in the review (**see the revised Appendix B.1, Figure 4(b)**), by showing separability of various clusters. Furthermore, this shows that our definition of \\\"fine-grained\\\" vs. \\\"coarse-grained\\\" has to do with feature-based separability of classes (visually and semantically via K-NEXUS).\\n\\nIn summary, our terminology is motivated by the specific challenges in privacy-preserving learning and differs from traditional dataset distinctions. 
However, we recognize the need for greater rigor of discussion on such definitions and more concrete examples, which have now been included in the updated version of the manuscript (**see end of Appendix B.1**) to clarify our definitions and methodology. We hope this refinement will ensure our work is more clearly established amid conventions in action recognition and is to the reviewers liking. Thank you for pushing us to make this more clear, it definitely has strengthened our paper!\\n\\n________\\n\\n# Addressing Presentation Weakness 1 #\\n\\nThanks for bringing this up, we can most definitely make the writing more coherent on this in the final version! \\n\\n_______\\n\\n# Addressing Presentation Weakness 2 #\\n\\nWe intentionally avoided labeling each part with a specific model or framework because M2M is designed to be a dataset augmentation technique, rather than a pipeline tied to the particular configurations used in this paper. Our updates (refer to discussions with other reviewers) seek to reflect that M2M is more of a versatile solution, which is why we chose to keep this figure adaptable. But if need be, in the final version, we can clarify this in the caption and accompanying text for Figure 2 to eliminate any doubts or confusion. Please let us know if this is your preference and we will rectify it as such immediately.\\n\\n--------------\\n\\n*Thank you once again for your valuable feedback. We are committed to incorporating these revisions fully in the final/camera-ready version of the paper. We hope that, with these improvements and our thoughtful responses, you may consider raising your score, as we have aimed to address all the key concerns raised.*\"}", "{\"title\": \"Rebuttal to 1aVG: Part 3/5\", \"comment\": \"| Self-supervised method | Pretraining dataset (Steps 1 and 2: MAE + Alignment) | Backbone | UCF101 (FT, LP) | HMDB51 (FT, LP) | Diving48 (FT, LP) | IkeaFA (FT, LP) | UAV-Human (FT, LP) | Mean (FT, LP) | Realism Gap (FT, LP) |\\n|-------------------------|-----------------------------------------------------|----------|------------------|------------------|-------------------|-----------------|--------------------|---------------|-----------------------|\\n| Space-time MAE | Kinetics | ViT-S | 91.0 / 89.2 | 71.6 / 68.0 | 64.6 / 19.4 | 70.3 / 56.9 | 32.9 / 13.2 | 66.1 / 49.3 | 0 / 0 |\\n| | | ViT-B | 92.0 / 90.1 | 72.4 / 68.8 | 65.3 / 19.6 | 71.1 / 57.5 | 34.6 / 13.6 | 67.1 / 49.9 | 0 / 0 |\\n| | M2M Kinetics | ViT-S | 91.0 / 88.7 | 70.9 / 67.6 | 64.4 / 19.2 | 69.6 / 56.8 | 33.8 / 14.0 | 65.9 / 49.3 | -0.2 / -0.0 |\\n| | | ViT-B | 91.8 / 89.6 | 71.5 / 68.2 | 65.0 / 19.4 | 70.2 / 57.3 | 34.1 / 14.1 | 66.5 / 49.7 | -0.6 / -0.2 |\\n| VideoMAE **(X)** | Kinetics | ViT-S | 92.4 / 90.5 | 72.7 / 69.0 | 65.6 / 19.7 | 71.4 / 57.8 | 33.4 / 13.4 | 67.1 / 50.1 | 0 / 0 |\\n| | | ViT-B | 93.4 / 91.5 | 73.5 / 69.8 | 66.3 / 19.9 | 72.2 / 58.4 | 34.8 / 13.8 | 68.0 / 50.7 | 0 / 0 |\\n| | M2M Kinetics | ViT-S | 92.4 / 90.1 | 71.9 / 68.6 | 65.4 / 19.5 | 70.7 / 57.7 | 34.3 / 14.2 | 66.9 / 50.0 | -0.2 / -0.1 |\\n| | | ViT-B **(X)** | 93.2 / 90.9 | 72.6 / 69.2 | 66.0 / 19.7 | 71.3 / 58.2 | 34.6 / 14.3 | 67.5 / 50.5 | -0.5 / -0.2 |\\n| VideoMAE v2 | Kinetics | ViT-S | 92.6 / 90.8 | 72.9 / 69.2 | 65.8 / 19.7 | 71.6 / 57.9 | 33.5 / 13.4 | 67.3 / 50.2 | 0 / 0 |\\n| | | ViT-B | 94.3 / 92.4 | 74.2 / 70.5 | 67.0 / 20.1 | 72.9 / 59.0 | 35.1 / 13.9 | 68.7 / 51.2 | 0 / 0 |\\n| | M2M Kinetics | ViT-S | 93.2 / 90.9 | 72.6 / 69.2 | 66.0 / 19.7 | 71.3 / 58.2 | 34.6 / 14.3 | 67.5 / 50.5 | 0.2 / 0.3 
|\\n| | | ViT-B | 94.6 / 92.3 | 73.7 / 70.2 | 67.0 / 20.0 | 72.4 / 59.1 | 35.1 / 14.5 | 68.6 / 51.2 | -0.1 / 0.0 |\\n\\n**(X) indicates the current SSL method and backbone used**\"}", "{\"title\": \"Rebuttal to 8VCT: Part 3/3\", \"comment\": \"# Addressing Sub-point 3.2 #\\n\\nThe critique that comparing M2M to NH Kinetics + Synthetic from PPMA is more apt overlooks the distinct aims of this work. While PPMA\\u2019s synthetic data serves as a generic benchmark, the M2M pipeline specifically addresses realism and to some degree demographic bias without relying on synthetic data. Furthermore, the critique on NH Kinetics lacking humans performing actions is addressed inherently in our methodology\\u2014our meshes are designed to replicate human actions more authentically, bridging the realism gap that PPMA\\u2019s approach does not do entirely. The alignment of our results with the real Kinetics baseline, coupled with bias mitigation, emphasizes the dual contribution of M2M in both performance and ethical considerations. In any case, you are right that we should more distinctly compare M2M with NHK + Synthetic, but that is what is done in Table 1 one anyways (PPMA vs Ours). Let us know if you would like this to be added within the table in Section 4.5 as well. \\n\\n----------\\n\\n*Thank you so much for your valuable feedback and insights, which have significantly contributed to enhancing our paper. We sincerely apologize if our response caused was disorienting, particularly as navigating this page and reviewing overlapping points with other reviewers may be cumbersome and require a lot of effort on your part. However, we do appreciate it a lot! Lastly, we are fully committed to incorporating all necessary revisions in the final camera-ready version of the paper. With these improvements and our thoughtful responses, we kindly ask you to consider reevaluating your score, as we have diligently addressed all key concerns raised. Thank you again for your time and consideration.*\"}", "{\"title\": \"Reply to Weakness 1, 2, and 3 + Question 4\", \"comment\": \"# Reply to Weakness 1 #\\nVery embarrassing on our part and we sincerely apologize!! There happened to be an implementation issue that outputted the final result inappropriately by a factor of 3, so the cMAP scores without and with K-NEXUS are actually 33.9 and 38.1 respectively (we have updated this in the other response to reviewer `C3CH` as well). Note, in the latter case we just use the gendered-mesh which is most occurring after K-NEXUS is applied on Kinetics (e.g., if that is the neutral mesh, that is what is applied on VISPR1). Yes, our approach involves replacing humans in the images with SMPL meshes using our M2M framework. The meshes are then superimposed onto the inpainted backgrounds to create anonymized versions of the VISPR1 dataset. We trained a classifier on the anonymized images to evaluate privacy leakage using the VISPR1 protocol. The reported cMAP scores reflect the classifier's ability to predict attributes from these anonymized images. Our reported score indicates that while the classifier is not entirely \\\"fooled,\\\" it performs only slightly above chance on the anonymized dataset. 
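For concreteness, the leakage metric works roughly as sketched below: a multi-label attribute classifier is trained on the mesh-anonymized images, and the class-wise mean average precision (cMAP) of its predictions is reported. The snippet is only an illustration of the metric — the attribute count, array shapes, and the random stand-in scores are assumptions, not the actual evaluation code.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def cmap(y_true, y_scores):
    """Class-wise mean average precision over privacy attributes (VISPR-style).

    y_true:   (num_images, num_attributes) binary ground-truth labels
    y_scores: (num_images, num_attributes) classifier confidence scores
    """
    aps = []
    for a in range(y_true.shape[1]):
        if y_true[:, a].sum() == 0:          # AP is undefined for attributes absent from the split
            continue
        aps.append(average_precision_score(y_true[:, a], y_scores[:, a]))
    return 100.0 * float(np.mean(aps))

# Hypothetical usage: y_scores would come from the classifier trained on the
# anonymized images; random scores stand in for them here.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, 7))   # 7 illustrative privacy attributes
y_scores = rng.random(size=(500, 7))
print(f"cMAP: {cmap(y_true, y_scores):.1f}")
```

On the anonymized VISPR1 images, the reported cMAP scores of 33.9 and 38.1 therefore sit only modestly above the chance level.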
This suggests that the SMPLy Private method substantially obfuscates the key attributes used for attribute inference while not perfectly reducing the signal to random noise.\", \"we_also_offer_some_rationale_for_our_results\": \"* Despite anonymization, SMPL meshes inherently encode body proportions and approximate pose, which may inadvertently correlate with some privacy-sensitive attributes (e.g., height could correlate with gender in some contexts).\\n\\n* We posit the inpainting process retains the original scene, which might allow the classifier to use non-human features as weak proxies for certain attributes in VISPR1 which are all human-based. For example, scene objects or locations in the dataset may correlate with specific demographics.\\n\\n* A score above chance (but below previous benchmarks) could indicate that the classifier is capturing residual patterns but cannot robustly infer the true attributes. This is supported by the fact that the meshes are designed to be neutral, removing direct cues like skin tone and particular facial features. However, the classifier may rely on remaining weak signals or mislearn spurious correlations, resulting in slightly elevated cMAP scores.\\n\\nIn general, while we know the goal of privacy-preserving methods is to reduce attribute inference to random chance (25%), achieving this requires a more thorough elimination of all residual cues, including indirect ones from background and pose. Our results show that the SMPLy Private method significantly reduces attribute predictability compared to unmodified data and prior benchmarks, but we acknowledge that additional steps, such as further contextual obfuscation and/or enhancing mesh standardization, could further align the scores with the ideal random baseline.\\n_______\\n# Reply to Weakness 2 and Question 4 #\\nIn Table 1, we make this distinction now very clear. SMPLy-Private uses the K-150 sub-set that was curated by the work we built upon (PPMA and SynAPT), where we have noted in this discussion phase that they randomly selected the 150 classes (we now mention this at the start of Section 3.2). Everything in Table 1 uses that set-up of 150 classes excluding the last row where SMPLy Private uses the classes selected by K-NEXUS (currently included in the supplementary zip file). Our boosted performance with K-NEXUS (\\u201cSMPLy Priv. w/ K-NEXUS\\u201d) essentially builds on-top of our boosted performance with the M2M mesh augmentations (\\u201cSMPLy Priv.\\u201d from Table 1 which uses the K-150 sub-set from PPMA/SynAPT). We hope this clarifies your concern entirely now.\", \"this_should_also_answer_question_4\": \"We hope that we have addressed this now -- essentially, it is just the splits used from the SynAPT/PPMA papers.\\n_____\\n# Reply to Weakness 3 #\\nWhile you can refer to our new response to reviewer `8VCT` titled \\u201cReply to 3.1\\u201d, we have come to the conclusion it might be best to shift this experiment (since it is more exploratory) into the Appendix as an additional finding, and shift the VISPR results into the main paper here instead as we believe that you are right in that we should include such results. It is more important, within the context of our paper, to discuss potential data privacy leakage over faster learning in a lower resource setting. Let us know if you are happy with this, thanks!\"}", "{\"title\": \"Reviewer Reply to Point 3.2\", \"comment\": \"This still does not fully address my concern. 
The point of Section $4.5$ is to claim that M2M Kinetics is a better dataset for alignment than synthetic data. The way the authors initially presented the table was to pretrain all baselines on M2M Kinetics then performing alignment on different datasets. The added row does not address this concern, as I was requesting Stage 1 pre-training on M2M Kinetics and Stage 2 alignment on synthetic to properly evaluate M2M as an alignment dataset.\\n\\n**In summary, some of my concerns were addressed, some concerns were partially addressed but would require major changes to the paper, and some concerns are still misunderstood/not addressed. I believe through this rebuttal process that the paper has improved in quality, and with the additional feedback from other reviewers, the authors have plenty to add to the paper for a future resubmission. I will retain my score as it is still the highest among other reviewers, and I believe the paper is still truly marginally under the threshold for acceptance.**\"}", "{\"comment\": \"**Response to Dataset W.1**: I do understand what the purpose of selecting a further 150 classes from the 400 is, what I'm concerned about is whether such a use would really benefit downstream tasks, as the authors say:\\n> \\\"During downstream tasks, the model can fine-tune effectively for similar actions (e.g., \\\"playing a violin\\\") even if pretraining was performed on related actions (e.g., \\\"playing a guitar\\\"), and K-NEXUS effectively handles these redundancies.\\\"\\n\\nHowever, to substantiate this claim, a direct comparison between K400-pretrained and K150-pretrained results on downstream tasks would be more convincing.\\n\\nAdditionally, if this approach follows settings from previous work, I believe it is essential to cite the relevant studies in the appropriate sections of the paper.\\n\\n**Response to Dataset W.2**: The authors seem to have either misunderstood or not directly addressed my concern in this section.\\n\\n\\n**Response to Dataset W.3**: Thank you for providing some examples. However, I believe a complete list of actions selected and excluded by K-NEXUS would be more convincing. Furthermore, since the authors mentioned that prior works have also used K-150, a detailed comparison between the 150 classes selected by K-NEXUS and the previously used 150 classes would strengthen the argument.\\n\\n\\n\\n**Response to Technical W.1**: First, I do not fully agree with the claim that integrating off-the-shelf models can constitute the primary contribution of a high-quality paper, as stated:\\n> \\\"Our primary contribution lies in integrating these components to tackle privacy-preserving action recognition\\\"\\n\\nSecond, the authors themselves seem to acknowledge the limitations of K-NEXUS, particularly when compared to random sampling, which appears to be a critical weakness.\\n\\n\\n**Response to Technical W.2**: Thank you for the additional clarification provided in this section.\\n\\n**Response to Experiments W.1**: I understand the authors\\u2019 intention here. My comment on W.1 was simply a friendly suggestion and did not affect my overall assessment.\\n\\n\\n**Response to Experiments W.2**: Thank you for the results provided. However, it seems my original point was not fully understood. I requested a comparison where K-150 and K-400 are used as downstream task datasets to benchmark against baselines. 
This differs from comparing the results of K-150 to K-400, as provided.\\n\\n\\n**Response to Experiments W.3**: Thank you for the detailed clarification and the added examples in the revised appendix. While I appreciate the effort to contextualize your definitions within the challenges of privacy-preserving learning, I remain concerned about the terminology used for \\u201ccoarse-grained\\u201d and \\u201cfine-grained\\u201d actions and encourage revisiting the terminology to reflect the unique characteristics of the clusters better while aligning more closely with established conventions. This would make the paper\\u2019s contributions more accessible and clearer to the broader community.\\n\\n\\n***Overall**, I think my key concerns have not been fully addressed, and I believe this manuscript requires further optimization and improvement. Thus I will retain my original score.*\"}", "{\"title\": \"Reply to Point 2\", \"comment\": \"# Reply to Point 2 #\\n\\nThat is a fair assessment. We want to clarify our quote. We were saying that the meshes also help better feature learning as we improve performance over synthetic methods like PPMA while deterring biases related to race and gender (the latter of which we attempted to assess). While we cannot argue sheer novelty, we would like to think that we do have a significant and worthy contribution as an applied combination of methods that beat prior works in many ways \\u2013 setting a new benchmark and standard for SSL pre-training with privacy preservation, making the shift from synthetic data to \\u201cSMPLy\\u201d augmented data :). If anything, we refer you back to the updated Section 3.1 of our paper to explain our point on how our framework is, at least, somewhat novel in its approach but perhaps not a novel methodology that has been built from scratch as you and the other reviewers might be expecting/looking for.\"}", "{\"title\": \"Rebuttal Response (Part 2/2)\", \"comment\": \"## Addressing Question 4:\\nIt is helpful to see the K-NEXUS class list, thank you for including those. But I am also interested in the other split you used without K-NEXUS. Is this a split from a previous paper? Is it random? It is specifically chosen to be redundant? Is it just the first 150 classes?\\n\\n## Addressing Question 5:\\nThese are just single frame examples, seeing actual videos (where there may be some missed detections/occlusions) is what I was interested in.\\n\\n## Addressing Question 6:\\nThank you for addressing this.\\n\\n## Addressing Question 7 & 8:\\nI understand what a male and female biased class could mean, but I would like to know how these classes were chosen, and your response is vague. Figure 8 does not show manual qualitative review, and I still do not see any justification other than \\\"manual qualitative review\\\". I think this suggests looking at the gendered meshes chosen by the method, but this isn't exactly made clear. Also, it is still not clear what the implications of this gender study is. The results look like best performance is achieved simply by using the appropriate gendered mesh, even in these biased class scenarios. You then state that \\\"this indicates that 3D meshes help mitigate gender-action bias by offering a gender-agnostic representation\\\", but nowhere do you indicate that a gender-neutral mesh is chosen for your work. If this is the better option for gender bias, then why do you not choose to use them? 
On top of this, I find it very concerning that you state: \\\"Our mesh rendering strategies are designed to assign the appropriate mesh type for a more accurate representation: male or female meshes are used when the person's features in the video are descriptive enough, while a neutral mesh is applied when insufficient information is available.\\\" This defeats a major component of privacy-preservation by explicitly guessing subject gender and expressing this in your mesh representation. This also loops me back to my VISPR question. One of the attributes in VISPR1 is gender, so if you explicitly keep gendered informantion, then how is it that you are able to achieve such low performance? Overall, this aspect needs more clarification and justification for revealing subject gender in \\\"privacy-preserving\\\" action recognition.\\n\\n## Addressing Question 9:\\nThis does not address my concern. Your point in the following quote is important and potentially correct: \\\"our method pushes the model to focus on action dynamics rather than environmental correlations\\\", but I believe it needs more analysis and justification than simply 'we outperform previous methods therefore it is true'. I do agree with this statement: \\\"(f)or example, in standard datasets, certain actions might disproportionately co-occur with specific object types (e.g., \\\"riding\\\" often appearing with bicycles or horses), potentially biasing models to associate the action with the object rather than the human dynamics.\\\" But I fail to see how this does not apply to your method as well. Even though the meshes \\\"stick out\\\", the same actions still disproportionately co-occur with specific object types and are therefore susceptible to the same biases. An explicit justification to your claims about reducing background and scene-object biases is still necessary.\\n\\n\\n## Overall\\nIt is great to see the amount of effort the authors have put in to this rebuttal, but unfortunately, the responses seem to dodge many of the core concerns and make bold claims without proper justification. Therefore, I choose to maintain my rating for now.\"}", "{\"title\": \"Rebuttal to 8VCT: Part 2/3\", \"comment\": \"# Addressing Point 3 #\\n\\nThis concern is quite related to what reviewer `vWLp` raised (**please refer to Part 3 of our response there, titled \\\"Addressing Question 7 and Question 8\\\"**). We strongly encourage you to read that alongside this response. Your concern about the distinction between gendered and non-gendered meshes is valid and crucial. These distinctions are outlined in our methodology, where we explain that gendered meshes are instantiated based on gender labels, while gender-neutral meshes lack any specific gender characteristics. \\n\\nWe acknowledge that this explanation could be more explicit and supported with additional examples and definitions. We will address this by including these details in the Appendix in the final version of the paper. These additions will include differences in mesh characteristics, classes that exhibit gender bias, and a manual count we conducted to substantiate this (**we have now included the gender splits in the revised supplementary material for your reference**).\\n\\nAdditionally, we agree that comparisons with privacy-preserving baselines can provide valuable validation. For this reason, we evaluated our work on VISPR attributes (**again, see Part 2 of our response to reviewer `C3CH`). 
However, we also emphasize that our work uniquely focuses on exploring the potential of meshes for gender bias mitigation\\u2014a novel frontier not explicitly addressed by prior methods. Including benchmarks like PPMA or other privacy-preserving approaches that do not address gender bias would dilute the emphasis on this key aspect. To our knowledge, no existing privacy-preserving data augmentation framework addresses gender bias in tandem with privacy considerations.\\n\\nFinally, the claim that the \\\"stick-out\\\" nature of meshes improves action recognition performance conflates distinct issues even though what you say could be true (and will definitely be a point of analysis to add to our final version of this paper!). The observed improvements in gender bias mitigation stem from the neutralization of demographic cues, not merely enhanced visibility. We demonstrate this by training on data consisting of humans without meshes, where scores reflect these effects (as shown in the table presented in Part 3 of our response to reviewer `1aVG`). The unbiased representation achieved through meshes addresses systemic challenges in demographic-specific tasks, underscoring their value in mitigating gender bias and closing the realism gap.\\n\\n--------\\n\\n# Addressing Sub-point 3.1 #\\n\\nThe accelerated representation learning discussed in Section 4.4, while not directly correlating with final performance metrics, provides a novel perspective on the training dynamics of models on anonymized data. This insight highlights the efficiency of using M2M-augmented data in resource-constrained environments where computational efficiency is crucial. Notably, these findings suggest that SMPLy Private-trained models are inherently better suited for early-stage deployment. Although the final epochs achieve real-data baseline performance, the shorter training curve for SMPLy Private models reveals an underexplored advantage, paving the way for further research in low-resource optimization and early deployment strategies.\\n\\nTo enhance the rigor of our approach based on your suggestion, **we extended training by an additional 50 epochs. Under these slightly more resource-intensive conditions, our model surpassed VideoMAE with real data by approximately 1.1%. We appreciate the reviewer\\u2019s suggestion, which motivated this extended experiment**. The new training graphs reflecting these results will be included in the final version of the paper. Additionally, we address this point in our response to reviewer `1aVG` (**see the table in Part 3 of that rebuttal**) as it is worth noting that ViT-S using VideoMAE v2 also outperforms our setup when trained on real data. Therefore, our framework demonstrates significant utility in low-resource settings while offering modest gains when more computational resources and training time are available.\"}", "{\"title\": \"Rebuttal to 1aVG: Part 2/5\", \"comment\": \"# Addressing Dataset Weakness 3 #\\n\\nWe appreciate the reviewer's interest in understanding how the K-NEXUS algorithm selects and discards action classes to reduce category bias and how confusion-prone categories are identified. Indeed, K-NEXUS was designed to create a subset of categories that ensures balanced representation across various action types while minimizing semantic overlaps that could confuse models. In other words, K-NEXUS is supposed to identify and exclude categories with overlapping visual cues or semantically broad definitions. 
Moreover, classes prone to ambiguity or that exhibit high overlap within the embedding space (e.g., \\\"answering questions\\\" vs. \\\"news anchoring\\\" or \\\"biking through snow\\\" vs. \\\"riding mountain bike\\\") were systematically excluded by the algorithm. Similarly, visually distinguishable and contextually unique categories (e.g., \\\"archery,\\\" \\\"yoga\\\") were retained. From the 150 remaining classes, actions like \\\"archery,\\\" \\\"canoeing or kayaking,\\\" and \\\"tap dancing\\\" represent distinct activities with minimal overlap. While more ambiguous categories like \\\"reading newspaper\\\" and \\\"cleaning floor\\\" often have minimal visual distinctions and were excluded for this reason.\\n\\nWe found that examples such as \\u201ceating watermelon\\\" and \\\"eating ice cream\\\" are visually similar in terms of gesture and pose, leading to their clusters being close/overlapping as shown in **Figure 4(a) in Appendix B.1** that we revised based on your productive suggestion. Also, \\\"walking the dog\\\" and \\\"waiting in line\\\" often share a common standing pose, further adding to \\u201cconfusion\\u201d as you, the reviewer, called it. By removing these \\u201cconfusion-prone\\u201d categories, the model benefits from a more refined and less noisy dataset, enhancing its ability to generalize hence **leading to better downstream performance while saving on computation**. Finally, to address the reviewer's valid point on demonstrating the impact of this selection process, we\\u2019ve appended the classes K-NEXUS selected to construct Kinetics-150 dataset on which we applied M2M augmentation for pretraining (**see supp. material**) . \\n\\n______________\\n\\n# Addressing Technical Weakness 1 #\\n\\nWe appreciate your recognition of our paper's perspective on information safety. While it's true that off-the-shelf models were used in the segmentation, inpainting, and mesh generation pipeline, our primary contribution lies in integrating these components to tackle privacy-preserving action recognition\\u2014a novel and underexplored problem. Specifically, our transformation of real-world human data into SMPL-X meshes while preserving action fidelity and addressing biases represents a significant advance. Although we conducted experiments with various SSL MAE frameworks, we deliberately chose not to emphasize them in the paper, as that wasn't our primary focus. However, based on the reviewers' feedback, we now recognize that these insights are valuable. The **updated table below (see Part 3 of rebuttal)** demonstrates that SMPLy Private, while not built from scratch, synthesizes off-the-shelf methods effectively, with its robust M2M augmentation serving as a \\\"one-size-fits-all\\\" approach for SSL pretraining (note we only consider MAE methods, because this is typically gold standard in video pretraining [1, 2, 3, 4]). It outperforms other SSL baselines focused on privacy preservation and even can surpass or get close to some supervised encoders (see discussion with reviewer `C3CH`) if we discard the ViT backbone.\\n\\nRegarding K-NEXUS, it outperforms random sampling for coarse-grained classes derived from the original Kinetics-400 dataset. However, for fine-grained classes (a modified subset of the original), its performance drops due to inconsistencies in cluster assignments. These inconsistencies arise from multiple samples within each class, where varying embeddings cause the algorithm to assign them to different clusters. 
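To make the selection idea concrete, a much-simplified stand-in for this kind of embedding-based class selection is sketched below (this is not the actual K-NEXUS implementation; the embedding width, clip counts, and helper names are illustrative assumptions). Class-level centroids are clustered, and one representative class is kept per cluster, so heavily overlapping classes collapse into a single retained category.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_classes(embeddings, labels, k):
    """Keep k well-separated classes based on per-clip embeddings.

    Each class is summarised by its centroid; centroids are grouped into k
    clusters and the class closest to each cluster centre is retained, so
    near-duplicate classes compete for the same slot and only one survives.
    """
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(centroids)
    kept = []
    for j in range(k):
        members = np.where(km.labels_ == j)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(centroids[members] - km.cluster_centers_[j], axis=1)
        kept.append(int(classes[members[np.argmin(dists)]]))
    return kept

# Hypothetical usage: 400 classes, 10 clips each, 768-d features, keep 150.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4000, 768))
lab = np.repeat(np.arange(400), 10)
print(len(select_classes(emb, lab, k=150)))
```

If a class's per-clip embeddings are widely spread, its centroid is a poor summary and the assignment above becomes unstable for that class.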
This highlights K-NEXUS 's inability to further refine fine-grained classes, as reflected in the results (this, by definition, is something K-NEXUS is not meant for). We hope this explanation clarifies the issue and that the subsequent points in this rebuttal provide further insight. Thanks for your thoughtful feedback!\\n\\n[1] Papers with Code Leaderboard on Kinetics-400 is populated with MAE ViT-based SSL-encoder frameworks: https://paperswithcode.com/sota/action-classification-on-kinetics-400 \\n\\n[2] Tong, Z., Song, Y., Wang, J., & Wang, L. (2022). VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. Advances in Neural Information Processing Systems, 35, 4093\\u20134104\\n\\n[3] Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., & Qiao, Y. (2023). VideoMAE V2: Scaling video masked autoencoders with dual masking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14549\\u201314560\\n\\n[4] Feichtenhofer, C., Fan, H., Li, Y., & He, K. (2022). Masked autoencoders as spatiotemporal learners. arXiv.\"}", "{\"summary\": \"The authors present a privacy-preserving augmentation framework that replaces human subjects with realistic 3D meshes. The method effectively mitigates bias and privacy concerns related to the human subjects without reducing the utility performance. The paper provides insight to how this replacement affects pretraining and finetuning stages. The authors additional propose a class sampling procedure to guarantee diverse classes for efficient training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The motivation and high-level ideas for this work is very strong. It makes a lot of sense that replacing diverse humans with a standardized subject would alleviate many privacy and bias concerns.\\n2. Results seem fairly strong and back up major claims about utility. \\n3. The point that using this augmented data can increase learning speed is moderately interesting.\", \"weaknesses\": \"1. There is no measure of privacy-utility tradeoff in Table 1. While replacing the human entirely is a strong form of privacy-preservation, it is not sufficient to just assume perfect privacy in this scenario. For example, a model may still be able to perceive gender through analyzing motion patterns, not just appearance. It would be helpful to see a quantitative tradeoff that can be used to compare between methods.\\n2. The K-NEXUS clustering algorithm is comprehensive and potentially interesting, but feels like an unnecessary contribution. I understand why it is effective, but a fair comparison would be using the same subset of classes as previous works. If the goal is not a direct comparison with prior work subsets, then why use a subset to begin with? Computation? Additional details to clarify these points would be useful.\\n3. One of the major claims is 'faster learning speed' in Section 4.4, which sounds useful, but this may be offset by preprocessing computation time.\\n4. Only one architecture/training setup (VideoMAE ViT-B) is shown, so it's hard to tell if the same findings would apply to more architecture types/training styles. Additional experiments would better support the paper claims.\\n5. It seems like the human detection occurs framewise (Section 3.2, Line 211). This may result in some unnatural videos if some frames miss a detection. 
It may be more natural to choose a video segmentation model to propagate masks instead of relying on many separate detections. The tennis GIF in the supplementary did look pretty good, but it would be more convincing to see video results from more complicated scenes.\\n6. The overall technical contribution feels weak here. The dataset construction is interesting, but as far as I can tell, there is no unique method proposed that goes with it, just basic MAE pretraining/fine-tuning.\\n7. On the minor side of things, the ICLR format mandates the table number and caption appearing before the table, not after.\", \"questions\": \"1. In Line 251, how are the segmentation masks for each object acquired, including the human subject? The text references a method specific to the human subjects, but not objects. Is there a separate segmentation model you use, so you would have two masks for each human? Please clarify this.\\n2. Can the authors provide some computation/runtime analysis for the preprocessing steps? How long does it realistically take for a single video, maybe like 10 sec/300 frames?\\n3. For K-NEXUS, since you are already using a LLaVA encoder, why not use the text encoder to encode the class names? In theory, the class videos and text names should have similar representations. It would be interesting to see a comparison between the classes chosen using the visual representations vs. the textual representations.\\n4. What is the difference between SMPLy Priv and SMPLy Priv w/ K-NEXUS? Is without a disjoint set of 150 random classes? So not just the K-NEXUS classes were annotated, but a separate set of classes as well? Please share more information on these classes, there is no list of selected classes anywhere in the paper or supplementary material.\\n5. See W5, is there anything done to handle disjoint human detections in the videos? Additional qualitative videos for multi-human scenarios would be helpful.\\n6. Line 323-324: \\\"with SMPLy Private and the use of our M2M-augmented dataset...\\\" What exactly is SMPLy Private then? I was under the impression that the dataset was the contribution and the model was VideoMAE.\\n7. How do you define 'male-biased' and 'woman-biased' classes in Table 3? How many of each are chosen/what chooses them? Why not just look at subclass performance across all classes? More details here would help justify this experiment. \\n8. Following up with Q6, what is the difference between the male, female, and neutral meshes (Table 3)? This is likely described in a previous paper, but it is an important detail to these findings and more detailed explanations of these is necessary for this paper.\\n9. Line 52: Claim (2) references that the paper explores the potential for mitigating background and scene-object related biases, but I don't see any further explanation for background/scene-object interactions. Is there additional support for this/performance on bias-related benchmarks?\\n\\n### Final Thoughts\\nOverall, I truly believe this paper has a lot of potential, but I would not recommend it for acceptance in its current state. The core ideas are strong, but there are a lot of details that need clarifying. 
A solid technical contribution for better learning with these meshes instead of using basic MAE pretraining would definitely flip my rating for this paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to 3.2\", \"comment\": \"# Reply to 3.2 #\\n\\nWe now understand what you meant, sorry for misinterpreting. We\\u2019ve added the baseline where NH-Kinetics is used only for MAE first and Synthetic is only used for Alignment afterward. We hope this point is now fully addressed. \\n\\n_____\\n*Again, we would like to apologize for the back and forth on this due to our misinterpretation of the initial review by you \\u2013 but we hope now everything is rectified.*\"}", "{\"title\": \"Replies to Point 3 and 3.1\", \"comment\": \"# Reply to Point 3 #\\n\\nWe apologize for the confusion; we updated the paper with the wrong draft. The revised version should have the mesh descriptions in the newly added Appendix D and the gendered class splits into .txt files within the updated supplementary materials. We hope this is sufficient; again, the changes are in blue. \\n\\nWe now also understand that you would like for us to compare action recognition performance given gendered classes with other methods. We conducted those experiments but chose not to include them in the paper as we wanted to show more of the efficacy of the meshes. Our experiment was more self-serving in that if the appropriate meshes are chosen, the performance improves with a middle-ground/balanced approach with the neutral/gender-agnostic mesh. However, we can compare PPMA and SPAct (the two we looked into for this); on average, their performance was up to ~10% worse. We can add this to the final paper if required. \\n\\nFurthermore, on the \\u201cstick out\\u201d point, we are glad you mentioned that again, as we believe it was a worthwhile investigation. The idea during our discussion with you was to change the neutral mesh\\u2019s color (we tried beige and a rainbow gradient) and see that beige had a drop in performance by 0.2 from the white mesh we reported scores on (from 83.1 to 82.9), whereas with a rainbow gradient, the performance increased from 83.1 to 83.5. So, there is some credence to your hypothesis on the meshes sticking out, which is directly tied to the type of color of mesh chosen. We can include these findings (and a few more on male and female meshes, too, with various other colors as well) in the final version of the paper. I hope this sufficiently addresses your concern. Again, thank you for pushing us to make this paper more complete and compelling with these fascinating new perspectives! \\n\\n________\\n\\n# Reply to 3.1 #\\n\\nThe additional 50 epochs were done when pre-training on M2M kinetics. The supervised alignment and downstream evaluation epochs were still fixed at 50 and 30, respectively (so we changed the 200 epochs from SSL pre-training to 250 if you look at Table 5 in the Appendix). Hence, if we train longer, the final/best epoch is at the end of training, as you mentioned. Under these conditions, SMPLy Private outperformed VideoMAE by 1.1%. While early-stage efficiency is valuable in resource-constrained scenarios, this result underscores M2M's ability to achieve superior final performance even under extended training budgets. But we are re-iterating ourselves here again a bit. \\n\\nHowever, faster early-stage learning does not replace final performance as a critical metric. 
However, it is helpful for time-sensitive or low-resource settings where competitive performance must be achieved with fewer training epochs. We propose treating early convergence as a secondary advantage that complements M2M's final solid results. But if this is not convincing, we suggest moving this part of the paper into the appendix as an additional exploratory finding and instead replacing it with our results on VISPR, as we now feel that is a more appropriate experiment for the context of this paper (i.e., ensuring low data leakage across various privacy attributes > faster representation learning in early stages < 100 pre-training epochs). We hope this is a good trade-off for improving the paper in your opinion.\"}", "{\"title\": \"Rebuttal to 1aVG: Part 4/5\", \"comment\": \"# Addressing Technical Weakness 2 #\\n\\nThe issue of occlusion in SMPL reconstructions is well-documented and acknowledged as a limitation of our work. While manually inspecting five videos per class may initially seem limited, this sampling strategy was chosen to ensure feasibility within the scope of our resources while capturing diverse scenarios to identify common challenges. Our pipeline\\u2019s reliance on a robust inpainting and mesh recovery process mitigates most occlusion artifacts, as demonstrated in our qualitative results. \\n\\nTo further validate our findings, we conducted an additional review of 15 more videos per class, resulting in a total of 20 videos per class across 150 classes (20 x 150 = 3,000 videos). Following the M2M augmentation, the revised occlusion rates were 1.6% and 0.6%, corresponding to 48 and 18 affected videos, respectively. This extended qualitative evaluation was rigorous, with an interrater reliability of 100%. Future work could expand on this inspection methodology through larger-scale evaluations. However, our results across multiple datasets indicate that the quality of the mesh superimposition process is sufficient to achieve competitive downstream action recognition performance while preserving privacy of the pretraining dataset. This suggests that the identified occlusions do not significantly compromise the approach's overall utility. Updates reflecting this additional qualitative review and findings have been included in our paper (**see Figure 8 in the revised Appendix B.1**).\\n\\n_______\\n\\n# Addressing Experiments Weakness 1 #\\n\\nWe appreciate the feedback regarding the evaluation of K-NEXUS, but please note that it is not our only main contribution. The experimental comparison structure as \\\"SMPLy Priv. vs. SMPLy Priv. w/ K-NEXUS\\\" was intentionally chosen to demonstrate the incremental benefit of K-NEXUS within the SMPLy Private framework. By directly comparing these two versions, we aimed to isolate and highlight the improvements brought by K-NEXUS in terms of dataset bias reduction and performance enhancement. As discussed previously, we wanted to strike a fair comparison relative to prior works within the sub-field (SSL pre-training on privacy-preserved datasets) like SynAPT and PPMA that do not use strategic sampling methods like K-NEXUS. 
We believe that the performance improvement across all downstream datasets in Table 1 should be sufficient to gauge our other ablations\\u2019 performance would be bolstered if we were to use K-NEXUS.\\n\\n________________\\n\\n# Addressing Experiments Weakness 2 #\\n\\nThis work focuses on the Kinetics-150 subset, curated to reduce bias and ambiguity, while situating our model's performance within the broader Kinetics-400 dataset. Initially, we evaluated our method on Kinetics-400 using M2M across the full dataset, achieving top-1 and top-5 LP accuracy scores of **80.3% and 88.6%** for SMPLy Private pretraining. However, these results are not directly comparable to Kinetics-150 due to the presence of fine-grained classes in Kinetics-400, which K-NEXUS seeks to address. On Kinetics-150, our pipeline achieves top-1 and top-5 LP accuracy of **86.2% and 93.8%**, respectively, demonstrating that our pretraining was effectively tested on relevant in-domain data and the features learned from Kinetics could be potentially used for further downstream action recognition classification tasks. We can add this to our final version of the paper if need be in a small table of results.\"}" ] }
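For reference, the "LP" numbers quoted here and in the tables denote linear probing: the pretrained encoder is frozen and only a linear classifier is trained on its pooled features, with top-1/top-5 accuracy reported. A minimal sketch of that protocol follows; the dummy encoder, feature width, clip shape, class count, and epoch count are illustrative placeholders rather than the evaluation code used in the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

@torch.no_grad()
def topk_accuracy(logits, targets, ks=(1, 5)):
    """Top-k accuracy from logits of shape (batch, num_classes)."""
    _, pred = logits.topk(max(ks), dim=1)            # top-k predicted class indices
    hits = pred.eq(targets.unsqueeze(1))             # (batch, max_k) hit mask
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}

def linear_probe(encoder, feat_dim, num_classes, loader, epochs=30, lr=1e-3):
    """Train only a linear head on top of a frozen encoder (linear probing)."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)                      # freeze the pretrained backbone
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.AdamW(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for clips, labels in loader:
            with torch.no_grad():
                feats = encoder(clips)               # frozen features, no gradients
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head

# Toy wiring with a random stand-in encoder and random clips.
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 8 * 8, 384))   # pretend ViT-S features
clips = torch.randn(64, 3, 16, 8, 8)                                # (B, C, T, H, W)
labels = torch.randint(0, 150, (64,))
loader = DataLoader(TensorDataset(clips, labels), batch_size=16)
head = linear_probe(enc, feat_dim=384, num_classes=150, loader=loader, epochs=1)
print(topk_accuracy(head(enc(clips)), labels))
```

Fine-tuning ("FT") differs only in that the encoder weights are unfrozen and updated together with the classifier.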